Give Me Liberty Or Give Me Death

by Richard A. Easterlin (University of Southern California)

This blog is part G of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. The full article from this blog is “How Beneficent Is the Market? A Look at the Modern History of Mortality.” European Review of Economic History 3, no. 3 (1999): 257-94. https://doi.org/10.1017/S1361491699000131

 

A child is vaccinated, Brazil, 1970.

Patrick Henry’s memorable plea for independence unintentionally also captured the long history of conflict between the free market and public health, evidenced in the current struggle of the United States with the coronavirus. Efforts to contain the virus have centered on measures to forestall transmission of the disease such as stay-at-home orders, social distancing, and avoiding large gatherings, each of which infringes on individual liberty. These measures have given birth to a resistance movement objecting to violations of one’s freedom.

My 1999 article posed the question “How Beneficent is the Market?” The answer, based on “A Look at the Modern History of Mortality” was straightforward: because of the ubiquity of market failure, public intervention was essential to achieve control of major infectious disease. This intervention centered on the creation of a public health system. “The functions of this system have included, in varying degrees, health education, regulation, compulsion, and the financing or direct provision of services.”

Regulation and compulsion, and the consequent infringement of individual liberties, have always been critical building blocks of the public health system. Even before the formal establishment of public health agencies, regulation and compulsion were features of measures aimed at controlling the spread of infectious disease in mid-19th century Britain. The “sanitation revolution” led to the regulation of water supply and sewage disposal and, in time, to the regulation of slum building conditions. As my article notes, there was fierce opposition to these measures:

“The backbone of the opposition was made up of those whose vested interests were threatened: landlords, builders, water companies, proprietors of refuse heaps and dung hills, burial concerns, slaughterhouses, and the like … The opposition appealed to the preservation of civil liberties and sought to debunk the new knowledge cited by the public health advocates …”

The greatest achievement of public health was the eradication of smallpox, the only human disease to have been eliminated from the face of the earth. Smallpox was the scourge of humankind until Edward Jenner’s discovery of a vaccine in 1798. Throughout the 19th and 20th centuries, requirements for smallpox vaccination were fiercely opposed by anti-vaccinationists. In 1959 the World Health Organization embarked on a program to eradicate the disease. Over the ensuing two decades its efforts to persuade governments worldwide to require vaccination of infants were eventually successful, and in 1980 WHO officially declared the disease eradicated. Eventually public health triumphed over liberty. But it took almost two centuries to realize Jenner’s hope that vaccination would annihilate smallpox.

In the face of the coronavirus pandemic the U.S. market-based health care system has demonstrated once again the inability of the market to deal with infectious disease, and the need for forceful public intervention. The current health care system requires that:

 “every player, from insurers to hospitals to the pharmaceutical industry to doctors, be financially self-sustaining, to have a profitable business model. It excels in expensive specialty care. But there’s no return on investment in being positioned for the possibility of a pandemic” (Rosenthal 2020).

Commercial and hospital labs have been slow to respond to the need to develop a test for the virus. Once tests became available, conducting them was handicapped by insufficient supplies: kits, chemical reagents, swabs, masks and other personal protective equipment. In hospitals, ventilators were also in short supply. These deficiencies reflected the lack of profit in meeting these needs and the reluctance of government to compensate for market failure.

At the current time, the halting efforts of federal public health authorities and state and local public officials to impose quarantine and “shelter at home” measures have been seriously handicapped by public protests over the infringement of civil liberties; the protesters are the current-day heirs of the dissidents of the 19th and 20th centuries. States are opening for business well in advance of the guidelines of the Centers for Disease Control and Prevention. The lesson of history regarding such actions is clear: the cost of liberty is sickness and death. But do we learn from history? Sadly, one is put in mind of Warren Buffett’s aphorism: “What we learn from history is that people don’t learn from history.”

 

Reference

Rosenthal, Elisabeth, “A Health System Set Up to Fail”, New York Times, May 8, 2020, p. A29.

 

To contact the author: easterl@usc.edu

Police as ploughmen in the First World War

by Mary Fraser (Associate, The Scottish Centre for Crime & Justice Research, University of Glasgow)

This blog is part of our EHS 2020 Annual Conference Blog Series.

 

Police group portrait, Bury St Edmunds, Suffolk. Available at Wikimedia Commons.

That policemen across Britain were released to plough the fields in the food shortages of 1917 is currently unrecognised, although soldiers, prisoners of war, women and school children have been widely acknowledged as helping agriculture. A national project is seeking to redress the imbalance in our understanding.

In March 1917, Britain faced starvation. The ships that brought around 80% of the population’s grain, mainly from America and Canada, were being sunk in massive numbers by enemy U-boats. Added to this, the harsh and lengthy winter rotted the potato crop in the ground. These factors largely removed two staple items from the diet: bread and potatoes. With food prices soaring, the poor faced starvation.

To overcome this threat, the campaign to change the balance from pasture to arable began in December 1916 (Ernle, 1961). Government took control of farming and demanded a huge increase in home-grown grain and potatoes, so that Britain could become self-sufficient in food.

But the land had been stripped of much of its skilled labour by the attraction of joining the army or navy, so that farmers felt helpless to respond. Equipment also lay idle for lack of maintenance, as mechanics had similarly enlisted or left for better-paid work in the munitions factories. The need to help farmers to produce home-grown food was so great that every avenue was explored.

When the severe winter broke around mid-March, not only were many hundreds of soldiers deployed to farms, but also local authorities were asked to help. One of the first groups to come forward was the police. Many had been skilled farm workers in their previous employment and so were ideal to operate the manual ploughs, which needed skill and strength to turn over heavy soil, some of which had not been ploughed for many years.

A popular police journal of the time reported on ‘Police as Ploughmen’ and gave many of the 18 locations across Britain (Fraser, 2019). Estimates are that between 500 and 600 policemen were released, some for around two months.

For example, Glasgow agreed to the release of 90 policemen while Berwick, Roxburgh and Selkirk agreed to release 40. These two areas were often held up as examples of how other police forces across Britain could help farmers: Glasgow being an urban police force while Berwick, Roxburgh and Selkirk was rural.

To release this number was a considerable contribution by police forces, as many of their young fit policemen had also been recruited into the army, to be partially replaced by part-time older Special Constables.

This help to farmers paid huge dividends. It prevented the food riots seen in other combatant nations, such as Austria-Hungary, Germany, Russia and France (Ziemann, 2014). By the harvest of 1917, the substitution of ploughmen allowed Britain to claim an increase of 1,000,000 acres of arable land, producing over 4,000,000 more tons of wheat, barley, oats and potatoes (Ernle, 1961). Britain was also able to send food to troops in France and Italy, supplementing their failed local harvests.

It is now time that policemen were recognised for their social conscience in helping their local populations. This example of ‘Police as Ploughmen’ shows that, as well as investigations, cautions and arrests, the police in Britain also have a remit to help local people, particularly in times of dire need such as the food crisis of the First World War.

 

References

Ernle, Lord (R. E. Prothero) (1961) English Farming, Past and Present, 6th edition, Heinemann Educational Books Ltd.

Fraser, M (2019) Policing the Home Front, 1914-1918: The control of the British population at war, Routledge.

Ziemann, B (2014) The Cambridge History of the First World War. Volume 2: The State.


 

Mary Fraser

https://writingpolicehistory.blogspot.com 

@drmaryfraser

Unequal access to food during the nutritional transition: evidence from Mediterranean Spain

by Francisco J. Medina-Albaladejo & Salvador Calatayud (Universitat de València).

This article is forthcoming in the Economic History Review.

 

Figure 1 – General pathology ward, Hospital General de Valencia (Spain), 1949. Source: Consejo General de Colegios Médicos de España. Banco de imágenes de la medicina española. Real Academia Nacional de Medicina de España. Available here.

Over the last century, European historiography has debated whether industrialisation brought about an improvement in working class living standards.  Multiple demographic, economic, anthropometric and wellbeing indicators have been examined in this regard, but it was Eric Hobsbawm (1957) who, in the late 1950s, incorporated food consumption patterns into the analysis.

Between the mid-19th century and the first half of the 20th century, the diet of European populations underwent radical changes. Caloric intake increased significantly, and cereals were to a large extent replaced by animal proteins and fat, resulting from a substantial increase in the consumption of meat, milk, eggs and fish. This transformation was referred to by Popkin (1993) as the ‘nutritional transition’.

These dietary changes were driven, inter alia, by the evolution of income levels, which raises the possibility that significant inequalities between different social groups ensued. Dietary inequalities between social groups are a key component in the analysis of inequality and living standards; they directly affect mortality, life expectancy, and morbidity. However, this hypothesis remains unproven, as historians are still searching for adequate sources and methods with which to measure the effects of dietary changes on living standards.

This study contributes to the debate by analysing a relatively untapped source: hospital diets. We have analysed the diet of psychiatric patients and members of staff in the main hospital of the city of Valencia (Spain) between 1852 and 1923. The diet of patients depended on their social status and the amounts they paid for their upkeep. ‘Poor psychiatric patients’ and abandoned children, who paid no fee, were fed according to hospital regulations, whereas ‘well-off psychiatric patients’ paid a daily fee in exchange for a richer and more varied diet. There were also differences among members of staff, with nuns receiving a richer diet than other personnel (launderers, nurses and wet-nurses). We think that our source broadly reflects dietary patterns of the Spanish population and the effect of income levels thereon.

Figure 2 illustrates some of these differences in terms of animal-based caloric intake in each of the groups under study. Three population groups can be clearly distinguished: ‘well-off psychiatric patients’ and nuns, whose diet already presented some of the features of the nutritional transition by the mid-19th century, including fewer cereals and a meat-rich diet, as well as the inclusion of new products such as olive oil, milk, eggs and fish; hospital staff, whose diet was rich in calories, to compensate for their demanding jobs, but still traditional in structure, being largely based on cereals, legumes, meat and wine; and, finally, ‘poor psychiatric patients’ and abandoned children, whose diet was poorer and which, by the 1920s, had barely joined the trends that characterised the nutritional transition.

 

Figure 2. Percentage of animal calories in the daily average diet by population groups in the Hospital General de Valencia, 1852-1923 (%). Source: as per original article.

 

In conclusion, the nutritional transition was not a homogeneous process, affecting all diets at the same time or at the same pace. On the contrary, it was a process marked by social difference, and the progress of dietary change was largely determined by social factors. By the mid-19th century, the diet structure of well-to-do social groups resembled diets more characteristic of the 1930s, while less favoured and intermediate social groups had to wait until the early 20th century before they could incorporate new foodstuffs into their diet. As this sequence clearly indicates, less favoured social groups always lagged behind.

 

References

Medina-Albaladejo, F. J. and Calatayud, S., “Unequal access to food during the nutritional transition: evidence from Mediterranean Spain”, Economic History Review, (forthcoming).

Hobsbawm, E. J., “The British Standard of Living, 1790-1850”, Economic History Review, 2nd ser., X (1957), pp. 46-68.

Popkin B. M., “Nutritional Patterns and Transitions”, Population and Development Review, 19, 1 (1993), pp. 138-157.

Airborne diseases: Tuberculosis in the Union Army

by Javier Birchenall (University of California, Santa Barbara)

This is Part F of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. The full article from this blog was published in Explorations in Economic History and is available here.

1910 advertising postcard for the National Association for the Prevention of Tuberculosis. 

Tuberculosis (TB) is one of the oldest and deadliest diseases. Traces of TB in humans can be found as early as 9,000 years ago, and written accounts date back 3,300 years in India. Untreated, TB’s case-fatality rate is as high as 50 percent. It was a dreaded disease. TB is an airborne disease caused by the bacterium Mycobacterium tuberculosis. Tuberculosis spreads through the air when a person who has an active infection coughs, sneezes, speaks, or sings. Most cases remain latent and do not develop symptoms. Activation of tuberculosis is particularly influenced by undernutrition.

Tuberculosis played a prominent role in the secular mortality decline. Of the 27 years of life expectancy gained in England and Wales between 1871 and 1951, TB accounts for about 40 percent of the improvement, a 12-year gain. Modern medicine, the usual suspect used to explain this mortality decline, could not have been the culprit. As Thomas McKeown famously pointed out, TB mortality started its decline long before the tubercle bacillus was identified and long before an effective treatment was available (Figure 1). McKeown viewed improvements in economic and social conditions, especially improved diets, as the principal factor in combatting tuberculosis. A healthy diet, however, is not the only factor behind nutritional status. Infections, no matter how mild, reduce nutritional status and increase susceptibility to further infection.

Figure 1. Mortality rate from TB.


Source: as per original article

In “Airborne Diseases: Tuberculosis in the Union Army” I studied the determinants of diagnosis, discharge, and mortality from tuberculosis in the past. I examined the medical histories of 25,000 soldiers and veterans in the Union Army using data collected under the direction of Robert Fogel. The Civil War brought together soldiers from many socioeconomic conditions and ecological backgrounds into an environment which was ideal for the spread of this disease. The war also provided a unique setting to examine many of the factors which were likely responsible for the decline in TB mortality. Before enlistment, individuals had differential exposure to harmful dust and fumes. They also faced different disease environments and living conditions. By housing recruits in confined spaces, the war exposed soldiers to a host of waterborne and airborne infections. In the Civil War, disease was far more deadly than battle.

The Union Army data contains detailed medical records and measures of nutritional status. Height at enlistment measures net nutritional experiences at early ages. Weight, needed to measure current nutritional status using the Body Mass Index (BMI), is available for war veterans. My estimates use a hazard model and a variety of controls aligned with existing explanations proposed for the decline in TB prevalence and fatality rates. By how much would the diagnosis of TB have declined if the average Union Army soldier had the height of the current U.S. male population, and if all his relevant infections diagnosed prior to TB were eradicated? Figure 2 presents the contribution of the predictors of TB diagnosis in soldiers who did not engage in battle, and Figure 3 reports the corresponding contributions for soldiers discharged because of TB. Nutritional experiences in early life provided a protective effect against TB. Between 25 and 50 per cent of the predictable decline in tuberculosis could be associated with the modern increase in height. Declines in the risk of waterborne and airborne diseases are as important as the predicted changes in height.
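For readers unfamiliar with hazard models, the sketch below shows the general mechanics of relating time-to-diagnosis to nutritional and infection covariates, using a Cox proportional hazards fit in Python with the lifelines library. The toy data, column names and the two covariates shown (height at enlistment and a prior-infection indicator) are hypothetical stand-ins, not the specification, controls or data used in the paper.

```python
# Illustrative only: a Cox proportional hazards fit in the spirit of the
# hazard model described above. Data, column names and covariates are
# hypothetical; the original study uses far richer controls.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical recruit-level records: months observed, whether TB was
# diagnosed, height at enlistment (cm), and a prior waterborne-infection flag.
df = pd.DataFrame({
    "months_observed":  [14, 30, 22, 36, 9, 27, 18, 33],
    "tb_diagnosed":     [1, 0, 1, 0, 1, 0, 1, 0],
    "height_cm":        [168, 175, 165, 170, 176, 172, 166, 174],
    "prior_waterborne": [1, 0, 0, 1, 1, 0, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="months_observed", event_col="tb_diagnosed")
cph.print_summary()  # hazard ratios for height and prior infection

# A counterfactual in the spirit of the question posed above: relative risk if
# every soldier had modern height and no prior waterborne infection.
modern = df.assign(height_cm=178, prior_waterborne=0)
print(cph.predict_partial_hazard(modern).mean()
      / cph.predict_partial_hazard(df).mean())
```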

 

Figure 2. Contribution of various factors to the decline in TB diagnosis

Source: as per original article

 

Figure 3. Contribution of various factors to the decline in discharges because of TB.

Source: as per original article

My analysis showed that a wartime diagnosis of TB increased the risk of tuberculosis mortality. Because of the chronic nature of the disease, infected soldiers likely developed a latent or persistent infection that re-emerged once resistance failed in old age. Nutritional status provided some protection against mortality. For veterans, height was not as robust a predictor as BMI. If a veteran’s BMI increased from its historical value of 23 to current levels of 27, his mortality risk from tuberculosis would have been reduced by 50 per cent. Overall, the contributions of changes in ‘pure’ diets and of changes in infectious disease exposure were probably equal.

What lessons can be drawn for the current Covid-19 pandemic? Covid-19 is also an airborne disease. Airborne diseases (e.g., influenza, measles, smallpox, and tuberculosis) are difficult to control. In unfamiliar populations, they often wreak havoc. But influenza, measles, smallpox, and tuberculosis are mostly killers from the past. The findings in my paper suggest that the conquest of tuberculosis happened through both individual and public health efforts. Improvements in diets and public health worked simultaneously and synergistically. There was no silver bullet to defeat the great white plague, tuberculosis. Diets are no longer as inadequate as in the past. Still, Covid-19 has exposed differential susceptibility to the disease. Success in combatting Covid-19 is likely to require simultaneous and synergistic private and public efforts.

Economic History Review – An introduction to the history of infectious diseases, epidemics and the early phases of the long-run decline in mortality

by Leigh Shaw-Taylor.

Below is the abstract for the Economic History Review virtual issue, Epidemics, Diseases and Mortality in Economic History.

The full introduction and the virtual issue are both available online for free for a limited time.

 

This article, written during the COVID-19 epidemic, provides a general introduction to the long-term history of infectious diseases, epidemics and the early phases of the spectacular long-term improvements in life expectancy since 1750, primarily with reference to English history. The story is a fundamentally optimistic one. In 2019 global life expectancy was approaching 73 years. In 1800 it was probably about 30. To understand the origins of this transition, we have to look at the historical sequence by which so many causes of premature death have been vanquished over time. In England that story begins much earlier than often supposed, in the years around 1600.  The first two ‘victories’ were over famine and plague. However, economic changes with negative influences on mortality meant that, despite this, life expectancies were either falling or stable between the late sixteenth and mid eighteenth centuries. The late eighteenth and early nineteenth century saw major declines in deaths from smallpox, malaria and typhus and the beginnings of the long-run increases in life expectancy. The period also saw urban areas become capable of demographic growth without a constant stream of migrants from the countryside: a necessary precondition for the global urbanization of the last two centuries and for modern economic growth. Since 1840 the highest national life expectancy globally has increased by three years in every decade.

Are university endowments really long-term investors?

by David Chambers, Charikleia Kaffe & Elroy Dimson (Cambridge Judge Business School)

This blog is part of our EHS 2020 Annual Conference Blog Series.

 

 

Flags of the Ivy League fly at Columbia’s Wien Stadium. Available at Wikimedia Commons.

 

Endowments are investment funds aiming to meet the needs of their beneficiaries over multiple generations and adhering to the principle of intergenerational equity. University endowments such as Harvard, Yale and Princeton, in particular, have been at the forefront of developments in long-horizon investing over the last three decades.

But little is known about how these funds invested before the recent past. While scholars have previously examined the history of insurance companies and investment trusts, very little historical analysis has been undertaken of such important and innovative long-horizon investors. This is despite the tremendous influence of the so-called ‘US endowment model’ of long-horizon investing – attributed to Yale University and its chief investment officer, David Swensen – on other investors.

Our study exploits a new long-run hand-collected data set of the investments belonging to the 12 wealthiest US university endowments from the early twentieth century up to the present: Brown University, Columbia University, Cornell University, Dartmouth College, Harvard University, Princeton University, the University of Pennsylvania, Yale University, the Massachusetts Institute of Technology, the University of Chicago, Johns Hopkins University and Stanford University.

All are large private doctoral institutions that were among the wealthiest university endowments in the early decades of the twentieth century and which made sufficient disclosures about how their funds were invested. From these disclosures, we estimate the annual time series of allocations across major asset classes (stocks, bonds, real estate, alternative assets, etc.), endowment market values and investment returns.

Our study has two main findings. First, we document two major shifts in the allocation of the institutions’ portfolios from predominantly bonds to predominantly stocks beginning in the 1930s and then again from stocks to alternative assets beginning in the 1980s. Moreover, the Ivy League schools (notably, Harvard, Yale and Princeton) led the way in these asset allocation moves in both eras.

Second, we examine whether these funds invest in a manner consistent with their mission as long-term investors, namely, behaving countercyclically – selling when prices are high and buying when low. Prior studies show that pension funds and mutual funds behave procyclically during crises – buying when prices are high and selling when low.

In contrast, our analysis finds that the leading university endowments on average behave countercyclically across the six ‘worst’ financial crises during the last 120 years in the United States: 1906-1907, 1929, 1937, 1973-74, 2000 and 2008. Hence, typically, during the pre-crisis price run-up, they decrease their allocation to risky assets but increase this allocation in the post-crisis price decline.

In addition, we find that this countercyclical behaviour became more pronounced in the two most recent crises – the Dot-Com Bubble and the 2008 Global Financial Crisis.

UK investment trust portfolio strategies before the first world war

by Janette Rutterford and Dimitris P. Sotiropoulos (The Open University Business School)

The full article from this blog is forthcoming in the Economic History Review

Mary Evans Picture Library

UK investment trusts (the British name for closed-end funds) were at the forefront of financial innovation in the global era before World War I. Soon after the increase in investment choice facilitated by Companies Acts in the 1850s and 1860s – which allowed investors limited liability – investment trusts emerged to invest in a diverse range of securities across the globe, thereby offering asset management services to individual investors. They rapidly became a low-cost financial vehicle for so-called “averaging” of risk across a portfolio of marketable securities without having to sacrifice return. UK investment trusts were the first genuine historical paradigm of a sophisticated asset management industry.

Formed as trusts from the late 1860s, by the 1880s the vast majority of UK investment trusts had acquired limited liability company status and issued shares and bonds traded in London and elsewhere. They used the proceeds to construct global investment portfolios made up of a multitude of different securities whose yields were higher than could be achieved by investing solely in British securities, an approach subsequently termed the ‘geographical diversification of risk’.

A recent study of ours examines UK investment trust portfolio strategies between 1886 and 1914, for those investment trusts that disclosed their portfolios. Our dataset comprises 30 different investment trust companies, 115 firm portfolio observations, and 32,708 portfolio holdings, sampled every five years prior to WWI. Our results reveal a sophisticated approach to asset management by these investment trusts. The average trust in our sample had a portfolio with a nominal value of £1.7 million – equivalent to around £1.7bn today – invested in an average of 284 different securities. Their size and the large number of holdings are both evidence that asset management before WWI was a serious business.

Figure 1. Investment trust regional allocation (% of portfolio nominal value). Source: Sotiropoulos, D. P., Rutterford, J., and C. Keber, ‘UK investment trust portfolio strategies before the First World War’, Economic History Review, forthcoming.

Investment trusts evolved a unique asset allocation strategy: globally diversified, skewed in favour of preferred regions, sectors and security types, and with numerous holdings. Figure 1 shows the flow of investment from Europe to the emerging North and Latin American markets over the period, although the box plots reveal significant differences between individual investment trust portfolios. The preference for overseas investments is clear: on average, domestic investment never exceeded 26 percent of portfolio value. Railways were the preferred sector, averaging 40 percent of portfolio value throughout the period. Government and municipal securities fell out of favour, from a high of 40 percent of portfolio value in 1886 to a low of six percent by 1914. Investment trusts switched instead to the Utilities and the Industrial, Commercial and Agriculture sectors which, combined, made up 48 percent of portfolio value by 1914.

Figure 2. Investment trust allocation by security type (% of portfolio nominal value). Source: as Figure 1.

Figure 2 shows the types of securities held in investment trust portfolios. Fixed-interest securities dominated before WWI, though there was a growing interest in ordinary and preferred shares over time. Perhaps surprisingly, an increasing number of investment trusts were willing to embrace the ‘cult of equity’, far earlier than, say, insurance companies.

We find that investment trust directors adopted a mixture of buy-and-hold and active portfolio management strategies. The scale of holdings of a wide variety of different types of securities required efficient administration. The average portfolio holding represented only 0.35 percent of portfolio value, while 75 percent of holdings had individual weights of less than 0.43 percent of portfolio value. Although not concentrated, these portfolios were skewed. The top 10 percent of holdings per portfolio represented on average 35.7 percent of total portfolio value, and the top 25 percent of holdings represented 60.0 percent.

Investment trust directors did not radically reorganize their portfolios on an annual basis; neither did they stick rigidly to the same securities over time. They were not passive investors. Annual turnover was in excess of 10 percent (measured as the lower of sales and purchases to nominal portfolio value). Nor were they sheep. There was a wide variety of focus between different UK investment trusts; each tended to have its own specific investment areas of interest, and there was considerable cross-sectional variation with respect to diversification strategies, even though joint directorships were common.
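For readers who want the turnover measure spelled out, the short sketch below shows the arithmetic with purely hypothetical figures; it illustrates the definition just given, not a calculation from the study’s data.

```python
# Minimal sketch of the turnover measure described above; the figures are
# hypothetical and chosen only to illustrate the arithmetic.

def annual_turnover(sales: float, purchases: float, nominal_value: float) -> float:
    """Turnover = the lower of sales and purchases, relative to nominal portfolio value."""
    return min(sales, purchases) / nominal_value

# e.g. a trust with a £1.7m portfolio that purchased £250,000 and sold
# £200,000 of securities in a year shows turnover of roughly 11.8 per cent.
print(f"{annual_turnover(sales=200_000, purchases=250_000, nominal_value=1_700_000):.1%}")
```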

Was this approach good for the investor? We compared the returns and risk-adjusted returns of three unweighted samples of companies: investment trusts, banks and ‘other’ financial firms. Investing in investment trust shares surpassed the other alternatives, whether risk-adjusted or not. Our results offer evidence that the specific goal of investment trusts – the global distribution of risk – was certainly beneficial to their investors in the period up to WWI.

This early foray into fund management by UK investment trusts was deemed a success, but UK investment trusts still represented only around one percent of total London Stock Exchange capitalization by 1914. It is an interesting open question why it took decades for the asset management industry to take off. A focus on different episodes in the history of investment trusts can help shed more light on the under-researched evolution of the asset management industry. This will allow economic historians, fund managers, and policy-makers to draw lessons from how history affects the evolutionary path of modern financial practices.

 

To contact the authors:

Janette Rutterford, j.rutterford@open.ac.uk
@jrutterford

Dimitris P. Sotiropoulos, dimitris.sotiropoulos@open.ac.uk
@dpsotiropoulos

Fascistville: Mussolini’s new towns and the persistence of neo-fascism

by Mario F. Carillo (CSEF and University of Naples Federico II)

This blog is part of our EHS 2020 Annual Conference Blog Series.


 

March on Rome, 1922. Available at Wikimedia Commons.

Differences in political attitudes are prevalent in our society. People with the same occupation, age, gender, marital status, city of residence and similar background may have very different, and sometimes even opposite, political views. In a time in which the electorate is called to make important decisions with long-term consequences, understanding the origins of political attitudes, and hence of voting choices, is key.

My research documents that current differences in political attitudes have historical roots. Public expenditure allocations made almost a century ago help to explain differences in political attitudes today.

During the Italian fascist regime (1922-43), Mussolini undertook enormous investments in infrastructure by building cities from scratch. Fascistville (Littoria) and Mussolinia are two of the 147 new towns (Città di Fondazione) built by the regime on the Italian peninsula.


Towers shaped like the emblem of fascism (Torri Littorie) and majestic buildings as headquarters of the fascist party (Case del Fascio) dominated the centres of the new towns. While they were modern centres, their layout was inspired by the cities of the Roman Empire.

Intended to stimulate a process of identification of the masses based on the collective historical memory of the Roman Empire, the new towns were designed to instil the idea that fascism was building on, and improving, the imperial Roman past.

My study presents three main findings. First, the foundation of the new towns enhanced local electoral support for the fascist party, facilitating the emergence of the fascist regime.

Second, such an effect persisted through democratisation, favouring the emergence and persistence of the strongest neo-fascist party in the advanced industrial countries — the Movimento Sociale Italiano (MSI).

Finally, survey respondents near the fascist new towns are more likely today to have nationalistic views, prefer a stronger leader in politics and exhibit sympathy for the fascists. Direct experience of life under the regime strengthens this link, which appears to be transmitted across generations inside the family.


Thus, the fascist new towns explain differences in current political and cultural attitudes that can be traced back to the fascist ideology.

These findings suggest that public spending may have long-lasting effects on political and cultural attitudes, which persist across major institutional changes and affect the functioning of future institutions. This is a result that may inspire future research to study whether policy interventions may be effective in promoting the adoption of growth-enhancing cultural traits.

After the Black Death: labour legislation and attitudes towards labour in late-medieval western Europe

by Samuel Cohn Jr. (University of Glasgow)

This blog forms part F in the EHS series: The Long View on Epidemics, Disease and Public Health: Research from Economic History

The full article from this blog post was published in the Economic History Review, and it is available for members at this link

Poussin, The Plague of Ashdod, 1630. Available at <https://en.wikipedia.org/wiki/Plague_of_Ashdod_(Poussin)#/media/File:The_plague_of_ashdod_1630.jpg>

In the summer of 1999, I presented the kernel of this article at a conference in San Miniato al Tedesco in memory of David Herlihy. It was then limited to labour decrees I had found in the Florentine archives from the Black Death to the end of the fourteenth century. A few years later I thought of expanding the paper into a comparative essay. The prime impetus came from teaching the Black Death at the University of Glasgow. Students (and, I would say, many historians) think that England was unique in promulgating price and wage legislation after the Black Death, the famous Ordinance and Statute of Labourers of 1349 and 1351. In fact, I did not then know how extensive this legislation was, and details of its geographical distribution remain unknown today.

A second impetus for writing the essay concerned a consensus in the literature on this wage legislation, principally in England: that these decrees followed the logic of the laws of supply and demand. In short, with the colossal mortalities of the Black Death, 1347-51, the greatly diminished supply of labour meant that wage earners in cities and the countryside could demand excessive increases that threatened the livelihoods of elite rentiers: the church, nobility, and merchants. After all, this is what chroniclers and other contemporary commentators across Europe – Henry Knighton, Matteo Villani, William Langland, Giovanni Boccaccio, and many others – tell us in their scornful reproaches to greedy labourers. As ‘Hunger’ in book IV of Piers the Ploughman sneered:

And draught-ale was not good enough for them, nor a hunk of bacon, but they must have fresh meat or fish, fried or baked and chaud or plus chaud at that.

In addition, across the political spectrum from Barry Dobson to Rodney Hilton, Bertha Putnam’s study of the English laws (published in 1908) continued to be proclaimed as the definitive authority on these laws, despite her lack of quantitative analysis and despite her central conclusion: that the peasants were guilty of ‘extortionate greed’ and that for this reason ‘these laws were necessary and just’ (Enforcement of the Statutes of Labourers, pp. 219–20). Yet, across Europe, while nominal wages may have trebled through the 1370s, prices for basic commodities rose faster, leaving the supposedly heartless labourers worse off than they had been before the Black Death. As George Holmes discovered in 1957, the class to profit most in England from the post-plague demographics was the supposedly victimized nobility.

Through primary and secondary sources, my article then researched wage and price legislation across a wide ambit of Europe — England, the Ile de France, the Low Countries, Provence, Aragon, Castile, Catalonia, Florence, Bologna, Siena, Orvieto, Milan, and Venice. Certainly, research needs to be extended further, to places where these laws were enacted and to where they appear not to have been, as in Scotland and the Low Countries, and to ask what difference the laws may have meant for economic development. However, from the places I examined, no simple logic arose, whether of supply and demand or the aims that might have been expected given differences in political regimes. Instead, municipal and royal efforts to control labour and artisans’ prices splintered in numerous and often contradictory directions, paralleling anxieties and needs to attribute blame as seen in other Black Death horrors: the burning of Jews and the murder of Catalans, beggars, and priests.

In conclusion, a history of the Black Death and its long-term consequences for labour can provide insights for perceiving our present predicament with Covid-19. We can anticipate that the outcomes of the present pandemic will not be the same across countries or continents. Similarly, for the Black Death and successive waves of plague through the fourteenth century, there were winners and losers. Yet, surprisingly, few historians have attempted to chart these differences, and fewer still to explain them. Instead, medievalists have concentrated more on the Black Death’s grand transformations, and these may serve as a welcome tonic for our present pandemic despair, especially as concerns labour. Eventually, post-Black-Death populations experienced increases in caloric intake, greater variety in diet, better housing, consumption of more luxury goods, increased leisure time, and leaps in economic equality. Moreover, governments such as that of Florence, even before revolts or regime change, could learn from their initial economic missteps. With the second plague in 1363, they abandoned their laws entitled ‘contra laboratores’, which had endangered the supplies of their thin resources of labour, and passed new decrees granting tax exemptions to attract labourers into their territories. Moreover, other regions – the Ile de France, Aragon, Provence, and Siena – abandoned these initial prejudicial laws almost immediately. Even in England, despite legislating ever more stringent laws against labourers that lasted well into the fifteenth century, law enforcers learnt to turn a blind eye, allowing landlords and peasants to cut mutually beneficial deals that enabled steady work, rising wages, improved working conditions, and greater freedom of movement.

Let us remember that these grand transformations did not occur overnight. The switch in economic policies to benefit labourers and the narrowing of the gap between rich and poor did not begin to show effects until a generation after the Black Death; in some regions not until the early fifteenth century.

from VoxEU.org — Coronavirus from the perspective of 17th century plague

by Neil Cummins (LSE), Morgan Kelly (University College Dublin), Cormac Ó Gráda (University College Dublin)

A repost from VoxEU.org

Between 1563 and 1665, London experienced four plagues that each killed one fifth of the city’s inhabitants. This column uses 790,000 burial records to track the plagues that recurred across London (epidemics typically endured for six months). Possibly carried and spread by body lice, plague always originated in the poorest parishes; self-segregation by the affluent gradually halved their death rate compared with poorer Londoners. The population rebounded within two years, as new migrants arrived in the city “to fill dead men’s shoes”.

Full article available here: Coronavirus from the perspective of 17th century plague — VoxEU.org: Recent Articles
