Inspired by the curated series available here, we are creating a collective bibliography on Pandemics in History to help us navigate the “uncharted territory” we are currently living through. It includes references from academic papers, books, and journals, as well as a section on primary resources in collaboration with Archives Portal Europe. Everyone is welcome to join and contribute: just add your references to the shared file under the existing categories, or create new categories if needed.
by Mary Fraser (Associate, The Scottish Centre for Crime & Justice Research, University of Glasgow)
This blog is part of our EHS 2020 Annual Conference Blog Series.
That policemen across Britain were released to plough the fields during the food shortages of 1917 is currently unrecognised, even though soldiers, prisoners of war, women and school children have been widely acknowledged for helping agriculture. A national project is seeking to redress this imbalance in our understanding.
In March 1917, Britain faced starvation. Enemy U-boats were sinking massive numbers of the ships that brought in around 80% of the population’s requirements of grain, mainly from America and Canada. Added to this, the harsh and lengthy winter rotted the potato crop in the ground. These factors largely removed two staple items from the diet: bread and potatoes. Together with soaring food prices, this meant the poor faced starvation.
To overcome this threat, the campaign to change the balance from pasture to arable began in December 1916 (Ernle, 1961). Government took control of farming and demanded a huge increase in home-grown grain and potatoes, so that Britain could become self-sufficient in food.
But the land had been stripped of much of its skilled labour by the attraction of joining the army or navy, so that farmers felt helpless to respond. Equipment, too, lay idle for lack of maintenance, as mechanics had similarly enlisted or had left for better-paid work in the munitions factories. The need to help farmers produce home-grown food was so great that every avenue was explored.
When the severe winter broke around mid-March, not only were many hundreds of soldiers deployed to farms, but also local authorities were asked to help. One of the first groups to come forward was the police. Many had been skilled farm workers in their previous employment and so were ideal to operate the manual ploughs, which needed skill and strength to turn over heavy soil, some of which had not been ploughed for many years.
A popular police journal of the time reported on ‘Police as Ploughmen’ and named many of the 18 locations across Britain (Fraser, 2019). Estimates are that between 500 and 600 policemen were released, some for around two months.
For example, Glasgow agreed to the release of 90 policemen while Berwick, Roxburgh and Selkirk agreed to release 40. These two areas were often held up as examples of how other police forces across Britain could help farmers: Glasgow being an urban police force while Berwick, Roxburgh and Selkirk was rural.
To release this number was a considerable contribution by police forces, as many of their young fit policemen had also been recruited into the army, to be partially replaced by part-time older Special Constables.
This help to farmers paid huge dividends. It prevented the food riots seen in other combatant nations, such as Austria-Hungary, Germany, Russia and France (Ziemann, 2014). By the harvest of 1917, the substitution of ploughmen allowed Britain to claim an increase of 1,000,000 acres of arable land, producing over 4,000,000 more tons of wheat, barley, oats and potatoes (Ernle, 1961). Britain was also able to send food to troops in France and Italy, compensating for failed local harvests there.
It is now time that policemen were recognised for their social conscience by helping their local populations. This example of ‘Police as Ploughmen’ shows that as well as investigations, cautions and arrests, the police in Britain also have a remit to help local people, particularly in times of dire need, such as in the food crisis of the First World War.
Ernle, Lord (RE Prothero) (1961) English Farming, Past and Present, 6th edition, Heinemann Educational Book Ltd.
Fraser, M (2019) Policing the Home Front, 1914-1918: The control of the British population at war, Routledge.
Ziemann, B (2014) The Cambridge History of the First World War. Volume 2: The State, Cambridge University Press.
Over the last century, European historiography has debated whether industrialisation brought about an improvement in working class living standards. Multiple demographic, economic, anthropometric and wellbeing indicators have been examined in this regard, but it was Eric Hobsbawm (1957) who, in the late 1950s, incorporated food consumption patterns into the analysis.
Between the mid-19th century and the first half of the 20th, the diet of European populations underwent radical changes. Caloric intake increased significantly, and cereals were to a large extent replaced by animal proteins and fats, resulting from a substantial increase in meat, milk, egg and fish consumption. This transformation was referred to by Popkin (1993) as the ‘nutritional transition’.
These dietary changes were driven, inter alia, by the evolution of income levels which raises the possibility that significant inequalities between different social groups ensued. Dietary inequalities between different social groups are a key component in the analysis of inequality and living standards; they directly affect mortality, life expectancy, and morbidity. However, this hypothesis remains unproven, as historians are still searching for adequate sources and methods with which to measure the effects of dietary changes on living standards.
This study contributes to the debate by analysing a relatively untapped source: hospital diets. We have analysed the diet of psychiatric patients and members of staff in the main hospital of the city of Valencia (Spain) between 1852 and 1923. The diet of patients depended on their social status and the amounts they paid for their upkeep. ‘Poor psychiatric patients’ and abandoned children, who paid no fee, were fed according to hospital regulations, whereas ‘well-off psychiatric patients’ paid a daily fee in exchange for a richer and more varied diet. There were also differences among members of staff, with nuns receiving a richer diet than other personnel (launderers, nurses and wet-nurses). We think that our source broadly reflects dietary patterns of the Spanish population and the effect of income levels thereon.
Figure 2 illustrates some of these differences in terms of animal-based caloric intake in each of the groups under study. Three population groups can be clearly distinguished: ‘well-off psychiatric patients’ and nuns, whose diet already presented some of the features of the nutritional transition by the mid-19th century, including fewer cereals and more meat, as well as the inclusion of new products such as olive oil, milk, eggs and fish; hospital staff, whose diet was rich in calories, to compensate for their demanding jobs, but still traditional in structure, being largely based on cereals, legumes, meat and wine; and, finally, ‘poor psychiatric patients’ and abandoned children, whose diet was poorer and which, by the 1920s, had barely joined the trends that characterised the nutritional transition.
In conclusion, the nutritional transition was not a homogenous process, affecting all diets at the time or at the same pace. On the contrary, it was a process marked by social difference, and the progress of dietary changes was largely determined by social factors. By the mid-19th century, the diet structure of well-to-do social groups resembled diets that were more characteristic of the 1930s, while less favoured and intermediate social groups had to wait until the early 20th century before they could incorporate new foodstuffs into their diet. As this sequence clearly indicates, less favoured social groups always lagged behind.
Medina-Albaladejo, F. J. and Calatayud, S., “Unequal access to food during the nutritional transition: evidence from Mediterranean Spain”, Economic History Review, (forthcoming).
Hobsbawm, E. J., “The British Standard of Living, 1790-1850”, Economic History Review, 2nd ser., X (1957), pp. 46-68.
Popkin B. M., “Nutritional Patterns and Transitions”, Population and Development Review, 19, 1 (1993), pp. 138-157.
Tuberculosis (TB) is one of the oldest and deadliest diseases. Traces of TB in humans can be found as early as 9,000 years ago, and written accounts date back 3,300 years in India. Untreated, TB’s case-fatality rate is as high as 50 percent. It was a dreaded disease. TB is an airborne disease caused by the bacterium Mycobacterium tuberculosis. Tuberculosis spreads through the air when a person who has an active infection coughs, sneezes, speaks, or sings. Most cases remain latent and never develop symptoms. Activation of tuberculosis is particularly influenced by undernutrition.
Tuberculosis played a prominent role in the secular mortality decline. Of the 27 years of life expectancy gained in England and Wales between 1871 and 1951, TB accounts for about 40 percent of the improvement, a 12-year gain. Modern medicine, the usual suspect invoked to explain this mortality decline, could not have been responsible. As Thomas McKeown famously pointed out, TB mortality started its decline long before the tubercle bacillus was identified and long before an effective treatment was available (Figure 1). McKeown viewed improvements in economic and social conditions, especially improved diets, as the principal factor in combatting tuberculosis. A healthy diet, however, is not the only factor behind nutritional status. Infections, no matter how mild, reduce nutritional status and increase susceptibility to further infection.
Figure 1. Mortality rate from TB.
Source: as per original article
In “Airborne Diseases: Tuberculosis in the Union Army” I studied the determinants of diagnosis, discharge, and mortality from tuberculosis in the past. I examined the medical histories of 25,000 soldiers and veterans in the Union Army using data collected under the direction of Robert Fogel. The Civil War brought together soldiers from many socioeconomic conditions and ecological backgrounds into an environment which was ideal for the spread of this disease. The war also provided a unique setting to examine many of the factors which were likely responsible for the decline in TB mortality. Before enlistment, individuals had differential exposure to harmful dust and fumes. They also faced different disease environments and living conditions. By housing recruits in confined spaces, the war exposed soldiers to a host of waterborne and airborne infections. In the Civil War, disease was far more deadly than battle.
The Union Army data contain detailed medical records and measures of nutritional status. Height at enlistment measures net nutritional experiences at early ages. Weight, needed to measure current nutritional status using the Body Mass Index (BMI), is available for war veterans. My estimates use a hazard model and a variety of controls aligned with existing explanations for the decline in TB prevalence and fatality rates. By how much would the diagnosis of TB have declined if the average Union Army soldier had the height of the current U.S. male population, and if all his relevant infections diagnosed prior to TB were eradicated? Figure 2 presents the contribution of the predictors of TB diagnosis in soldiers who did not engage in battle, and Figure 3 reports the corresponding contributions for soldiers discharged because of TB. Nutritional experiences in early life provided a protective effect against TB. Between 25 and 50 per cent of the predictable decline in tuberculosis could be associated with the modern increase in height. Declines in the risk of waterborne and airborne diseases are as important as the predicted changes in height.
Figure 2. Contribution of various factors to the decline in TB diagnosis
Figure 3. Contribution of various factors to the decline in discharges because of TB.
My analysis showed that a wartime diagnosis of TB increased the risk of tuberculosis mortality. Because of the chronic nature of the disease, infected soldiers likely developed a latent or persistent infection that remained active until resistance failed at old age. Nutritional status provided some protection against mortality. For veterans, height was not as robust a predictor as BMI. If a veteran’s BMI increased from its historical value of 23 to current levels of 27, his mortality risk from tuberculosis would have been reduced by 50 per cent. Overall, the contributions of changes in ‘pure’ diets and of changes in infectious disease exposure were probably equal.
What lessons can be drawn for the current Covid-19 pandemic? Covid-19 is also an airborne disease. Airborne diseases (e.g., influenza, measles, smallpox, and tuberculosis) are difficult to control. In unfamiliar populations, they often wreak havoc. But influenza, measles, smallpox, and tuberculosis are mostly killers of the past. The findings in my paper suggest that the conquest of tuberculosis happened through both individual and public health efforts. Improvements in diets and public health worked simultaneously and synergistically. There was no silver bullet to defeat the great white plague, tuberculosis. Diets are no longer as inadequate as in the past. Still, Covid-19 has exposed differential susceptibility to the disease. Success in combatting Covid-19 is likely to require simultaneous and synergistic private and public efforts.
This article, written during the COVID-19 epidemic, provides a general introduction to the long-term history of infectious diseases, epidemics and the early phases of the spectacular long-term improvements in life expectancy since 1750, primarily with reference to English history. The story is a fundamentally optimistic one. In 2019 global life expectancy was approaching 73 years. In 1800 it was probably about 30. To understand the origins of this transition, we have to look at the historical sequence by which so many causes of premature death have been vanquished over time. In England that story begins much earlier than often supposed, in the years around 1600. The first two ‘victories’ were over famine and plague. However, economic changes with negative influences on mortality meant that, despite this, life expectancies were either falling or stable between the late sixteenth and mid eighteenth centuries. The late eighteenth and early nineteenth century saw major declines in deaths from smallpox, malaria and typhus and the beginnings of the long-run increases in life expectancy. The period also saw urban areas become capable of demographic growth without a constant stream of migrants from the countryside: a necessary precondition for the global urbanization of the last two centuries and for modern economic growth. Since 1840 the highest national life expectancy globally has increased by three years in every decade.
by David Chambers, Charikleia Kaffe & Elroy Dimson (Cambridge Judge Business School)
This blog is part of our EHS 2020 Annual Conference Blog Series.
Endowments are investment funds aiming to meet the needs of their beneficiaries over multiple generations and adhering to the principle of intergenerational equity. University endowments such as Harvard, Yale and Princeton, in particular, have been at the forefront of developments in long-horizon investing over the last three decades.
But little is known about how these funds invested before the recent past. While scholars have previously examined the history of insurance companies and investment trusts, very little historical analysis has been undertaken of such important and innovative long-horizon investors. This is despite the tremendous influence of the so-called ‘US endowment model’ of long-horizon investing – attributed to Yale University and its chief investment officer, David Swensen – on other investors.
Our study exploits a new long-run hand-collected data set of the investments belonging to the 12 wealthiest US university endowments from the early twentieth century up to the present: Brown University, Columbia University, Cornell University, Dartmouth College, Harvard University, Princeton University, the University of Pennsylvania, Yale University, the Massachusetts Institute of Technology, the University of Chicago, Johns Hopkins University and Stanford University.
All are large private doctoral institutions that were among the wealthiest university endowments in the early decades of the twentieth century and which made sufficient disclosures about how their funds were invested. From the latter, we estimate the annual time series of allocations across major asset classes (stocks, bonds, real estate, alternative assets, etc.), endowment market values and investment returns.
Our study has two main findings. First, we document two major shifts in the allocation of the institutions’ portfolios from predominantly bonds to predominantly stocks beginning in the 1930s and then again from stocks to alternative assets beginning in the 1980s. Moreover, the Ivy League schools (notably, Harvard, Yale and Princeton) led the way in these asset allocation moves in both eras.
Second, we examine whether these funds invest in a manner consistent with their mission as long-term investors, namely, behaving countercyclically – selling when prices are high and buying when low. Prior studies show that pension funds and mutual funds behave procyclically during crises – buying when prices are high and selling when low.
In contrast, our analysis finds that the leading university endowments on average behave countercyclically across the six ‘worst’ financial crises during the last 120 years in the United States: 1906-1907, 1929, 1937, 1973-74, 2000 and 2008. Hence, typically, during the pre-crisis price run-up, they decrease their allocation to risky assets but increase this allocation in the post-crisis price decline.
In addition, we find that this countercyclical behaviour became more pronounced in the two most recent crises – the Dot-Com Bubble and the 2008 Global Financial Crisis.
by Janette Rutterford and Dimitris P. Sotiropoulos (The Open University Business School)
The full article from this blog is forthcoming in the Economic History Review
UK investment trusts (the British name for closed-end funds) were at the forefront of financial innovation in the global era before World War I. Soon after the increase in investment choice facilitated by Companies Acts in the 1850s and 1860s – which allowed investors limited liability – investment trusts emerged to invest in a diverse range of securities across the globe, thereby offering asset management services to individual investors. They rapidly became a low-cost financial vehicle for so-called “averaging” of risk across a portfolio of marketable securities without having to sacrifice return. UK investment trusts were the first genuine historical paradigm of a sophisticated asset management industry.
Formed as trusts from the late 1860s, by the 1880s the vast majority of UK investment trusts had acquired limited liability company status and issued shares and bonds traded in London and elsewhere. They used the proceeds to construct global investment portfolios made up of a multitude of different securities whose yields were higher than could be achieved by investing solely in British securities, an approach subsequently termed the ‘geographical diversification of risk’.
A recent study of ours examines UK investment trust portfolio strategies between 1886 and 1914, for those investment trusts that disclosed their portfolios. Our dataset comprises 30 different investment trust companies, 115 firm portfolio observations, and 32,708 portfolio holdings, sampled every five years prior to WWI. Our results reveal a sophisticated approach to asset management by these investment trusts. The average trust in our sample had a portfolio with a nominal value of £1.7 million – equivalent to around £1.7bn today – invested in an average of 284 different securities. Their size and the large number of holdings are both evidence that asset management before WWI was a serious business.
Investment trusts evolved a unique asset allocation strategy: globally diversified, skewed in favour of preferred regions, sectors and security types, and with numerous holdings. Figure 1 shows the flow of investment from Europe to the emerging North and Latin American markets over the period, although the box plots reveal significant differences between individual investment trust portfolios. The preference for overseas investments is clear: on average, domestic investment never exceeded 26 percent of portfolio value. Railways was the preferred sector, averaging 40 percent of portfolio value throughout the period. Government and municipal securities fell out of favour, from a high of 40 percent of portfolio value in 1886 to a low of six percent by 1914. Investment trusts switched instead to the Utilities and the Industrial, Commercial and Agriculture sectors which, combined, made up 48 percent of portfolio value by 1914.
Figure 2 shows the types of securities held in investment trust portfolios. Fixed-interest securities dominated before WWI, though there was a growing interest in ordinary and preferred shares over time. Perhaps surprisingly, an increasing number of investment trusts were willing to embrace the ‘cult of equity’, far earlier than, say, insurance companies.
We find that investment trust directors adopted a mixture of buy-and-hold and active portfolio management strategies. The scale of holdings of a wide variety of different types of securities required efficient administration. The average portfolio holding represented only 0.35 percent of portfolio value, while 75 percent of holdings had individual weights of less than 0.43 percent of portfolio value. Although not concentrated, these portfolios were skewed: the top 10 percent of holdings per portfolio represented on average 35.7 percent of total portfolio value, and the top 25 percent of holdings represented 60.0 percent.
Investment trust directors did not radically reorganize their portfolios on an annual basis; neither did they stick rigidly to the same securities over time. They were not passive investors. Annual turnover was in excess of 10 percent (measured as the lower of sales and purchases to nominal portfolio value). Nor were they sheep. There was a wide variety of focus between different UK investment trusts; each tended to have its own specific investment areas of interest, and there was considerable cross-sectional variation with respect to diversification strategies, even though joint directorships were common.
Was this approach good for the investor? We compared the returns and risk-adjusted returns of three unweighted samples of companies: investment trusts, banks and ‘other’ financial firms and found that investing in investment trust shares surpassed the other alternatives, whether risk-adjusted or not. Our results offer evidence that the specific goal of investment trusts – the global distribution of risk – was certainly beneficial to their investors in the period up to WWI.
This early foray into fund management by UK investment trusts was deemed a success, but UK investment trusts still represented only around one percent of total London Stock Exchange capitalization by 1914. It is an interesting open question why it took decades for the asset management industry to take off. A focus on different episodes in the history of investment trusts can help shed more light on the – under-researched – evolution of the asset management industry. This will allow economic historians, fund managers, and policy-makers to draw lessons from how history affects the evolutionary path of modern financial practices.
by Mario F. Carillo (CSEF and University of Naples Federico II)
This blog is part of our EHS 2020 Annual Conference Blog Series.
Differences in political attitudes are prevalent in our society. People with the same occupation, age, gender, marital status, city of residence and similar background may have very different, and sometimes even opposite, political views. In a time in which the electorate is called to make important decisions with long-term consequences, understanding the origins of political attitudes, and then voting choices, is key.
My research documents that current differences in political attitudes have historical roots. Public expenditure allocations made almost a century ago help to explain differences in political attitudes today.
During the Italian fascist regime (1922-43), Mussolini undertook enormous investments in infrastructure by building cities from scratch. Fascistville (Littoria) and Mussolinia are two of the 147 new towns (Città di Fondazione) built by the regime on the Italian peninsula.
Towers shaped like the emblem of fascism (Torri Littorie) and majestic buildings as headquarters of the fascist party (Case del Fascio) dominated the centres of the new towns. While they were modern centres, their layout was inspired by the cities of the Roman Empire.
Intended to stimulate a process of identification of the masses based on the collective historical memory of the Roman Empire, the new towns were designed to instil the idea that fascism was building on, and improving, the imperial Roman past.
My study presents three main findings. First, the foundation of the new towns enhanced local electoral support for the fascist party, facilitating the emergence of the fascist regime.
Second, such an effect persisted through democratisation, favouring the emergence and persistence of the strongest neo-fascist party in the advanced industrial countries — the Movimento Sociale Italiano (MSI).
Finally, survey respondents near the fascist new towns are more likely today to have nationalistic views, prefer a stronger leader in politics and exhibit sympathy for the fascists. Direct experience of life under the regime strengthens this link, which appears to be transmitted across generations inside the family.
Thus, the fascist new towns explain differences in current political and cultural attitudes that can be traced back to the fascist ideology.
These findings suggest that public spending may have long-lasting effects on political and cultural attitudes, which persist across major institutional changes and affect the functioning of future institutions. This is a result that may inspire future research to study whether policy interventions may be effective in promoting the adoption of growth-enhancing cultural traits.
The full article from this blog post was published in the Economic History Review and is available for members at this link.
In the summer of 1999, I presented the kernel of this article at a conference in San Miniato al Tedesco in memory of David Herlihy. It was then limited to labour decrees I had found in the Florentine archives from the Black Death to the end of the fourteenth century. A few years later I thought of expanding the paper into a comparative essay. The prime impetus came from teaching the Black Death at the University of Glasgow. Students (and, I would say, many historians) think that England was unique in promulgating price and wage legislation after the Black Death, with the famous Ordinance and Statute of Labourers of 1349 and 1351. In fact, I did not then know how extensive this legislation was, and details of its geographical distribution remain unknown today.
A second impetus for writing the essay concerned a consensus in the literature on this wage legislation, principally in England: that these decrees followed the logic of the laws of supply and demand. In short, with the colossal mortalities of the Black Death, 1347-51, the greatly diminished supply of labour meant that wage earners in cities and the countryside could demand excessive increases that threatened the livelihoods of elite rentiers — the church, nobility, and merchants. After all, this is what chroniclers and other contemporary commentators across Europe — Henry Knighton, Matteo Villani, William Langland, Giovanni Boccaccio, and many others — tell us in their scornful reproaches of greedy labourers. As ‘Hunger’ in book IV of Piers the Ploughman sneered:
And draught-ale was not good enough for them, nor a hunk of bacon, but they must have fresh meat or fish, fried or baked and chaud or plus chaud at that.
In addition, across the political spectrum from Barry Dobson to Rodney Hilton, Bertha Putnam’s study of the English laws (published in 1908) continued to be proclaimed the definitive authority on these laws, despite her lack of quantitative analysis and her central conclusion that the peasants were guilty of ‘extortionate greed’ and that for this reason ‘these laws were necessary and just’ (Enforcement of the Statutes of Labourers, pp. 219–20). Yet, across Europe, while nominal wages may have trebled through the 1370s, prices for basic commodities rose faster, leaving the supposedly heartless labourers worse off than they had been before the Black Death. As George Holmes discovered in 1957, the class to profit most in England from the post-plague demographics was the supposedly victimized nobility.
Through primary and secondary sources, my article then traced wage and price legislation across a wide ambit of Europe — England, the Ile de France, the Low Countries, Provence, Aragon, Castile, Catalonia, Florence, Bologna, Siena, Orvieto, Milan, and Venice. Certainly, research needs to be extended further, to places where these laws were enacted and to where they appear not to have been, as in Scotland and the Low Countries, and to ask what difference the laws may have made for economic development. However, from the places I examined, no simple logic arose, whether of supply and demand or of the aims that might have been expected given differences in political regimes. Instead, municipal and royal efforts to control labour and artisans’ prices splintered into numerous, often contradictory directions, paralleling the anxieties and the need to attribute blame seen in other Black Death horrors: the burning of Jews and the murder of Catalans, beggars, and priests.
In conclusion, a history of the Black Death and its long-term consequences for labour can provide insights for perceiving our present predicament with Covid-19. We can anticipate that the outcomes of the present pandemic will not be the same across countries or continents. Similarly, for the Black Death and successive waves of plague through the fourteenth century, there were winners and losers. Yet, surprisingly, few historians have attempted to chart these differences, and fewer still to explain them. Instead, medievalists have concentrated more on the Black Death’s grand transformations, and these may serve as a welcome tonic for our present pandemic despair, especially as concerns labour. Eventually, post-Black-Death populations experienced increases in caloric intake, greater variety in diet, better housing, consumption of more luxury goods, increased leisure time, and leaps in economic equality. Moreover, governments such as Florence, even before revolts or regime change, could learn from their initial economic missteps. With the second plague in 1363, they abandoned their laws entitled ‘contra laboratores’, which had endangered their thin supplies of labour, and passed new decrees granting tax exemptions to attract labourers into their territories. Other regions – the Ile de France, Aragon, Provence, and Siena – abandoned these initial prejudicial laws almost immediately. Even in England, despite ever more stringent legislation against labourers that lasted well into the fifteenth century, law enforcers learnt to turn a blind eye, allowing landlords and peasants to cut mutually beneficial deals that enabled steady work, wages to rise, working conditions to improve, and greater freedom of movement.
Let us remember that these grand transformations did not occur overnight. The switch in economic policies to benefit labourers and the narrowing of the gap between rich and poor did not begin to show effects until a generation after the Black Death; in some regions not until the early fifteenth century.
by Neil Cummins (LSE), Morgan Kelly (University College Dublin), Cormac Ó Gráda (University College Dublin)
A repost from VoxEU.org
Between 1563 and 1665, London experienced four plagues that each killed one fifth of the city’s inhabitants. This column uses 790,000 burial records to track the plagues that recurred across London (epidemics typically endured for six months). Possibly carried and spread by body lice, plague always originated in the poorest parishes; self-segregation by the affluent gradually halved their death rate compared with poorer Londoners. The population rebounded within two years, as new migrants arrived in the city “to fill dead men’s shoes”.
In a recent article[i] we reviewed research on preindustrial epidemics. We focused on large-scale, lethal events: those that have a deeper and more long-lasting impact on economy and society, thereby producing the historical documentation that allows for systematic study. Almost all of these lethal pandemics were caused by plague: from the “Justinian’s plague” (540-41) and the Black Death (1347-52) to the last great European plagues of the seventeenth century (1623-32 and 1647-57). These epidemics were devastating. The Black Death killed between 35 and 60 per cent of the population of Europe and the Mediterranean (approximately 50 million victims).
These epidemics also had large-scale and persistent consequences. The Black Death might have positively influenced the development of Europe, even playing a role in the Great Divergence.[ii] Conversely, it is arguable that seventeenth-century plagues in Southern Europe (especially Italy) precipitated the Little Divergence.[iii] Clearly, epidemics can have asymmetric economic effects. The Black Death, for example, had negative long-term consequences for relatively under-populated areas of Europe, such as Spain or Ireland.[iv] More generally, the effects of an epidemic depend upon the context in which it happens. Below we focus on how institutions shaped the spread and the consequences of plagues.
Preindustrial epidemics and institutions
In preindustrial times, as today, institutions played a crucial role in determining the final intensity of epidemics. When the Black Death appeared, European societies were unprepared for the threat. But when it became apparent that plague was a recurrent scourge, institutional adaptation commenced — typical of human reaction to a changing biological environment. From the late fourteenth century, permanent health boards were established, able to take quicker action than the ad-hoc commissions created during the emergency of 1348. These boards constantly monitored the international situation and provided the early warning necessary for implementing measures to contain epidemics.[v] From the late fourteenth century, quarantine procedures for suspected cases were developed, and in 1423 Venice built the first permanent lazzaretto (isolation hospital) on a lagoon island. By the early sixteenth century, at least in Italy, central and local government had implemented a broad range of anti-plague policies, including health controls at river and sea harbours, mountain passes, and political boundaries. Within each Italian state, infected communities or territories were isolated, and human contact was limited by quarantines.[vi] These, and other instruments developed against the plague, are the direct ancestors of those currently employed to contain Covid-19. However, such policies were not always successful: in 1629, for example, plague entered Northern Italy as infected armies from France and Germany arrived to fight in the War of the Mantuan Succession. Nobody has ever been able to quarantine an enemy army.
It is no accident that these policies were first developed in Italian trading cities which, because of their commercial networks, had good reason to fear infection. Such policies were quickly imitated in Spain and France.[vii] England, in particular, “was unlike many other European countries in having no public precautions against plague at all before 1518”.[viii] Even in the seventeenth century, England was still trying to introduce institutions that had long since been consolidated in Mediterranean Europe.
The development of institutions and procedures to fight plague has been extensively researched. Nonetheless, other aspects of preindustrial epidemics are less well known: for example, how institutions tended to shift mortality towards specific socio-economic groups, especially the poor. Once doctors and health officials noticed that plague mortality was higher in the poorest parts of the city, they began to see the poor themselves as being responsible for the spread of the infection. As a result, during the early modern period the presence of the poor in cities was increasingly resented,[ix] and as a precautionary measure, vagrants and beggars were expelled. The death of many poor people was even regarded by some as one of the few positive consequences of plague. The friar Antero Maria di San Bonaventura wrote immediately after the 1656-57 plague in Genoa:
“What would the world be, if God did not sometimes touch it with the plague? How could he feed so many people? God would have to create new worlds, merely destined to provision this one […]. Genoa had grown so much that it no longer seemed a big city, but an anthill. You could neither take a walk without knocking into one another, nor was it possible to pray in church on account of the multitude of the poor […]. Thus it is necessary to confess that the contagion is the effect of divine providence, for the good governance of the universe”.[x]
While it seems certain that the marked socio-economic gradient of plague mortality was partly due to the action of health institutions, there is no clear evidence that officials were actively trying to kill the poor by infection. Sometimes, the anti-poor behaviour of the elites might have backfired. Our initial research on the 1630 epidemic in the Italian city of Carmagnola suggests that while whole poor households were more prone to being interned in the lazzaretto for isolation at the mere suspicion of plague, this might have reduced, not increased, their individual risk of death compared to richer strata. Possibly, this was the combined result of effective isolation of the diseased, assured provisioning of victuals, basic care, and enforced within-household distancing.[xi]
Different health treatment reserved for rich and poor, and economic elites making wrong, self-harming decisions: it would be nice if, occasionally, we learned something from history!
[i] Alfani, G. and T. Murphy. “Plague and Lethal Epidemics in the Pre-Industrial World.” Journal of Economic History 77 (1), 2017, 314–343.
[ii] Clark, G. A Farewell to Alms: A Brief Economic History of the World. Princeton: Princeton University Press, 2007; Broadberry, S. Accounting for the Great Divergence, LSE Economic History Working Papers No. 184, 2013.
[iii] Alfani, G. “Plague in Seventeenth Century Europe and the Decline of Italy: An Epidemiological Hypothesis.” European Review of Economic History 17 (3), 2013, 408–430; Alfani, G. and M. Percoco. “Plague and Long-Term Development: the Lasting Effects of the 1629-30 Epidemic on the Italian Cities.” Economic History Review 72 (4), 2019, 1175–1201.
[v] Cipolla, C.M. Public Health and the Medical Profession in the Renaissance. Cambridge: CUP, 1976; Cohn, S.K. Cultures of Plague. Medical Thought at the End of the Renaissance. Oxford: OUP, 2009; Alfani, G. Calamities and the Economy in Renaissance Italy. The Grand Tour of the Horsemen of the Apocalypse. Basingstoke: Palgrave, 2013.
[vi] Alfani, G. Calamities and the Economy, cit.; Cipolla, C.M. Public Health and the Medical Profession, cit.; Henderson, J. Florence Under Siege: Surviving Plague in an Early Modern City. New Haven: Yale University Press, 2019.
[vii] Cipolla, C.M. Public Health and the Medical Profession, cit.
[viii] Slack, P. The Impact of Plague in Tudor and Stuart England. London: Routledge, 1985, 201–26.
[ix] Pullan, B. “Plague and Perceptions of the Poor in Early Modern Italy.” In T. Ranger and P. Slack (eds.), Epidemics and Ideas. Essays on the Historical Perception of Pestilence. Cambridge: CUP, 1992, 101–23; Alfani, G. Calamities and the Economy, cit.