The Growth Pattern of British Children, 1850-1975

By Pei Gao (NYU Shanghai) & Eric B. Schneider (LSE)

The full article from this blog is forthcoming in the Economic History Review and is currently available on Early View.

 

HMS Indefatigable with HMS Diadem (1898) in the Gulf of St. Lawrence 1901. Available at Wikimedia Commons.

Since the mid-nineteenth century, the average height of adult British men has increased by 11 centimetres. This increase in final height reflects improvements in living standards and health, and it provides insights into the growth pattern of children, which has been comparatively neglected. Child growth is very sensitive to economic and social conditions: children with limited nutrition, or who suffer from chronic disease, grow more slowly than healthy children. Thus, to achieve such a large increase in adult height, health conditions must have improved dramatically for children since the mid-nineteenth century.

Our paper seeks to understand how child growth changed over time as adult height was increasing. Child growth follows the typical pattern shown in Figure 1. The graph on the left shows the height-by-age curve for modern healthy children, and the graph on the right shows the change in height at each age (height velocity). We look at three dimensions of the growth pattern of children: the final adult height that children achieve, which is what historians have predominantly focused on to date; the age at which growth velocity peaks during puberty; and the overall speed of maturation, which affects the velocity of growth across all ages and the length of the growing years.

 

Figure 1. Weights and heights for boys who trained on HMS Indefatigable, 1860s-1990s.

Source: as per article

 

To understand how growth changed over time, we collected information about 11,548 boys who were admitted to the training ship Indefatigable from the 1860s to 1990s (Figure 2).  This ship was located on the River Mersey near Liverpool for much of its history and it trained boys for careers in the merchant marine and navy. Crucially, the administrators recorded the boys’ heights and weights at admission and discharge, allowing us to calculate growth velocities for each individual.
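
To make the velocity calculation concrete, here is a minimal sketch, assuming hypothetical field names for a ledger entry (height in centimetres at admission and discharge, plus the two dates); the example figures are invented rather than drawn from the ship's records.

```python
from datetime import date

def height_velocity(h_in_cm: float, h_out_cm: float,
                    admitted: date, discharged: date) -> float:
    """Annualised growth velocity (cm/year) between two measurements."""
    years = (discharged - admitted).days / 365.25
    if years <= 0:
        raise ValueError("discharge must postdate admission")
    return (h_out_cm - h_in_cm) / years

# Invented example: a boy admitted at 148.0 cm who is discharged
# eighteen months later at 154.3 cm.
v = height_velocity(148.0, 154.3, date(1921, 3, 1), date(1922, 9, 1))
print(f"{v:.1f} cm/year")  # -> 4.2 cm/year
```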

 

Figure 2. HMS Indefatigable

Source: By permission, the Indefatigable Old Boys Society

 

We trace the boys’ heights over time (grouping them by birth decade) and find that they grew most rapidly during the interwar period. The most novel finding, however, is that boys born in the nineteenth century show little evidence of the strong pubertal growth spurt that healthy boys experience today: their growth velocity was relatively flat across puberty. Starting with the 1910 birth decade, boys began experiencing more rapid pubertal growth, similar to the right-hand graph in Figure 1. The appearance of rapid pubertal growth is a product of two factors: an increase in the speed of maturation, which meant that boys grew more rapidly during puberty than before; and a decrease in the variation in the timing of the pubertal growth spurt, which meant that boys experienced their pubertal growth at more similar ages.

 

Figure 3. Adjusted height velocity for boys who trained on HMS Indefatigable.

Source: as per article

 

This sudden change in the growth pattern of children is a new finding, and one not predicted by the historical or medical literature. In the paper, we show that it cannot be explained by improvements in living standards on the ship and that it is robust to a number of potential alternative explanations. We argue that reductions in disease exposure and illness were likely the biggest contributing factor. Infant mortality rates, an indicator of chronic illness in childhood, declined only after 1900 in England and Wales, so a decline in childhood illness could have mattered. In addition, although general levels of nutrition were more than adequate by the turn of the twentieth century, the introduction of free school meals and the milk-in-schools programme in the early twentieth century likely also helped ensure that children had access to the protein and nutrients necessary for growth.

Our findings matter for two reasons. First, they help complete the fragmented picture in the existing historical literature on how children’s growth changed over time. Second, they highlight the importance of the 1910s and the interwar period as a turning point in child growth. Existing research on adult heights has already shown that the interwar period was a period of rapid growth for children, but our results further explain how and why child growth accelerated in that period.

 


Pei Gao

p.gao@nyu.edu

 

Eric B. Schneider

e.b.schneider@lse.ac.uk

Twitter: @ericbschneider

 

 

Overcoming the Egyptian cotton crisis in the interwar period: the role of irrigation, drainage, new seeds and access to credit

By Ulas Karakoc (TOBB ETU, Ankara & Humboldt University Berlin) & Laura Panza (University of Melbourne)

The full article from this blog is forthcoming in the Economic History Review.

 

A study of diversity in Egyptian cotton, 1909. Available at Wikimedia Commons.

By 1914, Egypt’s large agricultural sector had been hit hard by declining yields in cotton production. Egypt at the time was a textbook case of export-led development. The decline in cotton yields (the ‘cotton crisis’) was coupled with two other constraints: land scarcity and high population density. Nonetheless, Egyptian agriculture was able to overcome this crisis in the interwar period, despite unfavourable price shocks. The stagnation of output between 1900 and the 1920s contrasts clearly with the subsequent recovery (Figure 1). In this paper, we examine empirically how this happened, focusing on the role of government investment in irrigation infrastructure, farmers’ crop choices (intra-cotton shifts), and access to credit.

 

Figure 1: Cotton output, acreage and yields, 1895-1940

Source: Annuaire Statistique (various issues)

 

The decline in yields was caused by expanded irrigation without sufficient drainage, which raised the water table, increased salinity, and encouraged pest attacks on cotton (Radwan, 1974; Owen, 1968; Richards, 1982). The government introduced an extensive public works programme to reverse soil degradation and restore production. Simultaneously, Egypt’s farmers changed the type of cotton they were cultivating, shifting from the long-staple, low-yielding Sakellaridis to the medium-short-staple, high-yielding Achmouni, a choice which reflected income-maximizing preferences (Goldberg, 2004 and 2006). Another important feature of the Egyptian economy between the 1920s and 1940s was the expansion of credit facilities and the connected increase in farmers’ access to agricultural loans. The interwar years witnessed the establishment of cooperatives to facilitate small landowners’ access to inputs (Issawi, 1954), and the foundation of the Crédit Agricole in 1931, offering small loans (Eshag and Kamal, 1967). These credit institutions coexisted with a number of mortgage banks, among which the Crédit Foncier was the largest, servicing predominantly large owners. Figure 2 illustrates the average annual real value of Crédit Foncier land mortgages in 1,000 Egyptian pounds (1926-1939).

 

Figure 2: Average annual real value of Crédit Foncier land mortgages in 1,000 Egyptian pounds (1926-1939)

Source: Annuaire Statistique (various issues)

 

Our work investigates the extent to which these factors contributed to the recovery of the raw cotton industry. Specifically: to what extent can intra-cotton shifts explain changes in total output? How did the increase in public works, mainly investment in the canal and drainage network, help boost production? And what role did differential access to credit play? To answer these questions, we construct a new dataset by exploiting official statistics (Annuaire Statistique de l’Egypte) covering 11 provinces and 17 years during 1923-1939. These data allow us to provide the first empirical estimates of Egyptian cotton output at the province level.
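
A two-way fixed-effects panel regression is a natural way to exploit data of this shape. The sketch below is only an illustration of that setup, not the specification estimated in the article; the file name and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per province-year, 1923-1939.
df = pd.read_csv("egypt_cotton_panel.csv")
# columns: province, year, cotton_output, achmouni_share,
#          drained_area, real_credit

# Province and year fixed effects absorb time-invariant provincial
# characteristics and shocks common to all provinces in a given year.
fe = smf.ols(
    "cotton_output ~ achmouni_share + drained_area + real_credit"
    " + C(province) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["province"]})

print(fe.summary())
```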

Access to finance and improved seeds significantly increased cotton output. The declining price premium of Sakellaridis led to a large-scale switch to Achmouni, which indicates that farmers responded to market incentives in their cultivation choices. Our study shows that cultivators’ response to market changes was fundamental to the recovery of the cotton sector. Access to credit was also a strong determinant of cotton output, especially to the benefit of large landowners. That access to credit plays a vital role in enabling the adoption of productivity-enhancing innovations is consonant with the literature on the Green Revolution (Glaeser, 2010).

Our results show that the expansion of irrigation and drainage did not have a direct effect on output. However, we cannot rule out completely the role played by improved irrigation infrastructure because we do not observe investment in private drains, so we cannot assess complementarities between private and public drainage. Further, we find some evidence of a cumulative effect of drainage pipes, two to three years after installation.

The structure of land ownership, specifically the presence of large landowners, contributed to output recovery. Thus, despite institutional innovations designed to give small farmers better access to credit, large landowners benefitted disproportionally from credit availability. This is not a surprising finding: extreme inequality of land holdings had been a central feature of the country’s agricultural system for centuries.

 

References

Eshag, Eprime, and M. A. Kamal. “A Note on the Reform of the Rural Credit System in U.A.R (Egypt).” Bulletin of the Oxford University Institute of Economics & Statistics 29, no. 2 (1967): 95–107. https://doi.org/10.1111/j.1468-0084.1967.mp29002001.x.

Glaeser, Bernhard. The Green Revolution Revisited: Critique and Alternatives. Taylor & Francis, 2010.

Goldberg, Ellis. “Historiography of Crisis in the Egyptian Political Economy.” In Middle Eastern Historiographies: Narrating the Twentieth Century, edited by I. Gershoni, Amy Singer, and Hakan Erdem, 183–207. University of Washington Press, 2006.

———. Trade, Reputation, and Child Labor in Twentieth-Century Egypt. Palgrave Macmillan, 2004.

Issawi, Charles. Egypt at Mid-Century. Oxford University Press, 1954.

Owen, Roger. “Agricultural Production in Historical Perspective: A Case Study of the Period 1890-1939.” In Egypt Since the Revolution, edited by P. Vatikiotis, 40–65, 1968.

Radwan, Samir. Capital Formation in Egyptian Industry and Agriculture, 1882-1967. Ithaca Press, 1974.

Richards, Alan. Egypt’s Agricultural Development, 1800-1980: Technical and Social Change. Westview Press, 1982.

 


Ulas Karakoc

ulaslar@gmail.com

 

Laura Panza

lpanza@unimelb.edu.au

 

 

 

 

 

Patents and Invention in Jamaica and the British Atlantic before 1857

By Aaron Graham (Oxford University)

This article will be published in the Economic History Review and is currently available on Early View.

 

Cardiff Hall, St. Ann's.
A Picturesque Tour of the Island of Jamaica, by James Hakewill (1825). Available at Wikimedia Commons.

For a long time the plantation colonies of the Americas were seen as backward and undeveloped, dependent for their wealth on the grinding enslavement of hundreds of thousands of people.  This was only part of the story, albeit a major one. Sugar, coffee, cotton, tobacco and indigo plantations were also some of the largest and most complex economic enterprises of the early industrial revolution, exceeding many textile factories in size and relying upon sophisticated technologies for the processing of raw materials.  My article looks at the patent system of Jamaica and the British Atlantic which supported this system, arguing that it facilitated a process of transatlantic invention, innovation and technological diffusion.

The first key finding concerns the nature of the patent system in Jamaica.  As in British America, patents were granted by colonial legislatures rather than by the Crown, and besides merely registering the proprietary right to an invention they often included further powers, to facilitate the process of licensing and diffusion.  They were therefore more akin to industrial subsidies than modern patents.  The corollary was that inventors had to demonstrate not just novelty but practicality and utility; in 1786, when two inventors competed to patent the same invention, the prize went to the one who provided a successful demonstration (Figure 1).   As a result, the bar was higher, and only about sixty patents were passed in Jamaica between 1664 and 1857, compared to the many thousands in Britain and the United States.

 

Figure 1. ‘Elevation & Plan of an Improved SUGAR MILL by Edward Woollery Esq of Jamaica’

Source: Bryan Edwards, The History, Civil and Commercial, of the British Colonies of the West Indies (London, 1794).

 

However, the second key finding is that this higher bar still left Jamaica one of the centres of colonial technological innovation before 1770, along with Barbados and South Carolina; together the three colonies accounted for about two-thirds of the patents passed in that period.  All three were successful plantation colonies, where planters earned large amounts of money and had both the incentive and the means to invest heavily in technological innovations intended to improve efficiency and profits.  Patenting peaked in Jamaica between the 1760s and 1780s, as the island adapted to sudden economic change; it was part of a package of measures that included opening up new lands, experimenting with new cane varieties, engaging in closer accounting, importing more slaves and developing new ways of working them harder.

A further finding of the article is that the English and Jamaican patent systems were complementary until 1852.  Inventors in Britain could purchase an English patent with a ‘colonial clause’ extending it to colonial territories, but a Jamaican patent offered them additional powers and flexibility as they brought their inventions to Jamaica and adapted them to local conditions.  Inventors in Jamaica could obtain a local patent to protect their invention while they perfected it and prepared to market it in Britain.  The article shows how inventors used various strategies within the two systems to help turn their inventions into viable technologies.

Finally, the colonial patents operated alongside a system of grants, premiums and prizes operated by the Jamaican Assembly, which helped to support innovation by plugging the gaps left by the patent system.  Inventors who felt that their designs were too easily pirated, or that they themselves lacked the capacity to develop them properly, could ask for a grant instead that recompensed them for the costs of invention and made the new technology widely available.  Like the imperial and colonial patents, the grants were part of the strategies used to promote invention.

Indeed, sometimes the Assembly stepped in directly.  In 1799, Jean Baptiste Brouet asked the House for a patent for a machine for curing coffee.  The committee agreed that the invention was novel, useful and practical, ‘but as the petitioner has not been naturalised and is totally unable to pay the fees for a private bill’, they suggested granting him £350 instead, ‘as a full reward for his invention; [and] the machines constructed according to the model whereof may then be used by any person desirous of the same, without any license from or fee paid to the petitioner’.

The article therefore argues that Jamaican patents were part of a wider transatlantic system that facilitated invention, innovation and technological diffusion in support of the plantation economy and slave society.

 


 

Aaron Graham

aaron.graham@history.ox.ac.uk

Unequal access to food during the nutritional transition: evidence from Mediterranean Spain

by Francisco J. Medina-Albaladejo & Salvador Calatayud (Universitat de València).

This article is forthcoming in the Economic History Review.

 

Figure 1 – General pathology ward, Hospital General de Valencia (Spain), 1949. Source: Consejo General de Colegios Médicos de España. Banco de imágenes de la medicina española. Real Academia Nacional de Medicina de España. Available here.

Over the last century, European historiography has debated whether industrialisation brought about an improvement in working-class living standards.  Multiple demographic, economic, anthropometric and wellbeing indicators have been examined in this regard, but it was Eric Hobsbawm (1957) who, in the late 1950s, incorporated food consumption patterns into the analysis.

Between the mid-19th century and the first half of the 20th, the diet of European populations underwent radical changes. Caloric intake increased significantly, and cereals were to a large extent replaced by animal proteins and fats, following substantial increases in the consumption of meat, milk, eggs and fish. This transformation was referred to by Popkin (1993) as the ‘nutritional transition’.

These dietary changes were driven, inter alia, by the evolution of income levels, which raises the possibility that significant inequalities between different social groups ensued. Dietary inequalities between social groups are a key component in the analysis of inequality and living standards; they directly affect mortality, life expectancy, and morbidity. However, this hypothesis remains unproven, as historians are still searching for adequate sources and methods with which to measure the effects of dietary changes on living standards.

This study contributes to the debate by analysing a relatively untapped source: hospital diets. We have analysed the diet of psychiatric patients and members of staff in the main hospital of the city of Valencia (Spain) between 1852 and 1923. The diet of patients depended on their social status and the amounts they paid for their upkeep. ‘Poor psychiatric patients’ and abandoned children, who paid no fee, were fed according to hospital regulations, whereas ‘well-off psychiatric patients’ paid a daily fee in exchange for a richer and more varied diet. There were also differences among members of staff, with nuns receiving a richer diet than other personnel (launderers, nurses and wet-nurses). We think that our source broadly reflects dietary patterns of the Spanish population and the effect of income levels thereon.

Figure 2 illustrates some of these differences in terms of animal-based caloric intake in each of the groups under study. Three population groups can be clearly distinguished: ‘well-off psychiatric patients’ and nuns, whose diet already presented some of the features of the nutritional transition by the mid-19th century, including fewer cereals and a meat-rich diet, as well as the inclusion of new products such as olive oil, milk, eggs and fish; hospital staff, whose diet was rich in calories, to compensate for their demanding jobs, but still traditional in structure, being largely based on cereals, legumes, meat and wine; and, finally, ‘poor psychiatric patients’ and abandoned children, whose diet was poorer and had, by the 1920s, barely joined the trends that characterised the nutritional transition.

 

Figure 2. Percentage of animal calories in the daily average diet by population groups in the Hospital General de Valencia, 1852-1923 (%). Source: as per original article.
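
For readers who want to see how a share of this kind is computed, the toy example below uses invented daily rations and caloric values; the article’s own food lists and conversion factors are not reproduced here.

```python
# kcal per day from each foodstuff, flagged animal/vegetable (invented numbers)
daily_kcal = {
    "bread":   (1200, False),
    "legumes": (300,  False),
    "wine":    (200,  False),
    "meat":    (350,  True),
    "milk":    (150,  True),
    "eggs":    (80,   True),
}

total_kcal = sum(kcal for kcal, _ in daily_kcal.values())
animal_kcal = sum(kcal for kcal, is_animal in daily_kcal.values() if is_animal)
print(f"animal calories: {100 * animal_kcal / total_kcal:.1f}% "
      f"of {total_kcal} kcal/day")
# -> animal calories: 25.4% of 2280 kcal/day
```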

 

In conclusion, the nutritional transition was not a homogeneous process affecting all diets at the same time or at the same pace. On the contrary, it was a process marked by social difference, and the progress of dietary change was largely determined by social factors. By the mid-19th century, the diet structure of well-to-do social groups resembled diets more characteristic of the 1930s, while less favoured and intermediate social groups had to wait until the early 20th century before they could incorporate new foodstuffs into their diet. As this sequence clearly indicates, less favoured social groups always lagged behind.

 

References

Medina-Albaladejo, F. J. and Calatayud, S., “Unequal access to food during the nutritional transition: evidence from Mediterranean Spain”, Economic History Review, (forthcoming).

Hobsbawm, E. J., “The British Standard of Living, 1790-1850”, Economic History Review, 2nd ser., X (1957), pp. 46-68.

Popkin B. M., “Nutritional Patterns and Transitions”, Population and Development Review, 19, 1 (1993), pp. 138-157.

Economic History Review – An introduction to the history of infectious diseases, epidemics and the early phases of the long-run decline in mortality

by Leigh Shaw-Taylor.

Below is the abstract for the Economic History Review virtual issue, Epidemics, Diseases and Mortality in Economic History.

The full introduction and the virtual issue are both available online for free for a limited time.

 

This article, written during the COVID-19 epidemic, provides a general introduction to the long-term history of infectious diseases, epidemics and the early phases of the spectacular long-term improvements in life expectancy since 1750, primarily with reference to English history. The story is a fundamentally optimistic one. In 2019 global life expectancy was approaching 73 years. In 1800 it was probably about 30. To understand the origins of this transition, we have to look at the historical sequence by which so many causes of premature death have been vanquished over time. In England that story begins much earlier than often supposed, in the years around 1600.  The first two ‘victories’ were over famine and plague. However, economic changes with negative influences on mortality meant that, despite this, life expectancies were either falling or stable between the late sixteenth and mid eighteenth centuries. The late eighteenth and early nineteenth century saw major declines in deaths from smallpox, malaria and typhus and the beginnings of the long-run increases in life expectancy. The period also saw urban areas become capable of demographic growth without a constant stream of migrants from the countryside: a necessary precondition for the global urbanization of the last two centuries and for modern economic growth. Since 1840 the highest national life expectancy globally has increased by three years in every decade.

Pandemics and Institutions: Lessons from Plague

by Guido Alfani (Bocconi University, Milan) & Tommy Murphy (Universidad de San Andrés, Buenos Aires)

This blog forms part E in the EHS series: The Long View on Epidemics, Disease and Public Health: Research from Economic History


 

 

The plague of Florence in 1348, as described in Boccaccio’s Decameron. Available at the Wellcome Library.

In a recent article[i] we reviewed research on preindustrial epidemics. We focused on large-scale, lethal events: those that have a deeper and more long-lasting impact on the economy and society, thereby producing the historical documentation that allows for systematic study. Almost all these lethal pandemics were caused by plague: from the “Justinian’s plague” (540-41) and the Black Death (1347-52) to the last great European plagues of the seventeenth century (1623-32 and 1647-57). These epidemics were devastating. The Black Death killed between 35 and 60 per cent of the population of Europe and the Mediterranean, approximately 50 million victims.

These epidemics also had large-scale and persistent consequences. The Black Death might have positively influenced the development of Europe, even playing a role in the Great Divergence.[ii] Conversely, it is arguable that the seventeenth-century plagues in Southern Europe (especially Italy) precipitated the Little Divergence.[iii] Clearly, epidemics can have asymmetric economic effects. The Black Death, for example, had negative long-term consequences for relatively under-populated areas of Europe, such as Spain or Ireland.[iv] More generally, the effects of an epidemic depend upon the context in which it happens. Below we focus on how institutions shaped the spread and the consequences of plagues.

 

Preindustrial epidemics and institutions

In preindustrial times, as today, institutions played a crucial role in determining the final intensity of epidemics. When the Black Death appeared, European societies were unprepared for the threat. But when it became apparent that plague was a recurrent scourge, institutional adaptation commenced, typical of human reaction to a changing biological environment. From the late fourteenth century, permanent health boards were established, able to take quicker action than the ad hoc commissions created during the emergency of 1348. These boards constantly monitored the international situation and provided the early warning necessary for implementing measures to contain epidemics.[v] From the late fourteenth century, quarantine procedures for suspected cases were developed, and in 1423 Venice built the first permanent lazzaretto (isolation hospital) on a lagoon island. By the early sixteenth century, at least in Italy, central and local governments had implemented a broad range of anti-plague policies, including health controls at river and sea harbours, mountain passes, and political boundaries. Within each Italian state, infected communities or territories were isolated, and human contact was limited by quarantines.[vi] These, and other instruments developed against the plague, are the direct ancestors of those currently employed to contain Covid-19. However, such policies were not always successful: in 1629, for example, plague entered Northern Italy as infected armies from France and Germany arrived to fight in the War of the Mantuan Succession. Nobody has ever been able to quarantine an enemy army.

It is no accident that these policies were first developed in Italian trading cities which, because of their commercial networks, had good reason to fear infection. Such policies were quickly imitated in Spain and France.[vii] England, by contrast, “was unlike many other European countries in having no public precautions against plague at all before 1518”.[viii] Even in the seventeenth century, England was still trying to introduce institutions that had long since been consolidated in Mediterranean Europe.

The development of institutions and procedures to fight plague has been extensively researched. Other aspects of preindustrial epidemics are less well known: for example, how institutions tended to shift mortality towards specific socio-economic groups, especially the poor. Once doctors and health officials noticed that plague mortality was higher in the poorest parts of the city, they began to see the poor themselves as responsible for the spread of the infection. As a result, during the early modern period their presence in cities was increasingly resented,[ix] and, as a precautionary measure, vagrants and beggars were expelled. The death of many poor people was even regarded by some as one of the few positive consequences of plague. The friar Antero Maria di San Bonaventura wrote immediately after the 1656-57 plague in Genoa:

“What would the world be, if God did not sometimes touch it with the plague? How could he feed so many people? God would have to create new worlds, merely destined to provision this one […]. Genoa had grown so much that it no longer seemed a big city, but an anthill. You could neither take a walk without knocking into one another, nor was it possible to pray in church on account of the multitude of the poor […]. Thus it is necessary to confess that the contagion is the effect of divine providence, for the good governance of the universe”.[x]

 

While it seems certain that the marked socio-economic gradient of plague mortality was partly due to the action of health institutions, there is no clear evidence that officials were actively trying to kill the poor by infection. Sometimes the anti-poor behaviour of the elites might even have backfired. Our initial research on the 1630 epidemic in the Italian city of Carmagnola suggests that while poor households were more likely to be interned wholesale in the lazzaretto at the mere suspicion of plague, this might have reduced, not increased, their individual risk of death compared to richer strata. Possibly, this was the combined result of effective isolation of the diseased, assured provisioning of victuals, basic care, and forced within-household distancing.[xi]

Different health treatment for rich and poor, and economic elites making wrong and self-harming decisions: it would be nice if, occasionally, we learned something from history!

 

[i] Alfani, G. and T. Murphy. “Plague and Lethal Epidemics in the Pre-Industrial World.” Journal of Economic History 77 (1), 2017, 314–343.

[ii] Clark, G. A Farewell to Alms: A Brief Economic History of the World. Princeton: Princeton University Press, 2007; Broadberry, S. Accounting for the Great Divergence, LSE Economic History Working Papers No. 184, 2013.

[iii] Alfani, G. “Plague in Seventeenth Century Europe and the Decline of Italy: An Epidemiological Hypothesis.” European Review of Economic History 17 (3), 2013, 408–430; Alfani, G. and M. Percoco. “Plague and Long-Term Development: the Lasting Effects of the 1629-30 Epidemic on the Italian Cities.” Economic History Review 72 (4), 2019, 1175–1201.

[iv] For a recent synthesis of the asymmetric economic consequences of plague, Alfani, G. Pandemics and asymmetric shocks: Lessons from the history of plagues, VoxEu, 9 April 2020, https://voxeu.org/article/pandemics-and-asymmetric-shocks

[v] Cipolla, C.M. Public Health and the Medical Profession in the Renaissance. Cambridge: CUP, 1976; Cohn, S.K. Cultures of Plague: Medical Thought at the End of the Renaissance. Oxford: OUP, 2009; Alfani, G. Calamities and the Economy in Renaissance Italy: The Grand Tour of the Horsemen of the Apocalypse. Basingstoke: Palgrave, 2013.

[vi] Alfani, G. Calamities and the Economy, cit.; Cipolla, C.M., Public Health and the Medical Profession, cit.; Henderson, J., Florence Under Siege: Surviving Plague in an Early Modern City, Yale University Press, 2019.

[vii] Cipolla, C.M., Public Health and the Medical Profession, cit.

[viii] Slack, Paul. The Impact of Plague in Tudor and Stuart England. London: Routledge, 1985, 201–26.

[ix] Pullan, B. “Plague and Perceptions of the Poor in Early Modern Italy.” In T. Ranger and P. Slack (eds.), Epidemics and Ideas. Essays on the Historical Perception of Pestilence. Cambridge: CUP, 1992, 101-23; Alfani, G., Calamities and the Economy.

[x] Alfani, Calamities and the Economy, p. 106.

[xi] Alfani, G., M. Bonetti and M. Fochesato, Pandemics and socio-economic status. Evidence from the plague of 1630 in northern Italy, Mimeo.


 

Guido Alfani – guido.alfani@unibocconi.it

Tommy Murphy – tmurphy@udesa.edu.ar

The Long View on Epidemics, Disease and Public Health: Research from Economic History, Part A

This piece is the result of a collaboration between the Economic History Review, the Journal of Economic History, Explorations in Economic History and the European Review of Economic History. More details and special thanks below. Part B can be found here

 

Exhibit depicting a miniature from a 14th-century Belgian manuscript at the Diaspora Museum, Tel Aviv. Available at Wikimedia Commons.

As the world grapples with a pandemic, informed views based on facts and evidence have become all the more important. Economic history is a uniquely well-suited discipline to provide insights into the costs and consequences of rare events, such as pandemics, as it combines the tools of an economist with the long perspective and attention to context of historians. The editors of the main journals in economic history have thus gathered a selection of recently published articles on epidemics, disease and public health, generously made available by publishers to the public, free of charge, so that we may continue to learn from the decisions of humans and policy makers confronting earlier episodes of widespread disease and pandemics.

Emergency hospital during influenza epidemic, Camp Funston, Kansas. Available at Wikimedia Commons.

Generations of economic historians have studied disease and its impact on societies across history. However, as the discipline has continued to evolve with improvements in both data and methods, researchers have uncovered new evidence about episodes from the distant past, such as the Black Death, as well as more recent global pandemics, such as the Spanish Influenza of 1918. We begin with a recent overview of scholarship on the history of premodern epidemics, and group the remaining articles thematically into two short reading lists. The first consists of research exploring the impact of diseases in the most direct sense: the patterns of mortality they produce. The second group of articles explores the longer-term consequences of diseases for people’s health later in life.

Plague doctor. Available at Wellcome Collection.

 

Two men discovering a dead woman in the street during the Great Plague of London, 1665. Available at Wellcome Collection.

 

Patterns of Mortality

Emblems of mortality: death seizing all ranks and degrees of people, 1789. Available at Wikimedia Commons.

The rich and complex body of historical work on epidemics is carefully surveyed by Guido Alfani and Tommy Murphy, who provide an excellent guide to the economic, social, and demographic impact of plagues in human history: ‘Plague and Lethal Epidemics in the Pre-Industrial World’. The Journal of Economic History 77, no. 1 (2017): 314–43. https://doi.org/10.1017/S0022050717000092.  The impact of epidemics varies over time, and few studies have shown this so clearly as the penetrating article by Neil Cummins, Morgan Kelly and Cormac Ó Gráda, who provide a finely-detailed map of how the plague evolved in 16th- and 17th-century London to reveal who was most heavily burdened by this contagion: ‘Living Standards and Plague in London, 1560–1665’. Economic History Review 69, no. 1 (2016): 3-34. https://dx.doi.org/10.1111/ehr.12098 .  Plagues shaped the history of nations and, indeed, global history, but the impact of plagues was not always as devastating as we might assume: in a classic piece of historical detective work, Ann Carlos and Frank Lewis show that mortality among native Americans in the Hudson Bay area was much lower than historians had suggested: ‘Smallpox and Native American Mortality: The 1780s Epidemic in the Hudson Bay Region’. Explorations in Economic History 49, no. 3 (2012): 277-90. https://doi.org/10.1016/j.eeh.2012.04.003

The effects of disease reflect a complex interaction of individual and social factors.  A paper by Karen Clay, Joshua Lewis and Edson Severnini explains how the combination of air pollution and influenza was particularly deadly in the 1918 epidemic: US cities that were heavy users of coal had all-age mortality rates approximately 10 per cent higher than those with lower rates of coal use: ‘Pollution, Infectious Disease, and Mortality: Evidence from the 1918 Spanish Influenza Pandemic’. The Journal of Economic History 78, no. 4 (2018): 1179–1209. https://doi.org/10.1017/S002205071800058X.  A remarkable analysis of how one of the great killers, smallpox, evolved during the 18th century is provided by Romola Davenport, Leonard Schwarz and Jeremy Boulton, who conclude that it was a change in the transmissibility of the disease itself that mattered most for its impact: ‘The Decline of Adult Smallpox in Eighteenth‐century London’. Economic History Review 64, no. 4 (2011): 1289-314. https://dx.doi.org/10.1111/j.1468-0289.2011.00599.x   The question of which sections of society bore the heaviest burden of sickness during disease outbreaks has long troubled historians and epidemiologists. Outsiders and immigrants have often been blamed for disease outbreaks. Jonathan Pritchett and Insan Tunali show that poverty and immunisation, not immigration, explain who was infected during the Yellow Fever epidemic in 1853 New Orleans: ‘Strangers’ Disease: Determinants of Yellow Fever Mortality during the New Orleans Epidemic of 1853’. Explorations in Economic History 32, no. 4 (1995): 517. https://doi.org/10.1006/exeh.1995.1022

 

The Long Run Consequences of Disease

‘Dance of Death’. Illustrations from the Nuremberg Chronicle, by Hartmann Schedel (1440-1514). Available at Wikipedia.

The way epidemics affect families is complex. John Parman wrestles with one of the most difficult issues: how parents respond to the harms caused by exposure to an epidemic. Parman shows that parents chose to concentrate resources on the children who were not affected by exposure to influenza in 1918, which reinforced the differences between their children: ‘Childhood Health and Sibling Outcomes: Nurture Reinforcing Nature during the 1918 Influenza Pandemic’. Explorations in Economic History 58 (2015): 22-43. https://doi.org/10.1016/j.eeh.2015.07.002.  Martin Saavedra addresses a related question: how did exposure to disease in early childhood affect life in the long run? Using late 19th-century census data from the US, Saavedra shows that children of immigrants who were exposed to yellow fever in the womb or early infancy did less well in later life than their peers, because they were only able to secure lower-paid employment: ‘Early-life Disease Exposure and Occupational Status: The Impact of Yellow Fever during the 19th Century’. Explorations in Economic History 64, no. C (2017): 62-81. https://doi.org/10.1016/j.eeh.2017.01.003.  One of the great advantages of historical research is its ability to reveal how the experience of disease over a lifetime generates cumulative harms. Javier Birchenall’s extraordinary paper shows how soldiers’ exposure to disease during the American Civil War increased the probability that they would contract tuberculosis later in life: ‘Airborne Diseases: Tuberculosis in the Union Army’. Explorations in Economic History 48, no. 2 (2011): 325-42. https://doi.org/10.1016/j.eeh.2011.01.004

 

“Bring Out Your Dead” A street during the Great Plague in London, 1665. Available at Wellcome Collection.

 

Patrick Wallis, Giovanni Federico & John Turner, for the Economic History Review;

Dan Bogart, Karen Clay, William Collins, for the Journal of Economic History;

Kris James Mitchener, Carola Frydman, and Marianne Wanamaker, for Explorations in Economic History;

Joan Roses, Kerstin Enflo, Christopher Meissner, for the European Review of Economic History.

 

If you wish to read further, other papers on this topic are available on the journal websites:

https://onlinelibrary.wiley.com/doi/toc/10.1111/(ISSN)1468-0289.epidemics-disease-mortality

https://www.cambridge.org/core/journals/journal-of-economic-history/free-articles-on-pandemics

https://www.journals.elsevier.com/explorations-in-economic-history/featured-articles/contagious-disease-on-economics

 

* Thanks to Leigh Shaw-Taylor, Cambridge University Press, Elsevier, Oxford University Press, and Wiley, for their advice and support.

Uncertainty and the Great Slump

by Jason Lennard (London School of Economics and Lund University)

This article has now been published in the Economic History Review and is available on Early View

Holland House library after an air raid, 1940. Available at Wikimedia Commons.

A key challenge in economic history is to understand the macroeconomics of interwar Britain. This was a time of high unemployment, depressed economic activity and significant macroeconomic volatility. Economic historians have identified a number of causes, from the demise of the old staple industries to the shortening of the working week (Richardson, 1965; Broadberry, 1986).

Yet the historiography has not explored an important source of modern economic fluctuations: uncertainty (Bloom, 2009; Jurado et al., 2015; Baker et al., 2016). A fog of uncertainty may well have hung over Britain between the wars given the volume of extraordinary events. Economically, there was the return to and break from the gold standard, the fiscal aftermath of the First World War and the slide to protection. Politically, there were snap general elections in 1923 and 1931, hung parliaments following the elections of 1923 and 1929 and national governments during the 1930s.

In new research (Lennard, forthcoming), I investigate how economic policy uncertainty affected the British economy in the interwar period. Uncertainty is a nebulous concept, so the first step was to measure it. To do so, I constructed an index based on the frequency of articles reporting uncertainty over economic policy in the Daily Mail, Guardian and The Times (Figure 1).

Figure 1. New economic policy uncertainty index for the United Kingdom, 1920-38 (average 1920-38 = 100)

Source: as per original article.

The new index is plotted in Figure 1, which shows that there was significant variation in uncertainty in the United Kingdom between 1920 and 1938. Uncertainty spiked around recurring events in the calendar such as budget announcements and general elections but also in response to more specific factors such as the General Strike, looming inter-allied debt payments and changes in the likelihood of war.
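
As a sketch of how a newspaper-based index of this kind can be assembled (the article’s exact counting and scaling choices may differ), the fragment below turns monthly article counts into an index with a 1920-38 average of 100; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical input: monthly counts per newspaper, 1920-1938.
# columns: month, newspaper, n_uncertainty_articles, n_articles
counts = pd.read_csv("uncertainty_counts.csv", parse_dates=["month"])

# Share of each paper's articles reporting economic policy uncertainty
counts["share"] = counts["n_uncertainty_articles"] / counts["n_articles"]

# Average across the three newspapers, then rescale so the sample
# (1920-38) average equals 100.
index = counts.groupby("month")["share"].mean()
index = 100 * index / index.mean()
print(index.head())
```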

The second step was to account for the macroeconomic effects of uncertainty. Using a vector autoregression, I found that uncertainty was associated with reduced credit (a ‘financial frictions effect’), fewer imports (a ‘magnification effect’), lost jobs, and lower economic activity. Overall, economic policy uncertainty accounted for more than 20 per cent of macroeconomic volatility.
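
A minimal sketch of this second step, assuming a hypothetical monthly dataset with the uncertainty index ordered first; the article’s exact variable set, lag length and identification scheme are not reproduced here.

```python
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical monthly data: the uncertainty index plus macro variables.
data = pd.read_csv("interwar_macro.csv", index_col="month", parse_dates=True)
# columns: uncertainty, credit, imports, employment, activity

res = VAR(data).fit(maxlags=12, ic="aic")  # lag length chosen by AIC

# Impulse responses to an uncertainty shock, traced over 24 months
res.irf(24).plot(impulse="uncertainty")

# Share of forecast-error variance attributable to each shock
res.fevd(24).summary()
```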

A wealth of narrative evidence backs up the significance of uncertainty shocks. At the microeconomic level, there were regular reports of disruption to a number of industries, from pianos to textiles. In the car industry, for example, Sir William Letts, chairman and managing director of Willys Overland Crossley, told shareholders at the annual general meeting that uncertainty was ‘crippling business and holding back activity and energy in our great industry’ (Guardian, 25 Feb. 1930, p. 6).

At the macroeconomic level, there were frequent descriptions of depressed employment and output. In 1920, for example, the Daily Mail (30 Dec. 1920, p. 4) reported that, ‘among the main causes of unemployment at the present moment […] is uncertainty in the business world’. In 1930 Sir William Morris wrote that Britain was ‘floundering in a sea of uncertainty […] the result being colossal unemployment’ (Daily Mail, 29 Aug. 1930, p. 8). In 1932 the Economist (30 Jan. 1932, p. 1) summarised that ‘business this year has been overshadowed by the economic and political uncertainty at home and abroad’.

In summary, uncertainty has been a forgotten, albeit incomplete, explanation for macroeconomic instability in interwar Britain. How uncertainty affected economies in other historical contexts is an important and open question for future research.

To contact the author:

Email: jason.lennard@ekh.lu.se

Twitter: @jason_lennard

References

Baker, S. R., Bloom, N., and Davis, S. J., ‘Measuring economic policy uncertainty’, Quarterly Journal of Economics, 131 (2016), pp. 1593–636.

Bloom, N., ‘The impact of uncertainty shocks’, Econometrica, 77 (2009), pp. 623–85.

Broadberry, S., The British economy between the wars: a macroeconomic survey (Oxford, 1986).

Jurado, K., Ludvigson, S. C., and Ng, S., ‘Measuring uncertainty’, American Economic Review, 105 (2015), pp. 1177–216.

Lennard, J., ‘Uncertainty and the Great Slump’, Economic History Review, (forthcoming).

Richardson, H. W., ‘Over-commitment in Britain before 1930’, Oxford Economic Papers, 17 (1965), pp. 237–62.

Tawney Lecture 2019: Slavery and Anglo-American Capitalism Revisited

by Gavin Wright (Stanford University)

This research was presented as the Tawney Lecture at the EHS Annual Conference in 2019.

It will also appear in the Economic History Review later this year.

 

Coloured lithograph of slaves picking cotton. Fort Sumter Museum Charleston. Available at Flickr.

My Tawney lecture reassessed the relationship between slavery and industrial capitalism in both Britain and the United States.  The thesis expounded by Eric Williams held that slavery and the slave trade were vital for the expansion of British industry and commerce during the 18th century but were no longer needed by the 19th.  My lecture confirmed both parts of the Williams thesis:  the 18th-century Atlantic economy was dominated by sugar, which required slave labor; but after 1815, British manufactured goods found diverse new international markets that did not need captive colonial buyers, naval protection, or slavery.  Long-distance trade became safer and cheaper, as freight rates fell, and international financial infrastructure developed.  Figure 1 (below) shows that the slave economies absorbed the majority of British cotton goods during the 18th century, but lost their centrality during the 19th, supplanted by a diverse array of global destinations.

Figure 1.

Source: see article published in the Review.

 

I argued that this formulation applies with equal force to the upstart economy across the Atlantic.  The mainland North American colonies were intimately connected to the larger slave-based imperial economy.  The northern colonies, holding relatively few slaves themselves, were nonetheless beneficiaries of the trading regime,  protected against outsiders by British naval superiority.  Between 1768 and 1772, the British West Indies were the largest single market for commodity exports from New England and the Middle Atlantic, dominating sales of wood products, fish and meat, and accounting for significant shares of whale products, grains and grain products.  The prominence of slave-based commerce explains the arresting connections reported by C. S. Wilder, associating early American universities with slavery.  Thus, part one of the Williams thesis also holds for 18th-century colonial America.

Insurgent scholars known as New Historians of Capitalism argue that slavery, specifically slave-grown cotton, was critical for the rise of the U.S. economy in the 19th century.  In contrast, I argued that although industrial capitalism needed cheap cotton, cheap cotton did not need slavery.  Unlike sugar, cotton required no large investments of fixed capital and could be cultivated efficiently at any scale, in locations that would have been settled by free farmers in the absence of slavery.  Early mainland cotton growers deployed slave labour not because of its productivity or aptness for the new crop, but because they were already slave owners, searching for profitable alternatives to tobacco, indigo, and other declining crops.  Slavery was, in effect, a ‘pre-existing condition’ for the 19th-century American South.

To be sure, U.S. cotton did indeed rise ‘on the backs of slaves’, and no cliometric counterfactual can gainsay this brute fact of history.  But it is doubtful that this brutal system served the long-run interests of textile producers in Lancashire and New England, as many of them recognized at the time.  As argued here, the slave South underperformed as a world cotton supplier, for three distinct though related reasons:  in 1807 the region  closed the African slave trade, yet failed to recruit free migrants, making labour supply inelastic; slave owners neglected transportation infrastructure, leaving large sections of potential cotton land on the margins of commercial agriculture; and because of the fixed-cost character of slavery, even large plantations aimed at self-sufficiency in foodstuffs, limiting the region’s overall degree of market specialization.  The best evidence that slavery was not essential for cotton supply is demonstrated by what happened when slavery ended. After war and emancipation, merchants and railroads flooded into the southeast, enticing previously isolated farm areas into the cotton economy.  Production in plantation areas gradually recovered, but the biggest source of new cotton came from white farmers in the Piedmont.  When the dust settled in the 1880s, India, Egypt, and slave-using Brazil had retreated from world markets, and the price of cotton in Liverpool returned to its antebellum level. See Figure 2.

Figure 2.

Source: see article published in the Review.

The New Historians of Capitalism also exaggerate the importance of the slave South for accelerated U.S. growth.  The Cotton Staple Growth hypothesis advanced by Douglass North was decisively refuted by economic historians a generation ago.  The South was not a major market for western foodstuffs and consumed only a small and declining share of northern manufactures.   International and interregional financial connections were undeniably important, but thriving capital markets in northeastern cities clearly predated the rise of cotton, and connections to slavery were remote at best. Investments in western canals and railroads were in fact larger, accentuating the expansion of commerce along East-West lines.

It would be excessive to claim that Anglo-American industrial and financial interests recognized the growing dysfunction of the slave South, and in response fostered or encouraged the antislavery campaigns that culminated in the Civil War.  A more appropriate conclusion is that because of profound changes in technologies and global economic structures, slavery — though still highly profitable to its practitioners — no longer seemed essential for the capitalist economies of the 19th-century world.

Turkey’s Experience with Economic Development since 1820

by Sevket Pamuk, University of Bogazici (Bosphorus) 

This research is part of a broader article published in the Economic History Review.

A podcast of Sevket’s Tawney lecture can be found here.

 


New Map of Turkey in Europe, Divided into its Provinces, 1801. Available at Wikimedia Commons.

The Tawney lecture, based on my recent book – Uneven centuries: economic development of Turkey since 1820, Princeton University Press, 2018 – examined the economic development of Turkey from a comparative global perspective. Using GDP per capita and other data, the book showed that Turkey’s record in economic growth and human development since 1820 has been close to the world average and a little above the average for developing countries. The early focus of the lecture was on the proximate causes (average rates of investment, below-average rates of schooling, low rates of total productivity growth, and the low technology content of production), which provide important insights into why improvements in GDP per capita were not higher. For more fundamental explanations I emphasized the role of institutions and institutional change. Since the nineteenth century, Turkey’s formal economic institutions have been influenced by international rules which did not always support economic development. Turkey’s elites also made extensive changes in formal political and economic institutions. However, these institutions provide only part of the story:  the direction of institutional change also depended on the political order and the degree of understanding between different groups and their elites. When political institutions could not manage the recurring tensions and cleavages between the different elites, economic outcomes suffered.

There are a number of ways in which my study reflects some of the key trends in the historiography in recent decades.  For example, until fairly recently, economic historians focused almost exclusively on the developed economies of western Europe, North America, and Japan. Lately, however, economic historians have been changing their focus to developing economies. Moreover, as part of this reorientation, considerable effort has been expended on constructing long-run economic series, especially GDP and GDP per capita, as well as series on health and education.  In this context, I have constructed long-run series for the area within the present-day borders of Turkey. These series rely mostly on official estimates for the period after 1923 and make use of a variety of evidence for the Ottoman era, including wages, tax revenues and foreign trade series. In common with the series for other developing countries, many of my calculations involving Turkey  are subject to larger margins of error than similar series for developed countries. Nonetheless, they provide insights into the developmental experience of Turkey and other developing countries that would not have been possible two or three decades ago. Finally, in recent years, economists and economic historians have made an important distinction between the proximate causes and the deeper determinants of economic development. While literature on the proximate causes of development focuses on investment, accumulation of inputs, technology, and productivity, discussions of the deeper causes consider the broader social, political, and institutional environment. Both sets of arguments are utilized in my book.

I argue that an interest-based explanation can address both the causes of long-run economic growth and its limits. Turkey’s formal economic institutions and economic policies underwent extensive change during the last two centuries. In each of the four historical periods I define, Turkey’s economic institutions and policies were influenced by international or global rules which were enforced either by the leading global powers or, more recently, by international agencies. Additionally, since the nineteenth century, elites in Turkey made extensive changes to formal political institutions.  In response to European military and economic advances, the Ottoman elites adopted a programme of institutional changes that mirrored European developments; this programme  continued during the twentieth century. Such fundamental  changes helped foster significant increases in per capita income as well as  major improvements in health and education.

But it is also necessary to examine how these new formal institutions interacted with the process of economic change – for example, changing social structure and variations in the distribution of power and expectations — to understand the scale and characteristics of growth that the new institutional configurations generated.

These interactions were complex. It is not easy to ascribe the outcomes created in Turkey during these two centuries to a single cause. Nonetheless, it is safe to state that in each of the four periods, the successful development of  new institutions depended on the state making use of the different powers and capacities of the various elites. More generally, economic outcomes depended closely on the nature of the political order and the degree of understanding between different groups in society and the elites that led them. However, one of the more important characteristics of Turkey’s social structure has been the recurrence of tensions and cleavages between its elites. While they often appeared to be based on culture, these tensions overlapped with competing economic interests which were, in turn, shaped by the economic institutions and policies generated by the global economic system. When political institutions could not manage these tensions well, Turkey’s economic outcomes remained close to the world average.