Since the mid-nineteenth century, the average height of adult British men has increased by 11 centimetres. This increase in final height reflects improvements in living standards and health, and provides insights into the growth pattern of children, which has been comparatively neglected. Child growth is very sensitive to economic and social conditions: children with limited nutrition, or who suffer from chronic disease, grow more slowly than healthy children. Thus, to achieve such a large increase in adult height, health conditions must have improved dramatically for children since the mid-nineteenth century.
Our paper seeks to understand how child growth changed over time as adult height was increasing. Child growth follows a typical pattern shown in Figure 1. The graph on the left shows the height-by-age curve for modern healthy children, and the graph on the right shows the change in height at each age (height velocity). We look at three dimensions of the growth pattern of children: the final adult height that children achieve, i.e. what historians have predominantly focused on to date; the timing (age) at which growth velocity peaks during puberty; and, finally, the overall speed of maturation, which affects the velocity of growth across all ages and the length of the growing years.
Figure 1. Weights and Heights for boys who trained on HMS Indefatigable, 1860s-1990s.
To understand how growth changed over time, we collected information about 11,548 boys who were admitted to the training ship Indefatigable from the 1860s to 1990s (Figure 2). This ship was located on the River Mersey near Liverpool for much of its history and it trained boys for careers in the merchant marine and navy. Crucially, the administrators recorded the boys’ heights and weights at admission and discharge, allowing us to calculate growth velocities for each individual.
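The velocity calculation implied by these records is straightforward: the height gained between admission and discharge, annualised by the time elapsed. As a purely illustrative sketch (the function name, field layout, and units are our assumptions, not the authors' actual data format), it might look like this:

```python
# Hypothetical sketch: annualised height velocity from a boy's
# admission and discharge measurements (names/units illustrative).

def height_velocity(height_in_cm, height_out_cm, age_in_yrs, age_out_yrs):
    """Return growth velocity in cm per year between two measurements."""
    dt = age_out_yrs - age_in_yrs
    if dt <= 0:
        raise ValueError("discharge age must exceed admission age")
    return (height_out_cm - height_in_cm) / dt

# A boy admitted at age 14.0 (152 cm) and discharged at 15.5 (161 cm)
v = height_velocity(152.0, 161.0, 14.0, 15.5)
print(round(v, 1))  # 6.0 cm/year
```

Averaging such individual velocities within age groups and birth decades is what produces velocity curves like the right-hand graph in Figure 1.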
Figure 2. HMS Indefatigable
We trace the boys’ heights over time (grouping them by birth decade) and find that they grew most rapidly during the interwar period. The most novel finding, however, is that for boys born in the nineteenth century there is little evidence of a strong pubertal growth spurt, unlike healthy boys today: their growth velocity was relatively flat across puberty. Starting with the 1910 birth decade, boys began experiencing more rapid pubertal growth, similar to the right-hand graph in Figure 1. The appearance of rapid pubertal growth is a product of two factors: first, an increase in the speed of maturation, which meant that boys grew more rapidly during puberty than before; and, second, a decrease in the variation in the timing of the pubertal growth spurt, which meant that boys experienced their pubertal growth at more similar ages.
Figure 3. Adjusted height-velocity for boys who trained on HMS Indefatigable.
This sudden change in the growth pattern of children is a new finding that is not predicted by the historical or medical literature. In the paper, we show that this change cannot be explained by improvements in living standards on the ship and that it is robust to a number of potential alternative explanations. We argue that reductions in disease exposure and illness were likely the biggest contributing factor. Infant mortality rates, an indicator of chronic illness in childhood, declined only after 1900 in England and Wales, so a decline in childhood illness could have mattered. In addition, although general levels of nutrition were more than adequate by the turn of the twentieth century, the introduction of free school meals and the milk-in-schools programme in the early twentieth century likely also helped ensure that children had access to key protein and nutrients necessary for growth.
Our findings matter for two reasons. First, they help complete the fragmented picture in the existing historical literature on how children’s growth changed over time. Second, they highlight the importance of the 1910s and the interwar period as a turning point in child growth. Existing research on adult heights has already shown that the interwar period was a period of rapid growth for children, but our results further explain how and why child growth accelerated in that period.
In a recent article[i] we reviewed research on preindustrial epidemics. We focused on large-scale, lethal events: those that have a deeper and more long-lasting impact on economy and society, thereby producing the historical documentation that allows for systematic study. Almost all these lethal pandemics have been caused by plague: from the “Justinian’s plague” (540-41) and the Black Death (1347-52) to the last great European plagues of the seventeenth century (1623-32 and 1647-57). These epidemics were devastating. The Black Death killed between 35 and 60 per cent of the population of Europe and the Mediterranean (approximately 50 million victims).
These epidemics also had large-scale and persistent consequences. The Black Death might have positively influenced the development of Europe, even playing a role in the Great Divergence.[ii] Conversely, it is arguable that seventeenth-century plagues in Southern Europe (especially Italy) precipitated the Little Divergence.[iii] Clearly, epidemics can have asymmetric economic effects. The Black Death, for example, had negative long-term consequences for relatively under-populated areas of Europe, such as Spain or Ireland.[iv] More generally, the effects of an epidemic depend upon the context in which it happens. Below we focus on how institutions shaped the spread and the consequences of plagues.
Preindustrial epidemics and institutions
In preindustrial times, as today, institutions played a crucial role in determining the final intensity of epidemics. When the Black Death appeared, European societies were unprepared for the threat. But, when it became apparent that plague was a recurrent scourge, institutional adaptation commenced — typical of human reaction to a changing biological environment. From the late fourteenth century, permanent health boards were established, able to take quicker action than the ad-hoc commissions created during the emergency of 1348. These boards constantly monitored the international situation and provided the early warning necessary for implementing measures to contain epidemics.[v] From the late fourteenth century, quarantine procedures for suspected cases were developed, and in 1423 Venice built the first permanent lazzaretto (isolation hospital) on a lagoon island. By the early sixteenth century, at least in Italy, central and local government had implemented a broad range of anti-plague policies, including health controls at river and sea harbours, mountain passes, and political boundaries. Within each Italian state, infected communities or territories were isolated, and human contact was limited by quarantines.[vi] These, and other instruments developed against the plague, are the direct ancestors of those currently employed to contain Covid-19. However, such policies were not always successful: in 1629, for example, plague entered Northern Italy as infected armies from France and Germany arrived to fight in the War of the Mantuan Succession. Nobody has ever been able to quarantine an enemy army.
It is no accident that these policies were first developed in Italian trading cities which, because of their commercial networks, had good reason to fear infection. Such policies were quickly imitated in Spain and France.[vii] England, however, “was unlike many other European countries in having no public precautions against plague at all before 1518”.[viii] Even in the seventeenth century, England was still trying to introduce institutions that had long since been consolidated in Mediterranean Europe.
The development of institutions and procedures to fight plague has been extensively researched. Nonetheless, other aspects of preindustrial epidemics are less well known: for example, the way institutions tended to shift mortality towards specific socio-economic groups, especially the poor. Once doctors and health officials noticed that plague mortality was higher in the poorest parts of the city, they began to see the poor themselves as being responsible for the spread of the infection. As a result, during the early modern period their presence in cities was increasingly resented,[ix] and as a precautionary measure, vagrants and beggars were expelled. The death of many poor people was even regarded by some as one of the few positive consequences of plague. The friar Antero Maria di San Bonaventura wrote immediately after the 1656-57 plague in Genoa:
“What would the world be, if God did not sometimes touch it with the plague? How could he feed so many people? God would have to create new worlds, merely destined to provision this one […]. Genoa had grown so much that it no longer seemed a big city, but an anthill. You could neither take a walk without knocking into one another, nor was it possible to pray in church on account of the multitude of the poor […]. Thus it is necessary to confess that the contagion is the effect of divine providence, for the good governance of the universe”.[x]
While it seems certain that the marked socio-economic gradient of plague mortality was partly due to the action of health institutions, there is no clear evidence that officials were actively trying to kill the poor by infection. Sometimes, the anti-poor behaviour of the elites might have backfired. Our initial research on the 1630 epidemic in the Italian city of Carmagnola suggests that while entire poor households were more likely to be interned in the lazzaretto at the mere suspicion of plague, this might have reduced, not increased, their individual risk of death compared to richer strata. Possibly, this was the combined result of effective isolation of the diseased, assured provisioning of victuals, basic care, and forced within-household distancing.[xi]
Different health treatment for rich and poor, and economic elites making wrong and self-harming decisions: it would be nice if, occasionally, we learned something from history!
[i] Alfani, G. and T. Murphy. “Plague and Lethal Epidemics in the Pre-Industrial World.” Journal of Economic History 77 (1), 2017, 314–343.
[ii] Clark, G. A Farewell to the Alms: A Brief Economic History of the World. Princeton: Princeton University Press, 2007; Broadberry, S. Accounting for the Great Divergence, LSE Economic History Working Papers No. 184, 2013.
[iii] Alfani, G. “Plague in Seventeenth Century Europe and the Decline of Italy: An Epidemiological Hypothesis.” European Review of Economic History 17 (3), 2013, 408–430; Alfani, G. and M. Percoco. “Plague and Long-Term Development: the Lasting Effects of the 1629-30 Epidemic on the Italian Cities.” Economic History Review 72 (4), 2019, 1175–1201.
[v] Cipolla, C.M. Public Health and the Medical Profession in the Renaissance. Cambridge: CUP, 1976; Cohn, S.H. Cultures of Plague: Medical Thought at the End of the Renaissance. Oxford: OUP, 2009; Alfani, G. Calamities and the Economy in Renaissance Italy: The Grand Tour of the Horsemen of the Apocalypse. Basingstoke: Palgrave, 2013.
[vi] Alfani, G. Calamities and the Economy, cit.; Cipolla, C.M, Public Health and the Medical Profession, cit.; Henderson, J., Florence Under Siege: Surviving Plague in an Early Modern City, Yale University Press, 2019.
[vii] Cipolla, C.M. Public Health and the Medical Profession, cit.
[viii] Slack, Paul. The Impact of Plague in Tudor and Stuart England. London: Routledge, 1985, 201–26.
[ix] Pullan, B. “Plague and Perceptions of the Poor in Early Modern Italy.” In T. Ranger and P. Slack (eds.), Epidemics and Ideas. Essays on the Historical Perception of Pestilence. Cambridge: CUP, 1992, 101-23; Alfani, G., Calamities and the Economy.
by Victoria Baranov (University of Melbourne), Ralph De Haas (EBRD, CEPR, and Tilburg University) and Pauline Grosjean (University of New South Wales). More information on the authors below.
The content of this article was originally published on VOX and has been published here with the authors’ consent.
Why are men three times as likely as women to die from suicide? And why do many unemployed men refuse to apply for jobs that are typically done by women? This column argues that a better understanding of masculinity norms – the rules and standards that guide and constrain men’s behavior in society – can help answer important questions like these. We present evidence from Australia on how historical circumstances have instilled strong and persistent masculine identities that continue to influence outcomes related to male health; violence, suicide, and bullying; attitudes towards homosexuals; and occupational gender segregation.
What makes a ‘real’ man? According to traditional gender norms, men ought to be self-reliant, assertive, competitive, violent when needed, and in control of their emotions (Mahalik et al., 2003). Two current debates illustrate how such masculinity norms have profound economic and social impacts. First, in many countries, men die younger than women and are consistently less healthy. Masculinity norms, especially a penchant for violence and risk taking, are an important cultural driver of this gender health gap (WHO, 2013). A second debate links masculinity norms to occupational gender segregation. Technological progress and globalization have disproportionately affected male employment. Yet, many newly unemployed men refuse to fill jobs that do not match their self-perceived gender identity (Akerlof and Kranton, 2000).
The extent to which men are expected to conform to stereotypical masculinity norms nevertheless differs across societies. This raises the question: where do masculinity norms come from? The origins of gender norms about women have been the focus of a vibrant literature (Giuliano, 2018). By contrast, the origins of norms that guide and constrain the behavior of men have received no attention in the economics literature.
In recent research, we argue that strict masculinity norms can emerge in response to highly skewed sex ratios (the number of males relative to females) which intensify competition among men (Baranov, De Haas and Grosjean, 2020). When the sex ratio is more male biased, male-male competition for scarce females is more intense. This competition can intensify violence, bullying, and intimidating behavior (e.g. bravado), which, once entrenched in local culture, continue to manifest themselves in present-day outcomes long after sex ratios have normalized. We test this hypothesis using data from a unique natural experiment: the convict colonization of Australia.
Australia as a historical experiment
To establish a causal link from sex ratios to the manifestation of masculinity norms, we exploit the convict colonization of Australia. Between 1787 and 1868, Britain transported 132,308 convict men but only 24,960 convict women to Australia. Convicts were not confined to prisons but allocated across the colonies in a highly centralized manner. This created a variegated spatial pattern in sex ratios, and consequently in local male-to-male competition, in an otherwise homogeneous setting.
Convicts and ex-convicts represented the majority of the colonial population in Australia well into the mid-19th century. Voluntary migration was limited and mainly involved men migrating in response to male-biased economic opportunities available in agriculture and, after the discovery of gold in the 1850s, mining. Because of the predominance of male convicts and migrants, biased population sex ratios endured for over a century (Figure 1).
Identifying the lasting impact of skewed sex ratios
We regress present-day manifestations of masculinity norms, including violent behavior, bullying, and stereotypically male occupational choice on historical sex ratios, collected from the first reliable census in each Australian state (see also Grosjean and Khattar, 2019). An empirical challenge is that variation in historical sex ratios could reflect unobservable characteristics. To tackle this, we instrument the historical sex ratio by the sex ratio among convicts only. This instrument is highly relevant since most of the white Australian population initially consisted of convicts. Moreover, convicts were not free to move: a centralized assignment scheme determined their location as a function of labor needs, which we control for by initial economic specialization. Throughout the analysis, we also control for time-invariant geographic and historic characteristics as well as key present-day controls (sex ratio, population, and urbanization).
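The logic of this strategy is standard two-stage least squares: the convict-only sex ratio serves as an instrument for the (potentially endogenous) historical sex ratio. The following sketch illustrates the mechanics on simulated data; all variable names, magnitudes, and the data-generating process are our assumptions for illustration, not the authors' estimates or specification.

```python
# Illustrative 2SLS sketch: instrumenting an endogenous regressor (the
# historical sex ratio) with an instrument (the convict-only sex ratio).
# All data are simulated; the true causal effect is set to 2.0.
import numpy as np

rng = np.random.default_rng(0)
n = 500

z = rng.uniform(1.0, 4.0, n)                      # instrument: convict sex ratio
u = rng.normal(0, 1, n)                           # unobserved local confounder
x = 0.8 * z + 0.5 * u + rng.normal(0, 0.3, n)     # endogenous historical sex ratio
y = 2.0 * x + 1.0 * u + rng.normal(0, 0.3, n)     # present-day outcome

def ols(X, y):
    """OLS coefficients (intercept prepended) via least squares."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# First stage: project the endogenous regressor on the instrument.
a0, a1 = ols(z, x)
x_hat = a0 + a1 * z

# Second stage: regress the outcome on the first-stage fitted values.
b0, b1 = ols(x_hat, y)
print(round(b1, 2))  # close to the true effect of 2.0

# Naive OLS of y on x is biased upward, because u drives both x and y.
_, b_ols = ols(x, y)
print(b_ols > b1)
```

In the paper's actual application the regressions would also include the controls mentioned above (initial economic specialization, geography, and present-day covariates); this sketch shows only the core instrumenting step.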
Masculinity norms among Australian men today
Using the above empirical strategy, we derive four sets of results:
1. Violence, suicide, and health
We first assess the impact of historically skewed sex ratios on present-day violence and health outcomes. Evidence suggests that men adhering to traditional masculinity norms attach a stronger stigma to mental health problems and tend to avoid health services. As a proxy for the avoidance of preventative health care we use local suicide and prostate cancer rates. Prostate cancer is often curable if treated early, but avoidance of diagnosis is a public health concern. The endorsement of strict masculinity norms is also associated with aggression, excessive drinking, and smoking.
Our estimates show that today, the rates of assault and sexual assault are higher in parts of Australia that were more male biased in the past. A one unit increase in the historical sex ratio (defined as the ratio of the number of men over the number of women) is associated with an 11 percent increase in the rate of assault and a 16 percent increase in sexual assaults. We also find strong evidence of elevated rates of male suicide, prostate cancer, and lung disease in these areas. For male suicide – the leading cause of death for Australian men under 45 – a one unit increase in the historical sex ratio is associated with a staggering 26 percent increase.
2. Occupational gender segregation
A second manifestation of male identity is occupational choice. Our results paint a striking picture. A one unit increase in the sex ratio is associated with a nearly 1 percentage point shift from the share of men employed in neutral (e.g. real estate, retail) or stereotypically female occupations (e.g. teachers, receptionists) to stereotypically male occupations (e.g. carpenters, metal workers).
3. Support for same-sex marriage
We capture the political expression of masculine identity by opposition to same-sex marriage, which we measure using voting records from the nation-wide referendum on same-sex marriage in 2017. Our results show that the share of votes in favor of marriage equality is substantially lower in areas where sex ratios were more male biased in the past. A one unit increase in the historical sex ratio is associated with a nearly 3 percentage point decrease in support for same-sex marriage. This is slightly over 6 percent of the mean.
4. Bullying at school

Lastly, we find that boys, but not girls, are more likely to be bullied at school in areas that were more male biased in the past. The magnitude of the results is considerable and in line with the magnitude of the results for assaults (measured in adults). A one unit increase in the historical sex ratio is associated with a higher likelihood of parents (teachers) reporting bullying of boys by 13.7 (5.2) percentage points. This suggests that masculinity norms are perpetuated through horizontal transmission: peer pressure, starting at a young age in the playground.
We find that historically male-biased sex ratios forged a culture of male violence, help avoidance, and self-harm that persists to the present day in Australia. While our experimental setting is unique, we believe that our findings can inform the debate about the long-term socioeconomic consequences and risks of skewed sex ratios in many developing countries such as China, India, and parts of the Middle East. In these settings, sex-selective abortion and mortality as well as the cultural relegation and seclusion of women have created societies with highly skewed sex ratios. Our results suggest that the masculinity norms that develop as a result may not only be detrimental to (future generations of) men themselves but can also have important repercussions for other groups in society, in particular women and sexual minorities.
Our findings also align with an extensive psychological and medical literature that connects traditional masculinity norms to an unwillingness among men to seek timely medical help or to engage in preventive health care and protective health measures (e.g. Himmelstein and Sanchez (2016) and Salgado et al. (2019)). This suggests that voluntary observance of health measures, such as social distancing during the COVID-19 pandemic, may be considerably lower among men who adhere to traditional masculinity norms.
Akerlof, George A., and Rachel E. Kranton (2000), Economics and Identity, Quarterly Journal of Economics 115(3), 715–753.
Giuliano, Paola (2018), Gender: A Historical Perspective, The Oxford Handbook of Women and the Economy, Ed. Susan Averett, Laura Argys and Saul Hoffman. Oxford University Press, New York.
Grosjean, Pauline, and Rose Khattar (2019), It’s Raining Men! Hallelujah? The Long-Run Consequences of Male-Biased Sex Ratios, The Review of Economic Studies, 86(2), 723–754.
Himmelstein, M.S. and D.T. Sanchez (2016), Masculinity Impediments: Internalized Masculinity Contributes to Healthcare Avoidance in Men and Women, Journal of Health Psychology, 21, 1283–1292.
Mahalik, J.R., B.D. Locke, L.H. Ludlow, M.A. Diemer, R.P.J. Scott, M. Gottfried, and G. Freitas (2003), Development of the Conformity to Masculine Norms Inventory, Psychology of Men & Masculinity, 4(1), 3–25.
Salgado, D.M., A.L. Knowlton, and B.L. Johnson (2019), Men’s Health-Risk and Protective Behaviors: The Effects of Masculinity and Masculine Norms, Psychology of Men & Masculinities, 20(2), 266–275.
WHO (2013), Review of Social Determinants and the Health Divide in the WHO European Region, World Health Organization, Regional Office for Europe, Copenhagen.
Ralph De Haas, a Dutch national, is the Director of Research at the European Bank for Reconstruction and Development (EBRD) in London. He is also a part-time Associate Professor of Finance at Tilburg University, a CEPR Research Fellow, a Fellow at the European Banking Center, a Visiting Senior Fellow at the Institute of Global Affairs at the London School of Economics and Political Science, and a Research Associate at the ZEW–Leibniz Centre for European Economic Research. Ralph earned a PhD in economics from Utrecht University and is the recipient of the 2014 Willem F. Duisenberg Fellowship Prize. He has published in the Journal of Financial Economics; Review of Financial Studies; Review of Finance; Journal of International Economics; American Economic Journal: Applied Economics; the Journal of the European Economic Association; and various other peer-reviewed journals. Ralph’s research interests include global banking, development finance and financial intermediation more broadly. He is currently working on randomized controlled trials related to financial inclusion in Morocco and Turkey.
Pauline Grosjean is a Professor in the School of Economics at UNSW. Previously at the University of San Francisco and the University of California at Berkeley, she has also worked as an Economist at the European Bank for Reconstruction and Development. She completed her PhD in economics at the Toulouse School of Economics in 2006 after graduating from the Ecole Normale Supérieure. Her research studies the historical and dynamic context of economic development. In particular, she focuses on how culture and institutions interact and shape long-term economic development and individual behavior. She has published research that studies the historical process of a wide range of factors that are crucial for economic development, including cooperation and violence, trust, gender norms, support for democracy and for market reforms, immigration, preferences for education, and conflict.
Victoria Baranov’s research explores how health, psychological factors, and norms interact with poverty and economic development. Her recent work has focused on maternal depression and its implications for the intergenerational transmission of disadvantage. Her work has been published in the American Economic Review, American Economic Journal: Applied Economics, the Journal of Health Economics and other peer-reviewed journals across multiple disciplines. Victoria received her PhD in Economics from the University of Chicago after graduating from Barnard College. She is currently a Senior Lecturer in the Economics Department at the University of Melbourne and has affiliations with the Centre for Market Design, the Life Course Centre, and the Institute of Labor Studies (IZA).
This piece is the result of a collaboration between the Economic History Review, the Journal of Economic History, Explorations in Economic History and the European Review of Economic History. More details and special thanks below. Part B can be found here
As the world grapples with a pandemic, informed views based on facts and evidence have become all the more important. Economic history is a uniquely well-suited discipline to provide insights into the costs and consequences of rare events, such as pandemics, as it combines the tools of an economist with the long perspective and attention to context of historians. The editors of the main journals in economic history have thus gathered a selection of recently published articles on epidemics, disease and public health, generously made free to access by their publishers, so that we may continue to learn from the decisions of people and policy makers confronting earlier episodes of widespread disease and pandemics.
Generations of economic historians have studied disease and its impact on societies across history. However, as the discipline has continued to evolve with improvements in both data and methods, researchers have uncovered new evidence about episodes from the distant past, such as the Black Death, as well as more recent global pandemics, such as the Spanish Influenza of 1918. We begin with a recent overview of scholarship on the history of premodern epidemics, and group the remaining articles thematically, into two short reading lists. The first consists of research exploring the impact of diseases in the most direct sense: the patterns of mortality they produce. The second group of articles explores the longer-term consequences of diseases for people’s health later in life.
Patterns of Mortality
The rich and complex body of historical work on epidemics is carefully surveyed by Guido Alfani and Tommy Murphy, who provide an excellent guide to the economic, social, and demographic impact of plagues in human history: ‘Plague and Lethal Epidemics in the Pre-Industrial World’. The Journal of Economic History 77, no. 1 (2017): 314–43. https://doi.org/10.1017/S0022050717000092. The impact of epidemics varies over time, and few studies have shown this so clearly as the penetrating article by Neil Cummins, Morgan Kelly and Cormac Ó Gráda, who provide a finely detailed map of how the plague evolved in 16th- and 17th-century London to reveal who was most heavily burdened by this contagion: ‘Living Standards and Plague in London, 1560–1665’. Economic History Review 69, no. 1 (2016): 3-34. https://dx.doi.org/10.1111/ehr.12098. Plagues shaped the history of nations and, indeed, global history, but their impact was not always as devastating as we might assume: in a classic piece of historical detective work, Ann Carlos and Frank Lewis show that mortality among Native Americans in the Hudson Bay area was much lower than historians had suggested: ‘Smallpox and Native American Mortality: The 1780s Epidemic in the Hudson Bay Region’. Explorations in Economic History 49, no. 3 (2012): 277-90. https://doi.org/10.1016/j.eeh.2012.04.003
The effects of disease reflect a complex interaction of individual and social factors. A paper by Karen Clay, Joshua Lewis and Edson Severnini explains how the combination of air pollution and influenza was particularly deadly in the 1918 epidemic, and that cities in the US which were heavy users of coal had all-age mortality rates approximately 10 per cent higher than those with lower rates of coal use: ‘Pollution, Infectious Disease, and Mortality: Evidence from the 1918 Spanish Influenza Pandemic’. The Journal of Economic History 78, no. 4 (2018): 1179–1209. https://doi.org/10.1017/S002205071800058X. A remarkable analysis of how one of the great killers, smallpox, evolved during the 18th century is provided by Romola Davenport, Leonard Schwarz and Jeremy Boulton, who concluded that it was a change in the transmissibility of the disease itself that mattered most for its impact: ‘The Decline of Adult Smallpox in Eighteenth-century London’. Economic History Review 64, no. 4 (2011): 1289-314. https://dx.doi.org/10.1111/j.1468-0289.2011.00599.x. The question of which sections of society experienced the heaviest burden of sickness during disease outbreaks has long troubled historians and epidemiologists. Outsiders and immigrants have often been blamed for such outbreaks. Jonathan Pritchett and Insan Tunali show that poverty and immunity, not immigration, explain who was infected during the Yellow Fever epidemic in 1853 New Orleans: ‘Strangers’ Disease: Determinants of Yellow Fever Mortality during the New Orleans Epidemic of 1853’. Explorations in Economic History 32, no. 4 (1995): 517. https://doi.org/10.1006/exeh.1995.1022
The Long Run Consequences of Disease
The way epidemics affect families is complex. John Parman wrestles with one of the most difficult issues – how parents respond to the harms caused by exposure to an epidemic. Parman shows that parents chose to concentrate resources on the children who were not affected by exposure to influenza in 1918, which reinforced the differences between their children: ‘Childhood Health and Sibling Outcomes: Nurture Reinforcing Nature during the 1918 Influenza Pandemic’, Explorations in Economic History 58 (2015): 22-43. https://doi.org/10.1016/j.eeh.2015.07.002. Martin Saavedra addresses a related question: how did exposure to disease in early childhood affect life in the long run? Using late 19th-century census data from the US, Saavedra shows that children of immigrants who were exposed to yellow fever in the womb or early infancy did less well in later life than their peers, because they were only able to secure lower-paid employment: ‘Early-life Disease Exposure and Occupational Status: The Impact of Yellow Fever during the 19th Century’. Explorations in Economic History 64, no. C (2017): 62-81. https://doi.org/10.1016/j.eeh.2017.01.003. One of the great advantages of historical research is its ability to reveal how the experience of disease over a lifetime generates cumulative harms. Javier Birchenall’s extraordinary paper shows how soldiers’ exposure to disease during the American Civil War increased the probability that they would contract tuberculosis later in life: ‘Airborne Diseases: Tuberculosis in the Union Army’. Explorations in Economic History 48, no. 2 (2011): 325-42. https://doi.org/10.1016/j.eeh.2011.01.004
In 1918 the Entente forces defeated the Central Powers on the Western Front. The First World War, with countless brutal battles and over 40 million casualties, had finally ended.
During the war, all governments substantially increased their national debt and promised to hand the bill to the losers. They also promised to return to the pre-war gold parity rather than inflating and devaluing their currency. Since the outcome of the war was expected to severely affect currency values, particularly for the losers, foreign exchange traders had an incentive to closely follow war events to update their beliefs on who was more likely to win.
According to Ferguson’s (1998) The Pity of War, the lost morale of the German troops — reflected in higher numbers of prisoners of war and of soldiers surrendering on the Western Front — was the ultimate reason for their defeat. Complementing this argument, Hall (2004) provided evidence that military casualties on the Western Front — the key front to finally winning the war — can help explain contemporary fluctuations in the exchange rates between belligerents’ currencies.
Although the war was finally decided in the West, historians have emphasized its global dimension and the importance of the Eastern Front for understanding its complex evolution. Imagine it is 1914. Russia has just entered the war (earlier than expected), upsetting the plans of the Central Powers to circumvent a two-front war. Events on one front affected those on the other. But did contemporary traders, like historians today, consider the Eastern Front to be of relevance?
In our forthcoming article, we provide the first empirical insights into the relative importance of the Eastern Front during the First World War from the perspective of contemporary foreign exchange traders. Building on Hall’s study, the article indicates when and to what extent military casualties from both the Western and Eastern Fronts were linked to exchange rate fluctuations during the First World War, and suggests that traders used this information as an indicator of which side was more likely to win.
To analyze the link between exchange rates and casualties, we introduce a novel dataset on Eastern Front casualties drawn from the German Reichsarchiv and the Austrian War Office. Merging it with the Western Front data employed by Hall (2004), we have been able to construct a rich dataset on war casualties for France, Britain, and Russia as well as Germany and Austria-Hungary, covering both Fronts.
Figure 1. 15,000 Russian Prisoners of war in Germany.
Using the digital archives of the Neue Zürcher Zeitung (a Swiss newspaper), we have further documented information on casualties, specifically the number of prisoners of war (Figure 1). The following quote from December 1914 makes this finding explicit:
Berlin, Dec. 31  (Wolff. Authorized) The overall number of prisoners of war (no civilian prisoners) in Germany at the end of the year is 8,138 officers and 577,875 men. This number does not include a portion of those captured on the run in Russian Poland nor any of those still in transit. The overall number is comprised of the following: French 3,159 officers and 215,905 men, including 7 generals; Russians 3,575 officers and 306,294 men, including 3 generals; British 492 officers and 18,824 men (Neue Zürcher Zeitung, 1 Jan. 1915, p. A1.).
In summary, our forthcoming article provides evidence that foreign exchange traders recognized the global dimension of the war, especially the Eastern and Western Fronts. Casualties on both Fronts were associated with exchange rate fluctuations. The number of soldiers captured on the Eastern Front affected exchange rates in the early war years. Foreign exchange traders gave additional weight to the Eastern Front during the first year of the war because Russia’s attack came as a surprise and the number of casualties was substantially higher than on the Western Front.
From autumn 1916 onwards, even though Russia had not yet left the war, our findings indicate that traders believed that the key to winning the war lay in the west. The Brusilov offensive, a massive Russian attack (from June to September 1916), had proven that the Central Powers would face substantial opposition in the East. Moreover, the Allied forces on the Western Front had started to coordinate joint offensives.
Festschriften are usually produced at or around retirement, and to celebrate long academic careers. This collection, tragically, marks the end of a foreshortened career, that of Francesca Carnevali, who died in 2013 at the age of 48.
The chapters of the book have all been written by historian colleagues and friends of Francesca. The authors come from a diverse set of academic backgrounds, including the prominent medievalist Chris Wickham and the social and cultural historian Matthew Hilton. But most of the contributors come, as did Francesca, from the broadly-defined subject of business history.
Francesca’s own work provided a broad and variegated set of concerns and approaches that enables the contributors to link her work to their own diverse areas of expertise. Thus, for example, Leslie Hannah (who supervised Francesca’s PhD and co-authored an article on banking with her) provides a new approach to the old question of the comparative performance of British banking before 1914. He stresses the paradox (at least for those who think competition is always the key to efficiency) that by any standards Britain at that time had a highly competitive banking system, yet suffered a growth ‘climacteric’. More broadly, Hannah, like Francesca herself, adheres to a broadly declinist view of British economic history, whilst clearly identifying the unsatisfactory nature of many declinist stories.
Francesca’s own work on banking contrasted Italy and Britain, and the financing of Italian small business is the concern of Alberto Rinaldi and Anna Spadavecchia’s chapter. The conclusion of this analysis emphasizes the embeddedness of financial institutions in legal, social and political conditions as well as economic circumstances, a conclusion that links to Francesca’s broadening concerns after her early work on banking. Key to this broadening was an examination of social capital and trust, as key, if problematic, concepts for understanding business behaviour.
This behaviour is examined in a variety of contexts in this book, ranging from Andrew Popp’s study of Liverpool cotton brokers and their ‘public staging of business life’ to Lucy Newton’s study (jointly authored with Francesca) of making and selling pianos in Victorian and Edwardian England. This concern with consumer goods is linked by Peter Scott and James Walker to an innovative study of how mass consumption and mass marketing, to some degree at least, blurred class demarcations in interwar Britain.
These empirical studies are complemented by more conceptually focussed chapters, by Chris Wickham on the genealogy of ‘micro-history’, by Kenneth Lipartito on the concept of social capital and its limits, and by Andrea Colli on the problems of doing comparative European history. Last, but very far from least, there is a characteristically wide-ranging and insightful chapter by Matthew Hilton on the problems of writing the economic and social history of twentieth-century Britain in the light of the recent ‘turns’ in how that history is being written.
The diversity of this book’s contents is a strength not a weakness. Business historians of almost any bent will find something interesting and important to engage with. The breadth of analytical and empirical concerns, allied with the close attention to important conceptual puzzles, makes this book a fitting reflection of, and tribute to, Francesca’s productive and well-lived life.
Here are ten reasons to know more about women’s work and read our article on ‘The gender division of labour in early modern England’. We have collected evidence about work tasks in order to quantify the differences between women’s and men’s work in the period from 1500 to 1700. This research allows us to dispel some common misconceptions.
Men did most of the work, didn’t they? This is unlikely: when both paid and unpaid work are counted, modern time-use studies show that women do the majority of work – 55% in rural areas of developing countries and 51% in modern industrial countries (UN Human Development Report 1995). There is no reason why the pattern would have been markedly different in preindustrial England.
But we know about occupational structure in the past don’t we? Documents from the medieval period onwards describe men by their occupations, but women by their marital status. As a result we know quite a lot about male occupations but very little about women’s.
But women worked in households headed by their father, husband or employer. Surely, if we know what these men did, then we know what women were doing too? Recent research undertaken by Amy Erickson, Alex Shepard and Jane Whittle shows that married women often had different occupations from their husbands. If we do not know what women did, we are missing an important part of the economy.
But we have evidence of women working for wages. It shows that around 20% of agricultural workers were women, surely this demonstrates that women’s work wasn’t as important as men’s in the wider economy? This evidence only relates to labourers paid by the day, and before 1700 most agricultural labour was not carried out by day labourers, so this isn’t a very good measure. Our article shows that women carried out a third of agricultural work tasks, not 20%.
But women mostly did domestic stuff – cooking, housework and childcare – didn’t they, and that type of work doesn’t change much across history? Women did do most cooking, housework and childcare, but our research suggests it did not take up the majority of their working time. These forms of work did change markedly over time. A third of early modern housework took place outside, and our data suggests the majority was done for other households, not as unpaid work for one’s own family.
But women only worked in a narrow range of occupations, didn’t they? Our research shows that women worked in all the major sectors of the economy, but often doing slightly different tasks from men. They undertook a third of work tasks in agriculture, around half of the work in everyday commerce and almost two thirds of work tasks in textile production. But women also did forms of work we might not expect, such as shearing sheep, dealing in second-hand iron, and droving cattle.
Women’s work was all low skilled wasn’t it? Women very rarely benefitted from formal apprenticeship in the way that men did, but that does not mean the tasks they undertook were unskilled. Women undertook many tasks, such as making lace and providing medical care, which required a great deal of skill.
But this was all in the past, so what relevance does it have now? Many gendered patterns of work are remarkably persistent over time. Analysis by the Office for National Statistics indicates that one third of the gender pay gap in modern Britain can be explained by men and women working in different occupations, and by the lower rates of pay for part-time work, which is more commonly undertaken by women than men.
So nothing ever changes …? Well, not necessarily. In fact, looking carefully at patterns of women’s work in the past reveals some noticeable shifts over time. For instance, women worked as tailors and weavers in the medieval period and in the eighteenth century, but not in the sixteenth century.
But we know why women work differently from men, particularly in preindustrial societies – isn’t it because they are less physically strong and all the child-bearing stuff? Physical strength does not explain why women did some physically taxing forms of work and not others (why they walked for miles carrying heavy loads on their heads rather than driving carts). And not all women were married or had children. Neither physical strength nor child-bearing can explain why women were excluded from tailoring between 1500 and 1650, but worked successfully and skilfully in this and other closely related crafts in other periods.
We now have data which allows us to look more carefully at these issues, but there is still much more to uncover.
Going multilateral? Financial Markets’ Access and the League of Nations Loans, 1923-8
Juan Flores (The Paul Bairoch Institute of Economic History, University of Geneva) and
Yann Decorzant (Centre Régional d’Etudes des Populations Alpines)
Abstract: Why are international financial institutions important? This article reassesses the role of the loans issued with the support of the League of Nations. These long-term loans constituted the financial basis of the League’s strategy to restore the productive basis of countries in central and eastern Europe in the aftermath of the First World War. In this article, it is argued that the League’s loans accomplished the task for which they were conceived because they allowed countries in financial distress to access capital markets. The League adopted an innovative system of funds management and monitoring that ensured the compliance of borrowing countries with its programmes. Empirical evidence is provided to show that financial markets had a positive view of the League’s role as an external, multilateral agent, solving the credibility problem of borrowing countries and allowing them to engage in economic and institutional reforms. This success was achieved despite the League’s own lack of lending resources. It is also demonstrated that this multilateral solution performed better than the bilateral arrangements adopted by other governments in eastern Europe because of its lower borrowing and transaction costs.
Review by Vincent Bignon (Banque de France, France)
Flores and Decorzant’s paper deals with the achievements of the League of Nations in helping some central and eastern European sovereign states to secure market access during the interwar years. Its success is assessed by measuring the financial performance of the loans of those countries, compared with the performance of loans issued by a control group of countries from the same region that did not receive the League’s support. The comparison of the yield at issue and the fees paid to issuing banks allows the authors to conclude that the League of Nations did a very good job in helping those countries, hence the suggestion in the title to go multilateral.
The authors argue that the loans sponsored by the League of Nations – hereafter League loans – solved a commitment issue for borrowing governments, which could not credibly signal their willingness to repay. The authors mention that the League brought financial expertise to the planning of the loan issuance and to the negotiation of contract clauses, suggesting that those countries lacked the human capital in their Treasuries and central banks. They also describe how the League’s support was accompanied by monitoring of the stabilization programme by a special League envoy.
Empirical results show that League loans led to a reduction in countries’ risk premia, relaxing the borrowing constraint, and sometimes reduced quantity rationing for countries that were unable to issue directly through prestigious private bankers. Yet the interest rates on League loans were much higher than those of comparable US bonds of the same rating, suggesting that the League did not create a free lunch.
Besides those important points, the paper matters because it deals with a major post-war macro-financial management issue: the organization of sovereign loan issuance to failed states whose administrative apparatus had been too impoverished by the war to provide basic peacetime functions, such as a stable exchange rate or a fiscal policy with effective tax collection. The League’s loans are compared with those of the IMF, but the situation also echoes the unilateral post-WW2 US Marshall Plan. The paper does not study whether the League succeeded in channelling other private funds to those countries on top of the proceeds of the League loans, nor how the funds were used to stabilize the situation.
The paper belongs to the recent economic history tradition that aims to decipher the explanations for sovereign debt repayment beyond the gunboat diplomacy explanation, to which Juan Flores had previously contributed together with Marc Flandreau. It is also inspired by the literature on institutional fixes used to signal and enforce credible commitment, suggesting that multilateral foreign fixes solved this problem. This detailed study of the financial conditions of League loans adds to our knowledge of post-WW1 stabilization plans, building on Sargent (1983) and Santaella (1993). It is also a very nice complement to the pair of papers on multilateral lending to sovereign states by Esteves and Tunçer (2016a, 2016b) that deal with nineteenth-century-style multilateralism, when the main European powers guaranteed loans to help a few states secure market access, but without founding any international organization.
But the main contribution of the paper, somewhat clouded by the comparison with the IMF, is that it leads to a questioning of the functions fulfilled by the League of Nations in the interwar political system. This bigger issue surfaces at two critical moments. The first is the choice of the control group, which focuses solely on central and eastern European countries and does not include Germany and France, even though both received external funding to stabilize their financial situations at the exact moment of the League’s loans. This raises a second issue: the self-selection of countries into the League’s loans programme. Indeed, Germany and France chose not to participate in the League’s scheme despite both needing a similar type of funding to stabilize their macro situations. The fact that they did not apply for financial assistance means either that they had the qualified staff and the state apparatus to signal their commitment to repay, or that the League’s loans came with too harsh a monitoring regime and external constraint on financial policy. It is as if the conditions attached to League loans self-selected the good-enough failed states (new states created out of the demise of the Austro-Hungarian Empire) but discouraged more powerful states from applying for the League’s assistance.
Now, if one recalls that the promise of the League of Nations was the preservation of peace, the success of the League loan issuance was meagre compared with the failure to preserve Europe from a second major war. This of course echoes the previous research of Juan Flores with Marc Flandreau on the role of financial market microstructure in keeping the world at peace during the nineteenth century. By that comparison, the League of Nations failed. A successful League, one that emulated the Rothschilds’ nineteenth-century role in peace-keeping, would have designed a scheme through which all states in need – France and Germany included – would have borrowed.
This leads one to wonder what function the League’s political brokers assigned to its programme of financial assistance. Like the IMF, the League was only able to design a scheme attractive to those countries that had no allies ready or strong enough to help them secure market access. Also, why did the UK and the US choose to channel funds through the League rather than directly? Clearly they needed the League as a delegated agent. Does that mean that the League was another form of money doctor, or that it acted as a coalition of powerful countries made up of those too weak to lend and those rich but without enforcement power? This interpretation is consistent with the authors’ view that ‘the League (…) provided arbitration functions in case of disputes.’
In sum, the paper opens new connections with the political science literature on important historical issues concerning the design of international organizations able to provide public goods such as peace, and not just to help the (strategic) failed states.
Esteves, R. and Tunçer, A. C. (2016a) ‘Feeling the blues. Moral hazard and debt dilution in eurobonds before 1914’, Journal of International Money and Finance, 65, pp. 46-68.
Esteves, R. and Tunçer, A. C. (2016b) ‘Eurobonds past and present: A comparative review on debt mutualization in Europe’, Review of Law & Economics (forthcoming).
Flandreau, M. and Flores, J. (2012) ‘The peaceful conspiracy: Bond markets and international relations during the Pax Britannica’, International Organization, 66, pp. 211-41.
Santaella, J. A. (1993) ‘Stabilization programs and external enforcement: experience from the 1920s’, IMF Staff Papers, 40, pp. 584-621.
Sargent, T. J. (1983) ‘The ends of four big inflations’, in R. E. Hall, ed., Inflation: Causes and Effects (Chicago, Ill.: University of Chicago Press), pp. 41-97.