By Francisco J. Beltrán Tapia (Norwegian University of Science and Technology)
This blog is part of our EHS Annual Conference 2020 Blog Series.
Gender discrimination – in the form of sex-selective abortion, female infanticide and the mortal neglect of young girls – constitutes a pervasive feature of many contemporary developing countries, especially in South and East Asia. Son preference stems from economic and cultural factors that have long influenced the perceived relative value of women in these regions and resulted in millions of ‘missing girls’.
But were there ‘missing girls’ in historical Europe? Although the conventional narrative holds that there is little evidence for this kind of behaviour, my research shows that the issue was much more significant than previously thought, especially (but not exclusively) in Southern and Eastern Europe.
It should be noted first that historical sex ratios cannot be compared directly to modern ones. The biological survival advantage of girls was more visible in the high-mortality environments that characterised pre-industrial Europe, where boys suffered higher mortality rates both in utero and during infancy and early childhood. Historical infant and child sex ratios were therefore relatively low, even in the presence of gender-discriminatory practices.
This is illustrated in Figure 1 below, which plots the relationship between child sex ratios (the number of boys per 100 girls) and infant mortality rates using information from European countries between 1750 and 2001. In particular, in societies where infant mortality rates were around 250 deaths (per 1,000 live births), a gender-neutral child sex ratio should have been slightly below parity (around 99.5 boys per 100 girls).
Figure 1: Infant mortality rates and child sex ratios in Europe, 1750-2001
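The arithmetic behind this benchmark can be sketched quickly. The figures below are assumptions for illustration (a typical sex ratio at birth of about 105 boys per 100 girls, and a higher cumulative male mortality in early childhood); they are not the estimates behind Figure 1:

```python
# Back-of-the-envelope illustration: all figures below are assumptions,
# not the estimates behind Figure 1.
boys_born, girls_born = 105.0, 100.0   # typical sex ratio at birth
male_mortality = 0.30    # assumed cumulative mortality to early childhood
female_mortality = 0.26  # lower, reflecting girls' survival advantage

boys_surviving = boys_born * (1 - male_mortality)
girls_surviving = girls_born * (1 - female_mortality)
ratio = 100 * boys_surviving / girls_surviving
print(f"gender-neutral child sex ratio: {ratio:.1f} boys per 100 girls")
```

Even though more boys are born, a plausible mortality gap pushes the expected child sex ratio to roughly 99 boys per 100 girls, which is why ratios well above parity in high-mortality settings point to discrimination rather than biology.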
Compared with this benchmark, infant and child sex ratios were abnormally high in some European regions (see Map 1 below), suggesting that some sort of gender discrimination was unduly increasing female mortality rates at those ages.
Interestingly, the observed differences in sex ratios are also visible throughout childhood. In fact, the evolution of sex ratios by age shows stark disparities across countries. Figure 2 shows how the number of boys per 100 girls changed as children grew older in a sample of countries, both in levels and in observed trends.
In Bulgaria, Greece and France, for example, sex ratios increased with age, providing evidence that gender discrimination continued to increase female mortality rates as girls grew older. Importantly, the unbalanced sex ratios observed in some regions are not due to random noise, female under-registration or sex-specific migratory flows.
Likewise, although geography, climate and population density contributed to shaping infant and child sex ratios due to their impact on the disease environment, these factors cannot explain away the patterns of gender discrimination reported here.
Map 1: Child sex ratios in Europe, c.1880
Figure 2: Sex ratios by age in a sample of countries, c.1880
This evidence indicates that discriminatory practices with lethal consequences for girls constituted a veiled feature of our European past. But the actual nature of discrimination remains unclear and surely varies by region.
Excess female mortality was thus not necessarily the result of outright ill treatment of young girls; it could also have stemmed from an unequal allocation of resources within the household, a disadvantage that probably accumulated as girls grew older.
In contexts where infant and child mortality rates were high, even slight discrimination in how young girls were fed or treated when ill, or in the amount of work entrusted to them, was likely to result in more girls dying from the combined effect of undernutrition and illness.
Although female infanticide or other extreme versions of mistreatment of young girls may not have been a systematic feature of historical Europe, this line of research would point to more passive, but pervasive, forms of gender discrimination that also resulted in a significant fraction of missing girls.
This blog is based on research funded by a bursary from the Economic History Society.
Seventeen years passed between Edison patenting his revolutionary incandescent light bulb in 1880, and Poul la Cour’s first test of a wind turbine for generating electricity. Yet it would be another hundred years before wind power would become an established industry in the 2000s. How can we explain the delay in harvesting the cheapest source of electricity generation?
In the early twentieth century wind power emerged to fill the gaps of nascent electricity grids. This technology was first adopted in rural areas. The incentive was purely economic: the need for decentralised access to electricity. In this early stage there were no concerns about the environmental implications of wind power.
The Jacobs Wind Electricity Company delivered 30,000 three-blade wind turbines in the US between 1927 and 1957. The basic mechanics of these units did not differ much from their modern counterparts. Once the standard electrical grid reached rural areas, however, the business case for wind power weakened. Soon it became more economic to buy electricity from centralised utilities, which benefited from significant economies of scale.
It was not until the late 1970s that wind power became a potential substitute for electricity generated by fossil fuels or hydropower. The academic literature agrees on two main triggers for this change: the oil crises of the 1970s and the politicisation of climate change. When the price of oil quadrupled in 1973, rising to nearly US$12 per barrel, industrialised countries’ dependency on foreign oil producers was exposed. The reaction was to find new domestic sources of energy. Considerable effort was devoted to nuclear power, but technologies like wind power were also revived.
In the late 1980s climate change became more politicised, and interest in wind energy as a technology that could mitigate environmental damage was renewed. California’s governor, Jerry Brown, was aligned with these ideals and in 1978, in a move ahead of its time, he provided extra tax incentives to renewable energy producers in his state. This soon created a ‘California Wind Rush’, which saw both local and European turbine manufacturers burst onto the market, with US$1 billion invested in the region of Altamont Pass between 1981 and 1986.
The California Wind Rush ended suddenly when government support was withdrawn. However, the European Union (EU) took up the challenge of maintaining the industry. In 2001, the EU introduced Directive 2001/77/EC for the promotion of renewable energy sources, which required Member States to set renewable energy targets. Further directives followed, triggering renewable energy programmes throughout the EU. Following the first directive in 2001, the installed capacity of wind power in the EU increased thirteen-fold, from 13GW to 169GW in 2017.
Whilst there is no doubt that the EU regulatory framework played a key role in the development of wind power, other factors were also at play. Nicolas Rochon, a green investment manager, published a memoir in 2020 in which he argued that clean energy development was also enabled by a change in the investment community. As interest rates decreased during the first two decades of the twenty-first century, investment managers revised downwards their expectations on future returns – which fostered more attention to clean energy assets offering lower profitability. Growing competition in the sector reduced the price of electricity obtained from renewable energy.
My research aims to understand the macroeconomic conditions that enabled wind power to develop to national scale: in particular, how wind power developers accessed capital, and how bankers and investors took a leap of faith in the technology. It will draw on oral history interviews with subjects such as Nicolas Rochon, who made financial decisions on wind power projects.
This blog is part of a series of New Researcher blogs.
Technological advancements within the British cotton industry have widely been acknowledged as the beginning of industrialisation in eighteenth- and nineteenth-century Britain. My research reveals that these advances were driven by a desire to match the quality of handmade cotton textiles from India.
I highlight how the introduction of Indian printed cottons into British markets created a frenzy of demand for these exotic goods. This led to immediate imitations by British textile manufacturers, keen to gain footholds in the domestic and world markets where Indian cottons were much desired.
The process of imitation soon revealed that British spinners could not spin cotton yarn fine enough to hand-make the cloth required for fine printing, and that British printers could not print cloth in the multitude of colourfast colours that Indian artisans had mastered over centuries.
These two key limitations in British textile manufacturing spurred demand-induced technological innovations to match the quality of Indian handmade printed cottons.
In order to test this, I chart the quality of English cotton textiles from 1740-1820 and compare them with Indian cottons of the same period. Thread per inch count is used as the measure of quality, and digital microscopy is deployed to establish their yarn composition to determine whether they are all-cotton textiles or mixed linen-cottons.
My findings show that the earliest British ‘cotton’ textiles were mixed linen-cottons rather than all-cottons. Technological evolution in the British cotton industry was a pursuit first of coarse but all-cotton cloth, and then of fine all-cotton cloth such as muslin.
The evidence shows that British cotton cloth quality improved by 60% between 1747 and 1782 during the decades of the famous inventions of James Hargreaves’ spinning jenny, Richard Arkwright’s waterframe and Samuel Crompton’s mule. It further improved by 24% between 1782 and 1816. Overall, cloth quality improved by a staggering 99% between 1747 and 1816.
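As a quick consistency check, the two sub-period figures compound to the overall improvement, assuming the reported percentages are multiplicative gains in the thread-per-inch quality measure:

```python
# Assumption: the reported gains are multiplicative improvements in
# the thread-per-inch quality measure.
gain_1747_1782 = 0.60
gain_1782_1816 = 0.24

overall = (1 + gain_1747_1782) * (1 + gain_1782_1816) - 1
print(f"implied overall improvement, 1747-1816: {overall:.1%}")
```

This yields roughly 98-99%, matching the reported overall figure once rounding in the sub-period estimates is allowed for.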
My research challenges our current understanding of industrialisation as a British and West European phenomenon, commonly explained using rationales such as high wages, availability of local energy sources or access to New World resources. Instead, it reveals that learning from material goods and knowledge brought into Britain and Europe from the East directly and substantially affected the foundations of the modern world as we know it.
The results also pose a more fundamental question: how does technological change take place? Based on my findings, learning from competitor products – especially imitation of novel goods using indigenous processes – may be identified as one crucial pathway for the creation of new ideas that shape technological change.
Is high inequality destiny? The established view is that societies naturally converge towards high inequality in the absence of catastrophes (world wars or revolutions) or the progressive taxation of the rich. Yet I show that rural Japan between 1700 and 1870 is an unexpected historical case in which stable equality was sustained without such aids. Most peasants owned land, the most valuable asset in an agricultural economy, and Japan remained a society of land-owning peasants. This contrasts with the highly unequal, landless-laborer societies of Western Europe in the same period. Why were the outcomes so different?
My research shows that the relative equality of pre-industrial Japan can partly be explained by the widespread use of adoption, which served as a means of securing a male heir. The reasoning becomes clear if we first consider the case of the Earls Cowper in 18th-century England, where adoption was not practiced. The first Earl Cowper was a modest landowner who married Mary Clavering in 1706. When Mary’s brother subsequently died, she became the heiress and the couple inherited the Clavering estate. Similar (mis)fortunes among their heirs led the Cowpers to become one of the greatest landed families of England. The Cowpers were not particularly lucky: around one quarter of families were heirless in this era of high child mortality. The outcome of this death lottery was inequality.
Had the Cowpers lived in Japan at the time, they would have remained modest landowners. An heirless household in Japan would adopt a son, so the Claverings would have adopted an heir and the family estate would have remained in the family. To keep the bloodline in the family, the adopted son would marry a daughter if one was available; if not, the next generation might be formed entirely by strangers, but they would continue the family line. Amassing a fortune in Japan was therefore unrelated to demographic luck.
Widespread adoptions were not a peculiarity of Japan and this mechanism can also explain why East Asian societies were landowning peasant societies. China also had high rates of adoption in addition to equal distributions of land according to surveys from the 1930s. Perhaps more surprisingly, adoptions were common in ancient Europe where the Greeks and Romans practiced adoptions to secure heirs. For example, Augustus, the first emperor of the Roman Empire, was adopted. Adoptions were a natural means of keeping wealth under the control of the family.
Europe changed because the church discouraged adoption from the early middle ages, and adoptions had become rarities by the 11th century. The church was motivated partly by theology, but also by the prospect that heirless wealth would be willed to the church. Its leaders almost certainly did not foresee that these policies would lead to greater wealth inequality in subsequent eras.
Figure 1. Land Distribution under Differing Adoption Regimes and Impartible Inheritance
My study shows by simulation that a large portion of the difference in wealth inequality outcomes between east and west can be explained by adoption (see Figure 1). Societies without adoption develop wealth distributions that are heavily skewed, with many landless households, unlike societies with adoption. Family institutions therefore played a key role in determining inequality, with huge implications for the way society was organized in these two regions.
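The ‘death lottery’ logic can be sketched with a toy simulation. This is a minimal illustration under crude assumptions (a fixed 25% chance of heirlessness and whole estates absorbed by one randomly chosen surviving line), not a reproduction of the study’s model:

```python
import random

def simulate(n_households=1000, generations=10, p_heirless=0.25,
             adoption=False, seed=42):
    """Toy model of land concentration under impartible inheritance.

    Every household starts with one plot. Each generation, a household
    is heirless with probability p_heirless. Without adoption, a
    heirless household's estate passes whole to another, randomly
    chosen family line (as the Cowpers absorbed the Clavering estate);
    with adoption, an adopted son keeps every estate in the family.
    """
    random.seed(seed)
    land = [1.0] * n_households
    for _ in range(generations):
        if adoption:
            continue  # adopted heirs keep estates in place
        for i in range(n_households):
            if land[i] > 0 and random.random() < p_heirless:
                j = random.randrange(n_households)
                while j == i:
                    j = random.randrange(n_households)
                land[j] += land[i]  # estate absorbed by another family
                land[i] = 0.0
    return sum(1 for x in land if x == 0) / n_households  # landless share

print("landless share without adoption:", simulate(adoption=False))
print("landless share with adoption:   ", simulate(adoption=True))
```

Without adoption the landless share climbs steadily as estates agglomerate into fewer lines; with adoption the initial equal distribution is preserved exactly.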
Interestingly, East Asian societies still have greater equality in wealth distributions today. Moreover, adoptions still amount to 10% of marriages in Japan, a remarkably large share. Adoption may have continued creating a relatively equal society in Japan up to the present day.
While it plays a key role in theories of the transition to modern economic growth, there are few estimates of the quantity-quality trade-off from before the demographic transition. Using a uniquely suitable new dataset of vital records, I use two instrumental variable (IV) strategies to estimate the trade-off in Quebec between 1620 and 1850. I find that one additional child who survived past age one decreased the literacy rate (proxied by signatures) of their older siblings by 5 percentage points.
The first strategy exploits the fact that twin births, conditional on mother’s age and parity, are a random increase in family size. While twins are often used to identify the trade-off in contemporary studies, sufficiently large and reliable historical datasets containing twins are rare. I compare two families, one whose mother gave birth to twins and one whose mother gave birth to a singleton, both at the same parity and age. I then look at the probability that each older non-twin sibling signed their marriage record.
For the second strategy, I posit that the aggregate, province-wide infant mortality rate during the year a younger child was born is exogenous to individual family characteristics. I compare two families, one whose mother gave birth during a year with a relatively high infant mortality rate and one during a year with a relatively low rate, both at the same parity and age. I then look at older siblings from both families who were born in the same year, controlling for potential time trends in literacy. As the two IV techniques yield very similar estimates, I argue there is strong evidence of a modest trade-off.
By using two instruments, I am able to rule out one major source of potential bias. In many settings, IV estimates of the trade-off may be biased if parents reallocate resources towards (reinforcement) or away from (compensation) children with higher birth endowments. I show that both twins and children born in high mortality years have, on average, lower literacy rates than their older siblings. As one shock increases and one shock decreases family size, but both result in older siblings having relatively higher human capital, reinforcement or compensation would bias the estimates in different directions. As the estimates are very similar, I conclude there is no evidence that my estimates suffer from this bias.
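The intuition of the twin instrument can be sketched in a simulation. The data-generating process below is entirely hypothetical (invented coefficients, not the paper’s data); it shows how a simple Wald/IV estimate recovers a true effect of -0.05 where naive OLS is biased by unobserved family quality:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical DGP: unobserved parental "quality" raises literacy and
# lowers family size, so naive OLS overstates the trade-off; a twin
# birth is a random +1 shock to family size (the instrument).
quality = rng.normal(size=n)
twin = rng.random(n) < 0.02
family_size = 3 - 0.5 * quality + twin + rng.normal(size=n)
literacy = 0.6 - 0.05 * family_size + 0.1 * quality + 0.2 * rng.normal(size=n)

# Naive OLS slope, contaminated by the omitted quality variable
naive = np.polyfit(family_size, literacy, 1)[0]

# Wald/IV estimate: outcome difference over first-stage difference
iv = (literacy[twin].mean() - literacy[~twin].mean()) / \
     (family_size[twin].mean() - family_size[~twin].mean())

print(f"naive OLS: {naive:.3f}   IV: {iv:.3f}   (true effect: -0.05)")
```

The IV estimate lands near the true -0.05, while the naive slope is pulled noticeably below the truth by the omitted variable, illustrating why a random shock to family size is needed to identify the trade-off.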
Is the estimated trade-off economically significant? I compare Quebec to a society with similar culture and institutions: pre-Revolutionary rural France. Between 1628 and 1788, a woman surviving to age 40 in Quebec would expect to have 1.7 additional children surviving past age one compared to her rural French peers. The average literacy rate (again proxied by signatures) in France was about 9.5 percentage points higher than in Quebec. Assuming my estimate of the trade-off is a linear and constant effect (instead of just a local average), reducing family sizes to French levels would have increased literacy by 8.6 percentage points in the next generation, thereby eliminating most of the gap.
However, pre-Revolutionary France was hardly a human capital-rich society. Proxying for the presence of the primary educators of the period (clergy and members of religious orders) with unmarried adults, I find plausible evidence that the trade-off was steeper in boroughs and decades with greater access to education. Altogether, I interpret my results as evidence that a trade-off existed which explains some of the differences across societies.
Henry, Louis (1978), ‘Fécondité des mariages dans le quart Sud-Est de la France de 1670 à 1829’, Population (French Edition), 33(4/5), 855–883.
IMPQ (2019), Infrastructure intégrée des microdonnées historiques de la population du Québec (XVIIe–XXe siècle) (IMPQ). [Dataset]. Centre interuniversitaire d’études québécoises (CIEQ).
Programme de recherche en démographie historique (PRDH) (2019), Registre de la population du Québec ancien (RPQA). [Dataset]. Département de démographie, Université de Montréal.
Projet BALSAC (2019), Le fichier BALSAC. [Dataset]. Université du Québec à Chicoutimi.
by Stefania Galli and Klas Rönnbäck (University of Gothenburg)
The full article on which this blog is based is published in the European Review of Economic History and is available open access.
Land distribution has been identified as a key contributor to economic inequality in pre-industrial societies. Historical evidence on the link between land distribution and inequality for the African continent is scant, unlike the large body of research available for Europe and the Americas. Our article examines inequality in land ownership in Sierra Leone during the early nineteenth century. Our contribution is unique because it studies land inequality at a particularly early stage for African economic history.
In 1787 the Sierra Leone colony was founded, the first British colony established after the American War of Independence. The colony had some peculiar features. Although it was populated by settlers, they were not of European origin, as in most settler colonies founded at the time. Rather, Sierra Leone came to be populated by people of African descent: a mix of former and liberated slaves from America, Europe and Africa. Furthermore, Sierra Leone had deeply egalitarian foundations, which rendered it more similar to a utopian society than to the other colonies founded on the African continent in subsequent decades. The founders intended an egalitarian land distribution for all settlers, aiming to create a black yeoman settler society.
In our study, we rely on a new dataset constructed from multiple different sources pertaining to the early years of Sierra Leone, which provide evidence on household land distribution for three benchmark years: 1792, 1800 and 1831. The first two benchmarks refer to a time when demographic pressure in the Colony was limited, while the last benchmark represents a period of rapidly increasing demographic pressure due to the inflow of ‘liberated slaves’ from captured slave ships landed at Freetown.
Our findings show that, in its early days, the colony was characterized by highly egalitarian land distribution, possibly the most equal distribution calculated to date. All households possessed some land, in a distribution determined to a large extent by household size. Not only were there no landless households in 1792 and 1800, but land was normally distributed around the mean. Based on these results, we conclude that the ideological foundations of the colony were manifested in egalitarian distribution of land.
Such ideological convictions were, however, hard to maintain in the long run due to mounting demographic pressure and limited government funding. Land inequality thus increased substantially by the last benchmark year (Figure 1). In 1831, land distribution was positively skewed, with a substantial proportion of households in the sample being landless or owning plots much smaller than the median, while a few households held very large plots. We argue that these findings are consistent with an institutional shift in redistributive policy, which enabled inequality to grow rapidly. In the early days, all settlers received a set amount of land. However, by 1831, land could be appropriated freely by the settlers, enabling households to appropriate land according to their ability, but also according to their wish to participate in agricultural production. Specifically, households in more fertile regions appear to have specialized in agricultural production, whereas households in regions unsuitable to agriculture increasingly came to focus upon other economic activities.
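A shift of this kind is conveniently summarised by the Gini coefficient of land holdings (0 for perfect equality, approaching 1 for maximal concentration). A minimal sketch with invented plot sizes, purely to illustrate the measure rather than the study’s data:

```python
def gini(values):
    """Gini coefficient from the standard ordered-shares formula."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Invented holdings: plots clustered around the mean (early benchmarks)
# versus landlessness alongside a few large estates (1831-style pattern).
equal_pattern = [4, 4, 5, 5, 5, 5, 6, 6]
skewed_pattern = [0, 0, 1, 1, 2, 2, 3, 40]
print(gini(equal_pattern), gini(skewed_pattern))
```

With the equal pattern the coefficient is small (about 0.08); the skewed pattern pushes it above 0.7, mirroring the direction of change between the early benchmarks and 1831.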
Our results have two implications for the debate on the origins of inequality. First, Sierra Leone shows how idealist motives had important consequences for inequality. This is of key importance for wider discussions on the extent to which politics generates tangible changes in society. Second, our results show how difficult it was to sustain such idealism when confronted with mounting material challenges.
by Victoria Baranov (University of Melbourne), Ralph De Haas (EBRD, CEPR, and Tilburg University) and Pauline Grosjean (University of New South Wales). More information on the authors below.
The content of this article was originally published on VOX and has been published here with the authors’ consent.
Why are men three times as likely as women to die from suicide? And why do many unemployed men refuse to apply for jobs that are typically done by women? This column argues that a better understanding of masculinity norms – the rules and standards that guide and constrain men’s behavior in society – can help answer important questions like these. We present evidence from Australia on how historical circumstances have instilled strong and persistent masculine identities that continue to influence outcomes related to male health; violence, suicide, and bullying; attitudes towards homosexuals; and occupational gender segregation.
What makes a ‘real’ man? According to traditional gender norms, men ought to be self-reliant, assertive, competitive, violent when needed, and in control of their emotions (Mahalik et al., 2003). Two current debates illustrate how such masculinity norms have profound economic and social impacts. First, in many countries, men die younger than women and are consistently less healthy. Masculinity norms, especially a penchant for violence and risk taking, are an important cultural driver of this gender health gap (WHO, 2013). A second debate links masculinity norms to occupational gender segregation. Technological progress and globalization have disproportionately affected male employment. Yet, many newly unemployed men refuse to fill jobs that do not match their self-perceived gender identity (Akerlof and Kranton, 2000).
The extent to which men are expected to conform to stereotypical masculinity norms nevertheless differs across societies. This raises the question: where do masculinity norms come from? The origins of gender norms about women have been the focus of a vibrant literature (Giuliano, 2018). By contrast, the origins of norms that guide and constrain the behavior of men have received no attention in the economics literature.
In recent research, we argue that strict masculinity norms can emerge in response to highly skewed sex ratios (the number of males relative to females) which intensify competition among men (Baranov, De Haas and Grosjean, 2020). When the sex ratio is more male biased, male-male competition for scarce females is more intense. This competition can intensify violence, bullying, and intimidating behavior (e.g. bravado), which, once entrenched in local culture, continue to manifest themselves in present-day outcomes long after sex ratios have normalized. We test this hypothesis using data from a unique natural experiment: the convict colonization of Australia.
Australia as a historical experiment
To establish a causal link from sex ratios to the manifestation of masculinity norms, we exploit the convict colonization of Australia. Between 1787 and 1868, Britain transported 132,308 convict men but only 24,960 convict women to Australia. Convicts were not confined to prisons but allocated across the colonies in a highly centralized manner. This created a variegated spatial pattern in sex ratios, and consequently in local male-to-male competition, in an otherwise homogeneous setting.
Convicts and ex-convicts represented the majority of the colonial population in Australia well into the mid-19th century. Voluntary migration was limited and mainly involved men migrating in response to male-biased economic opportunities available in agriculture and, after the discovery of gold in the 1850s, mining. Because of the predominance of male convicts and migrants, biased population sex ratios endured for over a century (Figure 1).
Identifying the lasting impact of skewed sex ratios
We regress present-day manifestations of masculinity norms, including violent behavior, bullying, and stereotypically male occupational choice, on historical sex ratios collected from the first reliable census in each Australian state (see also Grosjean and Khattar, 2019). An empirical challenge is that variation in historical sex ratios could reflect unobservable characteristics. To tackle this, we instrument the historical sex ratio with the sex ratio among convicts only. This instrument is highly relevant since most of the white Australian population initially consisted of convicts. Moreover, convicts were not free to move: a centralized assignment scheme determined their location as a function of labor needs, which we account for by controlling for initial economic specialization. Throughout the analysis, we also control for time-invariant geographic and historical characteristics as well as key present-day controls (sex ratio, population, and urbanization).
Masculinity norms among Australian men today
Using the above empirical strategy, we derive four sets of results:
1. Violence, suicide, and health
We first assess the impact of historically skewed sex ratios on present-day violence and health outcomes. Evidence suggests that men adhering to traditional masculinity norms attach a stronger stigma to mental health problems and tend to avoid health services. As a proxy for the avoidance of preventative health care we use local suicide and prostate cancer rates. Prostate cancer is often curable if treated early, but avoidance of diagnosis is a public health concern. The endorsement of strict masculinity norms is also associated with aggression, excessive drinking, and smoking.
Our estimates show that today, the rates of assault and sexual assault are higher in parts of Australia that were more male biased in the past. A one unit increase in the historical sex ratio (defined as the ratio of the number of men over the number of women) is associated with an 11 percent increase in the rate of assault and a 16 percent increase in sexual assaults. We also find strong evidence of elevated rates of male suicide, prostate cancer, and lung disease in these areas. For male suicide – the leading cause of death for Australian men under 45 – a one unit increase in the historical sex ratio is associated with a staggering 26 percent increase.
2. Occupational gender segregation
A second manifestation of male identity is occupational choice. Our results paint a striking picture. A one unit increase in the sex ratio is associated with a nearly 1 percentage point shift from the share of men employed in neutral (e.g. real estate, retail) or stereotypically female occupations (e.g. teachers, receptionists) to stereotypically male occupations (e.g. carpenters, metal workers).
3. Support for same-sex marriage
We capture the political expression of masculine identity through opposition to same-sex marriage, measured using voting records from the nationwide referendum on same-sex marriage in 2017. Our results show that the share of votes in favor of marriage equality is substantially lower in areas where sex ratios were more male biased in the past. A one unit increase in the historical sex ratio is associated with a nearly 3 percentage point decrease in support for same-sex marriage. This is slightly over 6 percent of the mean.
4. Bullying in schools
Lastly, we find that boys, but not girls, are more likely to be bullied at school in areas that were more male biased in the past. The magnitude of the results is considerable and in line with the magnitude of the results for assaults (measured in adults). A one unit increase in the historical sex ratio is associated with a higher likelihood of parents (teachers) reporting bullying of boys by 13.7 (5.2) percentage points. This suggests that masculinity norms are perpetuated through horizontal transmission: peer pressure, starting at a young age in the playground.
We find that historically male-biased sex ratios forged a culture of male violence, help avoidance, and self-harm that persists to the present day in Australia. While our experimental setting is unique, we believe that our findings can inform the debate about the long-term socioeconomic consequences and risks of skewed sex ratios in many developing countries such as China, India, and parts of the Middle East. In these settings, sex-selective abortion and mortality as well as the cultural relegation and seclusion of women have created societies with highly skewed sex ratios. Our results suggest that the masculinity norms that develop as a result may not only be detrimental to (future generations of) men themselves but can also have important repercussions for other groups in society, in particular women and sexual minorities.
Our findings also align with an extensive psychological and medical literature that connects traditional masculinity norms to an unwillingness among men to seek timely medical help or to engage in preventive health care and protective health measures (e.g. Himmelstein and Sanchez (2016) and Salgado et al. (2019)). This suggests that voluntary observance of health measures, such as social distancing during the COVID-19 pandemic, may be considerably lower among men who adhere to traditional masculinity norms.
Akerlof, G.A. and R.E. Kranton (2000), Economics and Identity, Quarterly Journal of Economics, 115(3), 715–753.
Giuliano, P. (2018), Gender: A Historical Perspective, in S. Averett, L. Argys and S. Hoffman (eds.), The Oxford Handbook of Women and the Economy, Oxford University Press, New York.
Grosjean, P. and R. Khattar (2019), It’s Raining Men! Hallelujah? The Long-Run Consequences of Male-Biased Sex Ratios, Review of Economic Studies, 86(2), 723–754.
Himmelstein, M.S. and D.T. Sanchez (2016), Masculinity Impediments: Internalized Masculinity Contributes to Healthcare Avoidance in Men and Women, Journal of Health Psychology, 21, 1283–1292.
Mahalik, J.R., B.D. Locke, L.H. Ludlow, M.A. Diemer, R.P.J. Scott, M. Gottfried, and G. Freitas (2003), Development of the Conformity to Masculine Norms Inventory, Psychology of Men & Masculinity, 4(1), 3–25.
Salgado, D.M., A.L. Knowlton, and B.L. Johnson (2019), Men’s Health-Risk and Protective Behaviors: The Effects of Masculinity and Masculine Norms, Psychology of Men & Masculinities, 20(2), 266–275.
WHO (2013), Review of Social Determinants and the Health Divide in the WHO European Region, World Health Organization, Regional Office for Europe, Copenhagen.
Ralph De Haas, a Dutch national, is the Director of Research at the European Bank for Reconstruction and Development (EBRD) in London. He is also a part-time Associate Professor of Finance at Tilburg University, a CEPR Research Fellow, a Fellow at the European Banking Center, a Visiting Senior Fellow at the Institute of Global Affairs at the London School of Economics and Political Science, and a Research Associate at the ZEW–Leibniz Centre for European Economic Research. Ralph earned a PhD in economics from Utrecht University and is the recipient of the 2014 Willem F. Duisenberg Fellowship Prize. He has published in the Journal of Financial Economics; the Review of Financial Studies; the Review of Finance; the Journal of International Economics; the American Economic Journal: Applied Economics; the Journal of the European Economic Association; and various other peer-reviewed journals. Ralph’s research interests include global banking, development finance and financial intermediation more broadly. He is currently working on randomized controlled trials related to financial inclusion in Morocco and Turkey.
Pauline Grosjean is a Professor in the School of Economics at UNSW. Previously at the University of San Francisco and the University of California, Berkeley, she has also worked as an Economist at the European Bank for Reconstruction and Development. She completed her PhD in economics at the Toulouse School of Economics in 2006 after graduating from the Ecole Normale Supérieure. Her research studies the historical and dynamic context of economic development. In particular, she focuses on how culture and institutions interact and shape long-term economic development and individual behavior. She has published research on the historical roots of a wide range of factors that are crucial for economic development, including cooperation and violence, trust, gender norms, support for democracy and for market reforms, immigration, preferences for education, and conflict.
Victoria Baranov’s research explores how health, psychological factors, and norms interact with poverty and economic development. Her recent work has focused on maternal depression and its implications for the intergenerational transmission of disadvantage. Her work has been published in the American Economic Review, American Economic Journal: Applied Economics, the Journal of Health Economics and other peer-reviewed journals across multiple disciplines. Victoria received her PhD in Economics from the University of Chicago after graduating from Barnard College. She is currently a Senior Lecturer in the Economics Department at the University of Melbourne and has affiliations with the Centre for Market Design, the Life Course Centre, and the Institute of Labor Economics (IZA).
This piece is the result of a collaboration between the Economic History Review, the Journal of Economic History, Explorations in Economic History and the European Review of Economic History. More details and special thanks below. Part A is available at this link.
As the world grapples with a pandemic, informed views based on facts and evidence have become all the more important. Economic history is a discipline uniquely well suited to provide insights into the costs and consequences of rare events, such as pandemics, as it combines the tools of an economist with the long perspective and attention to context of historians. The editors of the main journals in economic history have therefore gathered a selection of recently published articles on epidemics, disease and public health, generously made available to the public free of charge by their publishers, so that we may continue to learn from the decisions of individuals and policymakers who confronted earlier episodes of widespread disease and pandemics.
Generations of economic historians have studied disease and its impact on societies across history. However, as the discipline has continued to evolve with improvements in both data and methods, researchers have uncovered new evidence about episodes from the distant past, such as the Black Death, as well as more recent global pandemics, such as the Spanish Influenza of 1918. In this second instalment of The Long View on Epidemics, Disease and Public Health: Research from Economic History, the editors present a review of two major themes that have featured in the analysis of disease. The first includes articles that discuss the economic impacts of historical epidemics and the official responses they prompted. The second turns to the more optimistic story of the impact of public health regulation and interventions, and the benefits thereby generated.
My Tawney lecture reassessed the relationship between slavery and industrial capitalism in both Britain and the United States. The thesis expounded by Eric Williams held that slavery and the slave trade were vital for the expansion of British industry and commerce during the 18th century but were no longer needed by the 19th. My lecture confirmed both parts of the Williams thesis: the 18th-century Atlantic economy was dominated by sugar, which required slave labour; but after 1815, British manufactured goods found diverse new international markets that did not need captive colonial buyers, naval protection, or slavery. Long-distance trade became safer and cheaper as freight rates fell and international financial infrastructure developed. Figure 1 (below) shows that the slave economies absorbed the majority of British cotton goods during the 18th century but lost their centrality during the 19th, supplanted by a diverse array of global destinations.
I argued that this formulation applies with equal force to the upstart economy across the Atlantic. The mainland North American colonies were intimately connected to the larger slave-based imperial economy. The northern colonies, holding relatively few slaves themselves, were nonetheless beneficiaries of the trading regime, protected against outsiders by British naval superiority. Between 1768 and 1772, the British West Indies were the largest single market for commodity exports from New England and the Middle Atlantic, dominating sales of wood products, fish and meat, and accounting for significant shares of whale products, grains and grain products. The prominence of slave-based commerce explains the arresting connections reported by C. S. Wilder, associating early American universities with slavery. Thus, part one of the Williams thesis also holds for 18th-century colonial America.
Insurgent scholars known as New Historians of Capitalism argue that slavery, specifically slave-grown cotton, was critical for the rise of the U.S. economy in the 19th century. In contrast, I argued that although industrial capitalism needed cheap cotton, cheap cotton did not need slavery. Unlike sugar, cotton required no large investments of fixed capital and could be cultivated efficiently at any scale, in locations that would have been settled by free farmers in the absence of slavery. Early mainland cotton growers deployed slave labour not because of its productivity or aptness for the new crop, but because they were already slave owners, searching for profitable alternatives to tobacco, indigo, and other declining crops. Slavery was, in effect, a ‘pre-existing condition’ for the 19th-century American South.
To be sure, U.S. cotton did indeed rise ‘on the backs of slaves’, and no cliometric counterfactual can gainsay this brute fact of history. But it is doubtful that this brutal system served the long-run interests of textile producers in Lancashire and New England, as many of them recognized at the time. As argued here, the slave South underperformed as a world cotton supplier, for three distinct though related reasons: the region closed the African slave trade in 1807 yet failed to recruit free migrants, making labour supply inelastic; slave owners neglected transportation infrastructure, leaving large sections of potential cotton land on the margins of commercial agriculture; and because of the fixed-cost character of slavery, even large plantations aimed at self-sufficiency in foodstuffs, limiting the region’s overall degree of market specialization. The best evidence that slavery was not essential for cotton supply is what happened when slavery ended. After war and emancipation, merchants and railroads flooded into the southeast, enticing previously isolated farm areas into the cotton economy. Production in plantation areas gradually recovered, but the biggest source of new cotton came from white farmers in the Piedmont. When the dust settled in the 1880s, India, Egypt, and slave-using Brazil had retreated from world markets, and the price of cotton in Liverpool returned to its antebellum level. See Figure 2.
The New Historians of Capitalism also exaggerate the importance of the slave South for accelerated U.S. growth. The Cotton Staple Growth hypothesis advanced by Douglass North was decisively refuted by economic historians a generation ago. The South was not a major market for western foodstuffs and consumed only a small and declining share of northern manufactures. International and interregional financial connections were undeniably important, but thriving capital markets in northeastern cities clearly predated the rise of cotton, and connections to slavery were remote at best. Investments in western canals and railroads were in fact larger, accentuating the expansion of commerce along East-West lines.
It would be excessive to claim that Anglo-American industrial and financial interests recognized the growing dysfunction of the slave South, and in response fostered or encouraged the antislavery campaigns that culminated in the Civil War. A more appropriate conclusion is that because of profound changes in technologies and global economic structures, slavery — though still highly profitable to its practitioners — no longer seemed essential for the capitalist economies of the 19th-century world.
by Anna Missiaia and Kersten Enflo (Lund University)
This research is due to be published in the Economic History Review and is currently available on Early View.
For a long time, scholars have thought of regional inequality merely as a by-product of modern economic growth: following a Kuznets-style interpretation, the front-running regions increase their income levels and regional inequality during industrialization, and it is only when the other regions catch up that overall regional inequality decreases, completing the inverted-U-shaped pattern. But early empirical research on this theme was largely focused on the 20th century, ignoring the industrial take-off of many countries (Williamson, 1965). More recent empirical studies have pushed the temporal boundary back to the mid-19th century, finding that inequality in regional GDP was already high at the outset of modern industrialization (see for instance Rosés et al., 2010 on Spain and Felice, 2018 on Italy).
The main constraint on extending the estimates well into the pre-industrial period is the availability of suitable regional sources. The exceptional quality of Swedish sources allowed us, for the first time, to estimate a dataset of regional GDP for a European economy going back to the 16th century (Enflo and Missiaia, 2018). The estimates for 1571 are largely based on a one-off tax proportional to yearly production: the Swedish Crown imposed this tax on all Swedish citizens in order to pay a ransom for the strategic Älvsborg castle, which had just been conquered by Denmark. For the period 1750-1850, the estimates rely on standard population censuses. By connecting the new series to the existing ones from 1860 onwards by Enflo et al. (2014), we obtain the longest regional GDP series available for any country.
We find that inequality increased dramatically between 1571 and 1750 and remained high until the mid-19th century. Thereafter, it declined during the modern industrialization of the country (Figure 1). Our results challenge the traditional view that regional divergence can only originate in an industrial take-off.
Figure 1. Coefficient of variation of GDP per capita across Swedish counties, 1571-2010.
Figure 2 shows the relative disparities in four benchmark years. While the country appeared relatively equal in 1571, between 1750 and 1850 both the mining districts of central and northern Sweden and the port cities of Stockholm and Gothenburg pulled ahead.
Figure 2. The relative evolution of GDP per capita, 1571-1850 (Sweden=100).
The second part of the paper studies the drivers of pre-industrial regional inequality. Decomposing a Theil index of GDP per worker, we show that regional inequality was driven by structural change: regions diverged because they specialized in different sectors. A handful of regions specialized in either early manufacturing or mining, both with much higher productivity per worker than agriculture.
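For readers unfamiliar with the method, the between-sector/within-sector split behind such a decomposition can be sketched in a few lines of code. The snippet below implements the standard income-share-weighted Theil T decomposition; the regions and numbers are purely hypothetical and are not our Swedish estimates.

```python
# Illustrative sketch of a Theil T decomposition of GDP per worker,
# with regions grouped by dominant sector. All data are hypothetical.
import math

def theil_decomposition(groups):
    """groups: dict sector -> list of (workers, gdp) tuples, one per region.
    Returns (total, between, within) Theil T components."""
    total_workers = sum(w for regs in groups.values() for w, _ in regs)
    total_gdp = sum(g for regs in groups.values() for _, g in regs)
    y_bar = total_gdp / total_workers            # overall GDP per worker

    between, within = 0.0, 0.0
    for regs in groups.values():
        w_g = sum(w for w, _ in regs)            # sector workers
        g_g = sum(g for _, g in regs)            # sector GDP
        s_g = g_g / total_gdp                    # sector's income share
        y_g = g_g / w_g                          # sector GDP per worker
        between += s_g * math.log(y_g / y_bar)   # divergence across sectors
        # within-sector Theil, weighted by the sector's income share
        t_g = sum((g / g_g) * math.log((g / w) / y_g) for w, g in regs)
        within += s_g * t_g
    return between + within, between, within

# Hypothetical regions: (workers, GDP), grouped by dominant sector
data = {
    "agriculture":   [(100, 100), (120, 110), (90, 95)],
    "mining":        [(20, 60), (15, 50)],
    "manufacturing": [(30, 80), (25, 70)],
}
total, between, within = theil_decomposition(data)
```

With data like these, where mining and manufacturing have far higher GDP per worker than agriculture, the between-sector term dominates — the code analogue of inequality driven by structural change rather than by differences among similarly specialized regions.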
To explain these diverging trajectories, we use a theoretical framework introduced by Strulik and Weisdorf (2008) in the context of the British Industrial Revolution: in regions with a higher share of GDP in agriculture, technological advances raise productivity but also trigger a proportional increase in population, impeding growth in GDP per capita, as in a classic Malthusian framework. Regions with a higher share of GDP in industry, by contrast, experienced limited population growth because of the rising relative price of children, and hence reached a higher level of GDP per capita. Regional inequality in this framework arises from the different strength of the Malthusian mechanism in the two sectors.
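A toy simulation conveys the intuition (this is a deliberate simplification with hypothetical parameters, not the Strulik-Weisdorf model itself): if population growth absorbs productivity gains in the agrarian region but only partially in the industrial one, GDP per capita mechanically diverges.

```python
# Minimal sketch of the Malthusian divergence mechanism.
# pop_elasticity is the fraction of productivity growth absorbed by
# population growth; all parameter values are hypothetical.

def simulate(pop_elasticity, periods=10, tfp_growth=0.02):
    """Return GDP per capita after `periods` rounds of growth (index, start=1)."""
    gdp_pc = 1.0
    for _ in range(periods):
        # output grows with productivity; population growth dilutes it
        gdp_pc *= (1 + tfp_growth) / (1 + pop_elasticity * tfp_growth)
    return gdp_pc

agrarian = simulate(pop_elasticity=1.0)    # fully Malthusian: gains absorbed
industrial = simulate(pop_elasticity=0.2)  # costly children limit fertility
```

In the fully Malthusian region GDP per capita stays flat by construction, while the industrial region compounds most of its productivity gains, so a gap opens even under identical rates of technological progress.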
Our work speaks to a growing literature on the origin of regional divergence and represents the first effort to perform this type of analysis before the 19th century.