This blog is part of a series of New Researcher blogs.
Technological advancements within the British cotton industry have widely been acknowledged as the beginning of industrialisation in eighteenth- and nineteenth-century Britain. My research reveals that these advances were driven by a desire to match the quality of handmade cotton textiles from India.
I highlight how the introduction of Indian printed cottons into British markets created a frenzy of demand for these exotic goods. This led to immediate imitations by British textile manufacturers, keen to gain footholds in the domestic and world markets where Indian cottons were much desired.
The process of imitation soon revealed two shortcomings: British spinners could not spin cotton yarn fine enough to hand-make the cloth that fine printing required, and British printers could not print cloth in the multitude of colourfast colours that Indian artisans had mastered over centuries.
These two key limitations in British textile manufacturing spurred demand-induced technological innovations to match the quality of Indian handmade printed cottons.
To test this, I chart the quality of English cotton textiles from 1740 to 1820 and compare them with Indian cottons of the same period. Thread count per inch is used as the measure of quality, and digital microscopy is deployed to establish yarn composition and determine whether the textiles are all-cotton or mixed linen-cotton.
My findings show that the earliest British ‘cotton’ textiles were mixed linen-cottons rather than all-cottons. Technological evolution in the British cotton industry was a pursuit first of coarse all-cotton cloth, and then of fine all-cotton cloth such as muslin.
The evidence shows that British cotton cloth quality improved by 60% between 1747 and 1782 during the decades of the famous inventions of James Hargreaves’ spinning jenny, Richard Arkwright’s waterframe and Samuel Crompton’s mule. It further improved by 24% between 1782 and 1816. Overall, cloth quality improved by a staggering 99% between 1747 and 1816.
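The stage figures above compound multiplicatively rather than adding up, which a two-line check makes explicit (the 60% and 24% are rounded, so their product only approximately reproduces the reported overall figure):

```python
# Quality improvements compound multiplicatively, not additively.
stage1 = 0.60   # improvement, 1747-1782
stage2 = 0.24   # improvement, 1782-1816
overall = (1 + stage1) * (1 + stage2) - 1
print(f"{overall:.1%}")
```

The rounded stage figures give roughly 98.4%; the unrounded underlying data evidently yield the 99% reported above.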
My research challenges our current understanding of industrialisation as a British and West European phenomenon, commonly explained using rationales such as high wages, availability of local energy sources or access to New World resources. Instead, it reveals that learning from material goods and knowledge brought into Britain and Europe from the East directly and substantially affected the foundations of the modern world as we know it.
The results also pose a more fundamental question: how does technological change take place? Based on my findings, learning from competitor products – especially imitation of novel goods using indigenous processes – may be identified as one crucial pathway for the creation of new ideas that shape technological change.
Is high inequality destiny? The established view is that societies naturally converge towards high inequality in the absence of catastrophes (world wars or revolutions) or the progressive taxation of the rich. Yet, I show that rural Japan, 1700-1870, is an unexpected historical case in which a stable equality was sustained without such aids. Most peasants owned land, the most valuable asset in an agricultural economy, and Japan remained a society of land-owning peasants. This contrasts with the landless laborer societies of contemporary Western Europe which were highly unequal. Why were the outcomes so different?
My research shows that the relative equality of pre-industrial Japan can partly be explained by the widespread use of adoption as a means of securing a male heir. The reasoning becomes clear if we first consider the case of the Earls Cowper in 18th century England, where adoption was not practiced. The first Earl Cowper was a modest landowner who married Mary Clavering in 1706. When Mary’s brother subsequently died, she became the heiress and the couple inherited the Clavering estate. Similar (mis)fortunes for their heirs led the Cowpers to become one of the greatest landed families of England. The Cowpers were not particularly lucky: one quarter of families were heirless during this era of high child mortality. The outcome of this death lottery was inequality.
Had the Cowpers lived in contemporary Japan, they would have remained modest landowners. An heirless household in Japan would adopt a son; the Claverings would thus have adopted an heir, and the family estate would have remained in the family. To keep the blood in the family, the adopted son would marry a daughter of the household if one was available; if not, the next generation might be formed by total strangers, but they would continue the family line. Amassing a fortune in Japan was therefore unrelated to demographic luck.
Widespread adoptions were not a peculiarity of Japan and this mechanism can also explain why East Asian societies were landowning peasant societies. China also had high rates of adoption in addition to equal distributions of land according to surveys from the 1930s. Perhaps more surprisingly, adoptions were common in ancient Europe where the Greeks and Romans practiced adoptions to secure heirs. For example, Augustus, the first emperor of the Roman Empire, was adopted. Adoptions were a natural means of keeping wealth under the control of the family.
Europe changed because the church discouraged adoption from the early Middle Ages, and adoptions had become rarities by the 11th century. The church was motivated partly by theology, but also by the possibility that heirless wealth would be willed to the church itself. It almost certainly did not foresee that its policies would lead to greater wealth inequality in subsequent eras.
Figure 1. Land Distribution under Differing Adoption Regimes and Impartible Inheritance
My study shows by simulation that a large portion of the difference in wealth inequality outcomes between east and west can be explained by adoption (see figure 1). Societies without adoption have wealth distributions that are heavily skewed, with many landless households, unlike those with adoption. Family institutions thus played a key role in determining inequality, with huge implications for the way society was organized in these two regions.
Interestingly, East Asian societies still have greater equality in wealth distributions today. Moreover, adoptions still amount to 10% of marriages in Japan which is a remarkably large share. Adoption may have continued creating a relatively equal society in Japan up to today.
While the quantity-quality trade-off plays a key role in theories of the transition to modern economic growth, there are few estimates of it from before the demographic transition. Using a uniquely suitable new dataset of vital records, I use two instrumental variable (IV) strategies to estimate the trade-off in Quebec between 1620 and 1850. I find that one additional child who survived past age one decreased the literacy rate (proxied by signatures) of their older siblings by 5 percentage points.
The first strategy exploits the fact that twin births, conditional on mother’s age and parity, are a random increase in family size. While twins are often used to identify the trade-off in contemporary studies, sufficiently large and reliable historical datasets containing twins are rare. I compare two families, one whose mother gave birth to twins and one whose mother gave birth to a singleton, both at the same parity and age. I then look at the probability that each older non-twin sibling signed their marriage record.
For the second strategy, I posit that the aggregate, province-wide infant mortality rate during the year a younger child was born is exogenous to individual family characteristics. I compare two families, one whose mother gave birth during a year with a relatively high infant mortality rate and one whose mother gave birth during a year with a lower rate, both at the same parity and age. I then look at older siblings from both families who were born in the same year, controlling for potential time trends in literacy. As the two different IV techniques result in very similar estimates, I argue there is strong evidence of a modest trade-off.
By using two instruments, I am able to rule out one major source of potential bias. In many settings, IV estimates of the trade-off may be biased if parents reallocate resources towards (reinforcement) or away from (compensation) children with higher birth endowments. I show that both twins and children born in high mortality years have, on average, lower literacy rates than their older siblings. As one shock increases and one shock decreases family size, but both result in older siblings having relatively higher human capital, reinforcement or compensation would bias the estimates in different directions. As the estimates are very similar, I conclude there is no evidence that my estimates suffer from this bias.
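With a binary instrument such as twinning, the IV logic reduces to a Wald estimator: the jump in literacy at a twin birth divided by the jump in family size. The sketch below runs it on purely synthetic data (all numbers are invented for illustration; the real estimation conditions on mother's age and parity):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
u = rng.normal(size=n)              # unobserved family quality (confounder)
z = rng.random(n) < 0.03            # instrument: twin birth
x = 3 + z.astype(float) + (u > 1)   # surviving children, confounded by u
# Assumed true effect: each extra child lowers an older sibling's
# probability of signing by 5 percentage points.
y = rng.random(n) < np.clip(0.5 - 0.05 * x + 0.1 * u, 0, 1)

# Wald/IV estimate: effect of one extra surviving child on literacy
iv = (y[z].mean() - y[~z].mean()) / (x[z].mean() - x[~z].mean())
print(round(iv, 3))                 # close to the true -0.05
```

Because twin births are (conditionally) random, the confounder u has the same distribution in both groups and drops out of the ratio, which a naive comparison of large and small families would not achieve.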
Is the estimated trade-off economically significant? I compare Quebec to a society with similar culture and institutions: pre-Revolutionary rural France. Between 1628 and 1788, a woman surviving to age 40 in Quebec would expect to have 1.7 additional children surviving past age one compared to her rural French peers. The average literacy rate (again proxied by signatures) in France was about 9.5 percentage points higher than in Quebec. Assuming my estimate of the trade-off is a linear and constant effect (instead of just a local average), reducing family sizes to French levels would have increased literacy by 8.6 percentage points in the next generation, thereby eliminating most of the gap.
However, pre-Revolutionary France was hardly a human capital-rich society. Proxying for the presence of the primary educators of the period (clergy and members of religious orders) with unmarried adults, I find plausible evidence that the trade-off was steeper in boroughs and decades with greater access to education. Altogether, I interpret my results as evidence that a trade-off existed which explains some of the differences across societies.
Henry, Louis, 1978. “Fécondité des mariages dans le quart Sud-Est de la France de 1670 à 1829,” Population (French Edition), 33 (4/5), 855–883.
IMPQ. 2019. Infrastructure intégrée des microdonnées historiques de la population du Québec (XVIIe – XXe siècle) (IMPQ). [Dataset]. Centre interuniversitaire d’études québécoises (CIEQ).
Programme de recherche en démographie historique (PRDH). 2019. Registre de la population du Québec ancien (RPQA). [Dataset]. Département de Démographie, Université de Montréal.
Projet BALSAC. 2019. Le fichier BALSAC. [Dataset]. L’Université du Québec à Chicoutimi.
by Stefania Galli and Klas Rönnbäck (University of Gothenburg)
The full article from this blog is published in the European Review of Economic History and is available open access at this link
Land distribution has been identified as a key contributor to economic inequality in pre-industrial societies. Historical evidence on the link between land distribution and inequality for the African continent is scant, unlike the large body of research available for Europe and the Americas. Our article examines inequality in land ownership in Sierra Leone during the early nineteenth century. Our contribution is unique because it studies land inequality at a particularly early stage for African economic history.
In 1787 the Sierra Leone colony was born, the first British colony to be founded after the American War of Independence. The colony had some peculiar features. Although it was populated by settlers, they were not of European origin, as in most settler colonies founded at the time. Rather, Sierra Leone came to be populated by people of African descent — a mix of former and liberated slaves from America, Europe and Africa. Furthermore, Sierra Leone had deeply egalitarian foundations, which rendered it more similar to a utopian society than to the other colonies founded on the African continent in subsequent decades. The founders of the colony intended egalitarian land distribution for all settlers, aiming to create a black yeoman settler society.
In our study, we rely on a new dataset constructed from multiple different sources pertaining to the early years of Sierra Leone, which provide evidence on household land distribution for three benchmark years: 1792, 1800 and 1831. The first two benchmarks refer to a time when demographic pressure in the Colony was limited, while the last benchmark represents a period of rapidly increasing demographic pressure due to the inflow of ‘liberated slaves’ from captured slave ships landed at Freetown.
Our findings show that, in its early days, the colony was characterized by highly egalitarian land distribution, possibly the most equal distribution calculated to date. All households possessed some land, in a distribution determined to a large extent by household size. Not only were there no landless households in 1792 and 1800, but land was normally distributed around the mean. Based on these results, we conclude that the ideological foundations of the colony were manifested in egalitarian distribution of land.
Such ideological convictions were, however, hard to maintain in the long run due to mounting demographic pressure and limited government funding. Land inequality thus increased substantially by the last benchmark year (Figure 1). In 1831, land distribution was positively skewed, with a substantial proportion of households in the sample being landless or owning plots much smaller than the median, while a few households held very large plots. We argue that these findings are consistent with an institutional shift in redistributive policy, which enabled inequality to grow rapidly. In the early days, all settlers received a set amount of land. However, by 1831, land could be appropriated freely by the settlers, enabling households to appropriate land according to their ability, but also according to their wish to participate in agricultural production. Specifically, households in more fertile regions appear to have specialized in agricultural production, whereas households in regions unsuitable to agriculture increasingly came to focus upon other economic activities.
Our results have two implications for the debate on the origins of inequality. First, Sierra Leone shows how idealist motives had important consequences for inequality. This is of key importance for wider discussions on the extent to which politics generates tangible changes in society. Second, our results show how difficult it was to sustain such ideals when confronted by mounting material challenges.
by Victoria Baranov (University of Melbourne), Ralph De Haas (EBRD, CEPR, and Tilburg University) and Pauline Grosjean (University of New South Wales). More information on the authors below.
The content of this article was originally published on VOX and has been published here with the authors’ consent.
Why are men three times as likely as women to die from suicide? And why do many unemployed men refuse to apply for jobs that are typically done by women? This column argues that a better understanding of masculinity norms – the rules and standards that guide and constrain men’s behavior in society – can help answer important questions like these. We present evidence from Australia on how historical circumstances have instilled strong and persistent masculine identities that continue to influence outcomes related to male health; violence, suicide, and bullying; attitudes towards homosexuals; and occupational gender segregation.
What makes a ‘real’ man? According to traditional gender norms, men ought to be self-reliant, assertive, competitive, violent when needed, and in control of their emotions (Mahalik et al., 2003). Two current debates illustrate how such masculinity norms have profound economic and social impacts. First, in many countries, men die younger than women and are consistently less healthy. Masculinity norms, especially a penchant for violence and risk taking, are an important cultural driver of this gender health gap (WHO, 2013). A second debate links masculinity norms to occupational gender segregation. Technological progress and globalization have disproportionately affected male employment. Yet, many newly unemployed men refuse to fill jobs that do not match their self-perceived gender identity (Akerlof and Kranton, 2000).
The extent to which men are expected to conform to stereotypical masculinity norms nevertheless differs across societies. This raises the question: where do masculinity norms come from? The origins of gender norms about women have been the focus of a vibrant literature (Giuliano, 2018). By contrast, the origins of norms that guide and constrain the behavior of men have received no attention in the economics literature.
In recent research, we argue that strict masculinity norms can emerge in response to highly skewed sex ratios (the number of males relative to females) which intensify competition among men (Baranov, De Haas and Grosjean, 2020). When the sex ratio is more male biased, male-male competition for scarce females is more intense. This competition can intensify violence, bullying, and intimidating behavior (e.g. bravado), which, once entrenched in local culture, continue to manifest themselves in present-day outcomes long after sex ratios have normalized. We test this hypothesis using data from a unique natural experiment: the convict colonization of Australia.
Australia as a historical experiment
To establish a causal link from sex ratios to the manifestation of masculinity norms, we exploit the convict colonization of Australia. Between 1787 and 1868, Britain transported 132,308 convict men but only 24,960 convict women to Australia. Convicts were not confined to prisons but allocated across the colonies in a highly centralized manner. This created a variegated spatial pattern in sex ratios, and consequently in local male-to-male competition, in an otherwise homogeneous setting.
Convicts and ex-convicts represented the majority of the colonial population in Australia well into the mid-19th century. Voluntary migration was limited and mainly involved men migrating in response to male-biased economic opportunities available in agriculture and, after the discovery of gold in the 1850s, mining. Because of the predominance of male convicts and migrants, biased population sex ratios endured for over a century (Figure 1).
Identifying the lasting impact of skewed sex ratios
We regress present-day manifestations of masculinity norms, including violent behavior, bullying, and stereotypically male occupational choice on historical sex ratios, collected from the first reliable census in each Australian state (see also Grosjean and Khattar, 2019). An empirical challenge is that variation in historical sex ratios could reflect unobservable characteristics. To tackle this, we instrument the historical sex ratio by the sex ratio among convicts only. This instrument is highly relevant since most of the white Australian population initially consisted of convicts. Moreover, convicts were not free to move: a centralized assignment scheme determined their location as a function of labor needs, which we control for by initial economic specialization. Throughout the analysis, we also control for time-invariant geographic and historic characteristics as well as key present-day controls (sex ratio, population, and urbanization).
Masculinity norms among Australian men today
Using the above empirical strategy, we derive four sets of results:
1. Violence, suicide, and health
We first assess the impact of historically skewed sex ratios on present-day violence and health outcomes. Evidence suggests that men adhering to traditional masculinity norms attach a stronger stigma to mental health problems and tend to avoid health services. As a proxy for the avoidance of preventative health care we use local suicide and prostate cancer rates. Prostate cancer is often curable if treated early, but avoidance of diagnosis is a public health concern. The endorsement of strict masculinity norms is also associated with aggression, excessive drinking, and smoking.
Our estimates show that today, the rates of assault and sexual assault are higher in parts of Australia that were more male biased in the past. A one unit increase in the historical sex ratio (defined as the ratio of the number of men over the number of women) is associated with an 11 percent increase in the rate of assault and a 16 percent increase in sexual assaults. We also find strong evidence of elevated rates of male suicide, prostate cancer, and lung disease in these areas. For male suicide – the leading cause of death for Australian men under 45 – a one unit increase in the historical sex ratio is associated with a staggering 26 percent increase.
2. Occupational gender segregation
A second manifestation of male identity is occupational choice. Our results paint a striking picture. A one unit increase in the sex ratio is associated with a nearly 1 percentage point shift from the share of men employed in neutral (e.g. real estate, retail) or stereotypically female occupations (e.g. teachers, receptionists) to stereotypically male occupations (e.g. carpenters, metal workers).
3. Support for same-sex marriage
We capture the political expression of masculine identity by opposition to same-sex marriage, which we measure using voting records from the nation-wide referendum on same-sex marriage in 2017. Our results show that the share of votes in favor of marriage equality is substantially lower in areas where sex ratios were more male biased in the past. A one unit increase in the historical sex ratio is associated with a nearly 3 percentage point decrease in support for same-sex marriage. This is slightly over 6 percent of the mean.
4. Bullying at school
Lastly, we find that boys, but not girls, are more likely to be bullied at school in areas that were more male biased in the past. The magnitude of these results is considerable and in line with that of the results for assaults (measured in adults). A one unit increase in the historical sex ratio is associated with a 13.7 (5.2) percentage point higher likelihood of parents (teachers) reporting that boys are bullied. This suggests that masculinity norms are perpetuated through horizontal transmission: peer pressure, starting at a young age in the playground.
We find that historically male-biased sex ratios forged a culture of male violence, help avoidance, and self-harm that persists to the present day in Australia. While our experimental setting is unique, we believe that our findings can inform the debate about the long-term socioeconomic consequences and risks of skewed sex ratios in many developing countries such as China, India, and parts of the Middle East. In these settings, sex-selective abortion and mortality as well as the cultural relegation and seclusion of women have created societies with highly skewed sex ratios. Our results suggest that the masculinity norms that develop as a result may not only be detrimental to (future generations of) men themselves but can also have important repercussions for other groups in society, in particular women and sexual minorities.
Our findings also align with an extensive psychological and medical literature that connects traditional masculinity norms to an unwillingness among men to seek timely medical help or to engage in preventive health care and protective health measures (e.g. Himmelstein and Sanchez (2016) and Salgado et al. (2019)). This suggests that voluntary observance of health measures, such as social distancing during the COVID-19 pandemic, may be considerably lower among men who adhere to traditional masculinity norms.
Akerlof, George A., and Rachel E. Kranton (2000), Economics and Identity, Quarterly Journal of Economics 115(3), 715–753.
Giuliano, Paola (2018), Gender: A Historical Perspective, The Oxford Handbook of Women and the Economy, Ed. Susan Averett, Laura Argys and Saul Hoffman. Oxford University Press, New York.
Grosjean, Pauline, and Rose Khattar (2019), It’s Raining Men! Hallelujah? The Long-Run Consequences of Male-Biased Sex Ratios, The Review of Economic Studies, 86(2), 723–754.
Himmelstein, M.S. and D.T. Sanchez (2016), Masculinity Impediments: Internalized Masculinity Contributes to Healthcare Avoidance in Men and Women, Journal of Health Psychology, 21, 1283–1292.
Mahalik, J.R., B.D. Locke, L.H. Ludlow, M.A. Diemer, R.P.J. Scott, M. Gottfried, and G. Freitas (2003), Development of the Conformity to Masculine Norms Inventory, Psychology of Men & Masculinity, 4(1), 3–25.
Salgado, D.M., A.L. Knowlton, and B.L. Johnson (2019), Men’s Health-Risk and Protective Behaviors: The Effects of Masculinity and Masculine Norms, Psychology of Men & Masculinities, 20(2), 266–275.
WHO (2013), Review of Social Determinants and the Health Divide in the WHO European Region, World Health Organization, Regional Office for Europe, Copenhagen.
Ralph De Haas, a Dutch national, is the Director of Research at the European Bank for Reconstruction and Development (EBRD) in London. He is also a part-time Associate Professor of Finance at Tilburg University, a CEPR Research Fellow, a Fellow at the European Banking Center, a Visiting Senior Fellow at the Institute of Global Affairs at the London School of Economics and Political Science, and a Research Associate at the ZEW–Leibniz Centre for European Economic Research. Ralph earned a PhD in economics from Utrecht University and is the recipient of the 2014 Willem F. Duisenberg Fellowship Prize. He has published in the Journal of Financial Economics, the Review of Financial Studies, the Review of Finance, the Journal of International Economics, the American Economic Journal: Applied Economics, the Journal of the European Economic Association, and various other peer-reviewed journals. Ralph’s research interests include global banking, development finance and financial intermediation more broadly. He is currently working on randomized controlled trials related to financial inclusion in Morocco and Turkey.
Pauline Grosjean is a Professor in the School of Economics at UNSW. Previously at the University of San Francisco and the University of California at Berkeley, she has also worked as an Economist at the European Bank for Reconstruction and Development. She completed her PhD in economics at the Toulouse School of Economics in 2006 after graduating from the Ecole Normale Supérieure. Her research studies the historical and dynamic context of economic development. In particular, she focuses on how culture and institutions interact and shape long-term economic development and individual behavior. She has published research that studies the historical process of a wide range of factors that are crucial for economic development, including cooperation and violence, trust, gender norms, support for democracy and for market reforms, immigration, preferences for education, and conflict.
Victoria Baranov’s research explores how health, psychological factors, and norms interact with poverty and economic development. Her recent work has focused on maternal depression and its implications for the intergenerational transmission of disadvantage. Her work has been published in the American Economic Review, American Economic Journal: Applied Economics, the Journal of Health Economics and other peer-reviewed journals across multiple disciplines. Victoria received her PhD in Economics from the University of Chicago after graduating from Barnard College. She is currently a Senior Lecturer in the Economics Department at the University of Melbourne and has affiliations with the Centre for Market Design, the Life Course Centre, and the Institute of Labor Studies (IZA).
This piece is the result of a collaboration between the Economic History Review, the Journal of Economic History, Explorations in Economic History and the European Review of Economic History. More details and special thanks below. Part A is available at this link
As the world grapples with a pandemic, informed views based on facts and evidence have become all the more important. Economic history is a uniquely well-suited discipline to provide insights into the costs and consequences of rare events, such as pandemics, as it combines the tools of an economist with the long perspective and attention to context of historians. The editors of the main journals in economic history have thus gathered a selection of recently-published articles on epidemics, disease and public health, generously made available by the publishers free of charge, so that we may continue to learn from the decisions of humans and policy makers confronting earlier episodes of widespread disease and pandemics.
Generations of economic historians have studied disease and its impact on societies across history. However, as the discipline has continued to evolve with improvements in both data and methods, researchers have uncovered new evidence about episodes from the distant past, such as the Black Death, as well as more recent global pandemics, such as the Spanish Influenza of 1918. In this second instalment of The Long View on Epidemics, Disease and Public Health: Research from Economic History, the editors present a review of two major themes that have featured in the analysis of disease. The first includes articles that discuss the economic impacts of historical epidemics and the official responses they prompted. The second turns to the more optimistic story of the impact of public health regulation and interventions, and the benefits thereby generated.
My Tawney lecture reassessed the relationship between slavery and industrial capitalism in both Britain and the United States. The thesis expounded by Eric Williams held that slavery and the slave trade were vital for the expansion of British industry and commerce during the 18th century but were no longer needed by the 19th. My lecture confirmed both parts of the Williams thesis: the 18th-century Atlantic economy was dominated by sugar, which required slave labor; but after 1815, British manufactured goods found diverse new international markets that did not need captive colonial buyers, naval protection, or slavery. Long-distance trade became safer and cheaper, as freight rates fell, and international financial infrastructure developed. Figure 1 (below) shows that the slave economies absorbed the majority of British cotton goods during the 18th century, but lost their centrality during the 19th, supplanted by a diverse array of global destinations.
I argued that this formulation applies with equal force to the upstart economy across the Atlantic. The mainland North American colonies were intimately connected to the larger slave-based imperial economy. The northern colonies, holding relatively few slaves themselves, were nonetheless beneficiaries of the trading regime, protected against outsiders by British naval superiority. Between 1768 and 1772, the British West Indies were the largest single market for commodity exports from New England and the Middle Atlantic, dominating sales of wood products, fish and meat, and accounting for significant shares of whale products, grains and grain products. The prominence of slave-based commerce explains the arresting connections reported by C. S. Wilder, associating early American universities with slavery. Thus, part one of the Williams thesis also holds for 18th-century colonial America.
Insurgent scholars known as New Historians of Capitalism argue that slavery, specifically slave-grown cotton, was critical for the rise of the U.S. economy in the 19th century. In contrast, I argued that although industrial capitalism needed cheap cotton, cheap cotton did not need slavery. Unlike sugar, cotton required no large investments of fixed capital and could be cultivated efficiently at any scale, in locations that would have been settled by free farmers in the absence of slavery. Early mainland cotton growers deployed slave labour not because of its productivity or aptness for the new crop, but because they were already slave owners, searching for profitable alternatives to tobacco, indigo, and other declining crops. Slavery was, in effect, a ‘pre-existing condition’ for the 19th-century American South.
To be sure, U.S. cotton did indeed rise ‘on the backs of slaves’, and no cliometric counterfactual can gainsay this brute fact of history. But it is doubtful that this brutal system served the long-run interests of textile producers in Lancashire and New England, as many of them recognized at the time. As argued here, the slave South underperformed as a world cotton supplier, for three distinct though related reasons: in 1807 the region closed the African slave trade, yet failed to recruit free migrants, making labour supply inelastic; slave owners neglected transportation infrastructure, leaving large sections of potential cotton land on the margins of commercial agriculture; and because of the fixed-cost character of slavery, even large plantations aimed at self-sufficiency in foodstuffs, limiting the region’s overall degree of market specialization. The best evidence that slavery was not essential for cotton supply is what happened when slavery ended. After war and emancipation, merchants and railroads flooded into the southeast, enticing previously isolated farm areas into the cotton economy. Production in plantation areas gradually recovered, but the biggest source of new cotton came from white farmers in the Piedmont. When the dust settled in the 1880s, India, Egypt, and slave-using Brazil had retreated from world markets, and the price of cotton in Liverpool returned to its antebellum level. See Figure 2.
The New Historians of Capitalism also exaggerate the importance of the slave South for accelerated U.S. growth. The Cotton Staple Growth hypothesis advanced by Douglass North was decisively refuted by economic historians a generation ago. The South was not a major market for western foodstuffs and consumed only a small and declining share of northern manufactures. International and interregional financial connections were undeniably important, but thriving capital markets in northeastern cities clearly predated the rise of cotton, and connections to slavery were remote at best. Investments in western canals and railroads were in fact larger, accentuating the expansion of commerce along East-West lines.
It would be excessive to claim that Anglo-American industrial and financial interests recognized the growing dysfunction of the slave South, and in response fostered or encouraged the antislavery campaigns that culminated in the Civil War. A more appropriate conclusion is that because of profound changes in technologies and global economic structures, slavery — though still highly profitable to its practitioners — no longer seemed essential for the capitalist economies of the 19th-century world.
by Anna Missiaia and Kersten Enflo (Lund University)
This research is due to be published in the Economic History Review and is currently available on Early View.
For a long time, scholars have thought of regional inequality merely as a by-product of modern economic growth: in a Kuznets-style interpretation, the front-running regions increase their income levels, and with them regional inequality, during industrialization; only when the other regions catch up does overall regional inequality decrease, completing the inverted-U-shaped pattern. But early empirical research on this theme focused largely on the 20th century, ignoring the industrial take-off of many countries (Williamson, 1965). More recent empirical studies have pushed the temporal boundary back to the mid-19th century, finding that inequality in regional GDP was already high at the outset of modern industrialization (see, for instance, Rosés et al., 2010 on Spain and Felice, 2018 on Italy).
The main constraint on taking the estimations well into the pre-industrial period is the availability of suitable regional sources. The exceptional quality of Swedish sources allowed us, for the first time, to estimate a dataset of regional GDP for a European economy going back to the 16th century (Enflo and Missiaia, 2018). The estimates used here for 1571 are largely based on a one-off tax proportional to yearly production: the Swedish Crown imposed this tax on all Swedish citizens in order to pay a ransom for the strategic Älvsborg castle, which had just been conquered by Denmark. For the period 1750-1850, the estimates rely on standard population censuses. By connecting the new series to the existing ones from 1860 onwards by Enflo et al. (2014), we obtain the longest regional GDP series for any country.
We find that inequality increased dramatically between 1571 and 1750 and remained high until the mid-19th century. Thereafter, it declined during the modern industrialization of the country (Figure 1). Our results overturn the traditional view that regional divergence can only originate during an industrial take-off.
Figure 1. Coefficient of variation of GDP per capita across Swedish counties, 1571-2010.
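The inequality measure plotted in Figure 1 is the coefficient of variation, the standard deviation of county GDP per capita divided by its mean. As a minimal illustration of how such a series behaves (the numbers below are invented for the sketch, not our estimates):

```python
import numpy as np

def coeff_of_variation(gdp_per_capita):
    """Coefficient of variation: standard deviation divided by the mean."""
    x = np.asarray(gdp_per_capita, dtype=float)
    return x.std() / x.mean()

# Invented county GDP per capita values (index, national mean = 100)
regions_1571 = [95, 100, 105, 98, 102]   # counties relatively equal
regions_1850 = [60, 100, 180, 75, 140]   # mining/port counties pull ahead

cv_1571 = coeff_of_variation(regions_1571)
cv_1850 = coeff_of_variation(regions_1850)
assert cv_1850 > cv_1571  # inequality rises well before full industrialization
```

The measure is unit-free, so it can be compared across benchmark years even though the underlying estimates come from very different sources (a 16th-century tax record versus 18th- and 19th-century censuses).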
Figure 2 shows the relative disparities in four benchmark years. While the country appeared relatively equal in 1571, between 1750 and 1850 both the mining districts in central and northern Sweden and the port cities of Stockholm and Gothenburg pulled ahead.
Figure 2. The relative evolution of GDP per capita, 1571-1850 (Sweden=100).
The second part of the paper is devoted to the study of the drivers of pre-industrial regional inequality. Decomposing the Theil index for GDP per worker, we show that regional inequality was driven by structural change, meaning that regions diverged because they specialized in different sectors. A handful of regions specialized in either early manufacturing or in mining, both with a much higher productivity per worker compared to agriculture.
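The Theil index lends itself to this kind of analysis because it decomposes exactly: total inequality across region-sector cells equals a between-sector term plus an income-share-weighted average of within-sector inequality. A sketch with invented figures (not our data) illustrates the identity:

```python
import numpy as np

def theil_T(income, workers):
    """Theil T index: sum_i s_i * ln(s_i / p_i), where s_i are income
    shares and p_i are worker shares of the units compared."""
    s = np.asarray(income, dtype=float) / np.sum(income)
    p = np.asarray(workers, dtype=float) / np.sum(workers)
    return float(np.sum(s * np.log(s / p)))

# Invented region-by-sector data: rows = regions,
# columns = sectors (agriculture, early industry/mining)
income  = np.array([[10.0, 2.0], [8.0, 6.0], [4.0, 9.0]])
workers = np.array([[20.0, 1.0], [15.0, 2.0], [8.0, 3.0]])

# Total inequality across all region-sector cells
T_total = theil_T(income.ravel(), workers.ravel())

# Between-sector component: each sector total treated as one unit
T_between = theil_T(income.sum(axis=0), workers.sum(axis=0))

# Within-sector component: income-share-weighted Theil inside each sector
sector_share = income.sum(axis=0) / income.sum()
T_within = sum(sector_share[j] * theil_T(income[:, j], workers[:, j])
               for j in range(income.shape[1]))

# The decomposition is exact
assert abs(T_total - (T_between + T_within)) < 1e-9
```

When the between-sector term dominates, as in our findings, regions diverge mainly because they specialize in sectors with different productivity per worker rather than because productivity differs within a sector.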
To explain this different trajectory, we use a theoretical framework introduced by Strulik and Weisdorf (2008) in the context of the British Industrial Revolution: in regions with a higher share of GDP in agriculture, technological advancements lead to productivity improvements but also to a proportional increase in population, impeding the growth in GDP per capita as in a classic Malthusian framework. Regions with a higher share of GDP in industry, on the other hand, experienced limited population growth due to the increasing relative price of children, leading to a higher level of GDP per capita. Regional inequality in this framework arises from a different role of the Malthusian mechanism in the two sectors.
Our work speaks to a growing literature on the origin of regional divergence and represents the first effort to perform this type of analysis before the 19th century.
While it is often assumed that debtors’ prisons were illogical and ineffective, my research demonstrates that they were economically effective for creditors, even though they could ruin the lives of debtors.
The debtors’ prison is a frequent historical bogeyman, a Dickensian symptom of the illogical cruelty of the past that disappeared with enlightened capitalism. Since imprisoning someone who could not afford to pay their debts, keeping them away from work and family, seems futile, it is assumed that creditors did so merely to satisfy petty revenge.
But debtors’ prisons were a feature of most of English history from 1283, and though their power was curbed in 1869, debtors were still being imprisoned in the 1920s. The reason they persisted, as my research shows, is that, for creditors, they worked well.
The majority of imprisoned debtors in the eighteenth century were released relatively quickly, having paid their creditors. This finding is timely, as events in America demonstrate how easily these prisons can return.
As today, most eighteenth-century purchases were made on credit, owing to delays in the payment of wages, the limited supply of coinage, and cultural preferences for buying goods on credit. But credit was based on a range of factors including personal reputation, social rank and moral status. Informal oral contracts could frequently be made with little sense of an individual’s actual financial status, particularly if they were a gentleman or aristocrat. Because contracts were not secured on goods and court processes were slow, it was difficult to seize property to recover debts when creditors required money.
In this period, creditors were able to imprison debtors without trial until they paid what they owed or died. The registers of a London debtors’ prison, the Woodstreet Compter (1741-1815), reveal that creditors had good reasons to do so. Most of the 10,156 debtors recorded in the registers left prison relatively quickly: 91% were released in under a year, while almost a third were released in less than 100 days.
In addition, 84% were ‘discharged’ by their creditors, indicating that either the prisoner had paid their debts or a new contract had been agreed. Imprisonment forced debtors to find a way to pay or at least to renegotiate with creditors.
Prisoners were not the poor, but usually middle-class people in small amounts of debt. One of the largest groups was shopkeepers (about 20% of prisoners), though male and female prisoners came from across society, with gentlemen, cheesemongers, lawyers, wigmakers and professors rubbing shoulders.
Most used their time to coordinate the selling of goods to raise money, or borrowed yet more from family and friends. Many others called in their own debts by having their debtors imprisoned as well.
As prisons were relatively open, some debtors worked off their debts. John Grano, a trumpeter who worked for Handel, imprisoned in the 1720s, taught music lessons from his cell. Others sold liquor or food to fellow prisoners or continued as best they could at their trade in the prison yard. Those with a literary mind, such as Daniel Defoe, wrote their way out.
Though credit works on different terms today, it remains true that coercive imprisonment is effective at securing repayment. In recent years, a number of US states have operated what amount to debtors’ prisons, in which the poor, fined by the state usually for traffic violations, are held until they pay what they owe.
Attorney General Jeff Sessions even retracted an Obama-era memo in December aimed at abolishing the practice. While eighteenth-century prisons worked effectively for creditors, they could ruin the lives of debtors, who were forced to sell anything they could to pay their dues and escape the unsanitary holes in which they were kept without trial. My research shows that the assumption that debtors’ prisons did not work, and therefore will not return, is false.
by Markus Eberhardt (School of Economics, University of Nottingham)
One of the seminal questions in world and Chinese economic history is why China, in contrast to Western Europe, failed to industrialise during the nineteenth century, leading to differential development paths commonly referred to as the Great Divergence.
Social and economic historians have tried to tackle this issue by identifying potential sufficient conditions for industrialisation. One candidate condition has been the degree of national or sub-national market integration within Asia and Western Europe on the eve of industrialisation. A long-held view maintained that Western Europe was characterised by integrated markets, which had taken root because of state-supported property rights institutions. China, in contrast, despite her unified political system, was said to have failed in creating a unified national market.
This hypothesis of differential levels of market integration has been seriously challenged more recently, most notably in the work of Kenneth Pomeranz (2000), who concluded that factor and product markets in late eighteenth century Western Europe were ‘probably further from perfect competition… than those in most of China.’
Shiue and Keller (2007) carried out a formal cross-continental comparison of rice markets in Southern China during 1742-95 with wheat markets in Europe in the eighteenth and nineteenth centuries, providing the first econometric evidence for Pomeranz’s conjecture of equivalent goods market integration in both regions.
Much of the subsequent research has adopted the conclusion that ‘in the late eighteenth century… long-distance [grain] trade in China operated more efficiently than in [continental] Europe’ (von Glahn, 2016).
In related work (Bernhofen et al, 2016) we use a number of alternative empirical methods (including the cointegration analysis employed by Shiue and Keller, 2007) along with higher frequency grain price data for China and Western Europe to provide consistent evidence for a substantial decline in Chinese market integration over time: by 1800, China’s grain markets were fragmented, including in the economically most advanced regions (Jiangnan).
Our empirical implementations account for general equilibrium effects widely acknowledged to have distorted earlier investigations of market integration using price data.
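The intuition behind these price-based tests is that in integrated markets, arbitrage transmits shocks from one market to another, so grain prices move together; in fragmented markets they drift apart. The sketch below is only a toy illustration of that idea with simulated prices, not the cointegration procedure used in the paper, using correlation of price changes as a crude comovement proxy:

```python
import numpy as np

def price_comovement(p1, p2):
    """Crude integration proxy: correlation of period-to-period price
    changes. (The paper uses cointegration analysis; this is only a
    simplified stand-in for illustration.)"""
    d1, d2 = np.diff(p1), np.diff(p2)
    return float(np.corrcoef(d1, d2)[0, 1])

rng = np.random.default_rng(42)
n = 240  # e.g. 20 years of monthly observations

# Shocks shared across markets when trade links them
common = rng.normal(size=n)

# Integrated pair: prices driven largely by the same shocks
p_a = 100 + np.cumsum(common + 0.3 * rng.normal(size=n))
p_b = 100 + np.cumsum(common + 0.3 * rng.normal(size=n))

# Fragmented market: an independent random walk
p_c = 100 + np.cumsum(rng.normal(size=n))

# Linked markets comove far more strongly than fragmented ones
assert price_comovement(p_a, p_b) > price_comovement(p_a, p_c)
```

Cointegration tests formalize this by asking whether the gap between two price series is stationary; declining comovement over the eighteenth century is what underpins our finding of fragmentation by 1800.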
In our new study, to be presented at the Economic History Society’s 2019 annual conference, I and my co-authors (Daniel Bernhofen, Jianan Li and Stephen Morgan) bring together arguments for such an early decline in Qing grain market integration from the rich economic and social history literatures.
We use our estimates for market integration to test empirically one prominent factor: we investigate the role played by the unprecedented population growth and internal migration during the eighteenth century and its economic, social, political and environmental implications.
In studies of early modern Europe, population growth was found to go hand in hand with market expansion and increased integration. In China, population growth and its uneven regional distribution not only limited the surplus grain available for trade, but also exerted severe pressure on an inherently unstable water control system that pitted farming against flood prevention and the waterway transportation of goods, creating increasingly insurmountable challenges for water engineering.
In combination with rigid fiscal rules, population growth constrained the ability of the Qing state to govern this vast empire effectively. Local officials reacted to rising population pressure with ‘grain protectionism’, leading to temporary political borders, which further hampered the functioning of the market.
The narrative we develop is not that of a standard ‘Malthusian trap’, in which an acceleration in pre-modern agricultural growth is followed by population growth that dilutes per capita resources and thus keeps the economy in a ‘low-level equilibrium trap’. Instead, we describe an escalating ‘span of control’ problem, caused by a rigid and underfunded state apparatus, which placed increasing pressure on a small bureaucracy in the periphery as well as the core of the empire.
Figure: Population density growth and internal migration
We plot the annualised population density growth rates (in percent) between 1776 and 1820 for 211 prefectures. Black solid lines indicate provincial borders. The dashed line marks the early eighteenth century ‘frontier’ between developed and developing areas of Qing China (Myers and Wang, 2002). Arrows indicate major internal migration flows (stylised representation) during the eighteenth century. The two Northern migration strands actually extend beyond Qing China proper into Xinjiang and Manchuria.
Population density data are taken from Cao (2000), information on eighteenth century migration flows from Eliott (2009: 147), Entenmann (1980: 41f), Ho (1959: 139ff), Lee and Feng (1999: 118), Mann-Jones and Kuhn (1978: 109f, 132), Myers and Wang (2002: Map 9), shapefiles from CHGIS version 6 (2016).