Infant and child mortality by socioeconomic status in early nineteenth century England

by Hannaliis Jaadla (University of Cambridge)

The full article from this blog (co-authored with E. Potter, S. Keibek and R. J. Davenport) was published in The Economic History Review and is now available on Early View at this link

Figure 1. Thomas George Webster, ‘Sickness and health’ (1843). Source: The Wordsworth Trust, licensed under CC BY-NC-SA

Socioeconomic gradients in health and mortality are ubiquitous in modern populations. Today life expectancy is generally positively correlated with individual or ecological measures of income, educational attainment and status within national populations. However, in stark contrast to these modern patterns, there is little evidence for such pervasive advantages of wealth to survival in historical populations before the nineteenth century.

In this study, we tested whether a socioeconomic gradient in child survival was already present in early nineteenth-century England, using individual-level data on infant and child mortality for eight parishes from the Cambridge Group family reconstitution dataset (Wrigley et al. 1997). We used the paternal occupational descriptors routinely recorded in Anglican baptism registers for the period 1813–1837 to compare infant (under 1) and early childhood (age 1–4) mortality by social status. To capture differences in survivorship we compared multiple measures of status: HISCAM, HISCLASS, and a continuous measure of wealth estimated by ranking paternal occupations by the propensity for their movable wealth to be inventoried upon death (Keibek 2017). The main analytical tool was event history analysis, in which individuals were followed from baptism or birth through the first five years of life, or until their death or exit from the sample for other reasons.
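
For readers unfamiliar with event history analysis, the sketch below shows, in Python, the general shape of such a model. The file name, column names and covariates are hypothetical placeholders rather than the authors' data or code; the published article describes the actual specification.

```python
# Minimal sketch of an event-history (survival) model of early-life mortality,
# assuming a hypothetical data frame with one row per child: 'age_at_exit'
# (years, censored at 5), 'died' (1 = death observed) and paternal-status
# covariates. Illustrative only; not the authors' code or variable names.
import pandas as pd
from lifelines import CoxPHFitter

children = pd.read_csv("reconstitution_children.csv")  # hypothetical file

cph = CoxPHFitter()
cph.fit(
    children[["age_at_exit", "died", "hiscam_score", "father_labourer", "parish"]],
    duration_col="age_at_exit",   # follow-up from birth/baptism to exit
    event_col="died",             # 1 if the child died, 0 if censored
    strata=["parish"],            # allow each parish its own baseline hazard
)
cph.print_summary()               # hazard ratios by paternal status
```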

Were socioeconomic differentials in mortality present in the English population by the early nineteenth century, as suggested by theorists of historical social inequalities (Antonovsky 1967; Kunitz 1987)? Our results provide a qualified yes. We did detect differentials in child survival by paternal or household wealth in the first five years of life. However, the effects of wealth were muted and non-linear. Instead we found a U-shaped relationship between paternal social status and survival, with the children of poor labourers or wealthier fathers enjoying relatively high survival chances. Socioeconomic differentials emerged only after the first year of life (when mortality rates were highest), and were strongest at age one. Summed over the first five years of life, however, the advantages of wealth were marginal. Furthermore, the advantages of wealth were only observed once the anomalously low mortality of labourers’ children was taken into account.

As might be expected, these results provide evidence for the contribution of both environment and household or familial factors. In infancy, mortality varied between parishes; however, the environmental hazards associated with industrialising or urban settlements appear to have operated fairly equally on households of differing socioeconomic status. It is likely that most infants in our eight reconstitution parishes were breastfed throughout the first year of life, which probably conferred a ubiquitous advantage that overwhelmed other material differences in household conditions, for example, maternal nutrition.

To the extent that wealth conferred a survival advantage, did it operate through access to information, or to material resources? There was no evidence that literacy was important to child survival. However, our results suggest that cultural practices surrounding weaning may have been key. This was indicated by the peculiar age pattern of the socioeconomic gradient to survival, which was strongest in the second year of life, the year in which most children were weaned. We also found a marked survival advantage of longer birth intervals post-infancy, and this advantage accrued particularly to labourers’ children, because their mothers had longer than average birth intervals.

Our findings point to the importance of breastfeeding patterns in modulating the influence of socioeconomic status on infant and child survival. Breastfeeding practices varied enormously in historical populations, both geographically and by social status (Thorvaldsen 2008). These variations, together with the differential sorting of social groups into relatively healthy or unhealthy environments, probably explain the difficulty in pinpointing the emergence of socioeconomic gradients in survival, especially in infancy.

At ages 1–4 years we were able to demonstrate that the advantages of wealth and of a labouring father operated even at the level of individual parishes. That is, these advantages were not simply a function of the sorting of classes or occupations into different environments. These findings therefore implicate differences in household practices and conditions in the survival of children in our sample. This was clearest in the case of labourers. Labourers’ children enjoyed higher survival rates than predicted by household wealth, and this was associated with longer birth intervals (consistent with longer breastfeeding), as well as other factors that we could not identify, but which were probably not a function of rural isolation within parishes. Why labouring households should have differed in these ways remains unexplained.

To contact the author: hj309@cam.ac.uk

References

Antonovsky, A., ‘Social class, life expectancy and overall mortality’, Milbank Memorial Fund Quarterly, 45 (1967), pp. 31–73.

Keibek, S. A. J., ‘The male occupational structure of England and Wales, 1650–1850’, (unpub. Ph.D. thesis, Univ. of Cambridge, 2017).

Kunitz, S.J., ‘Making a long story short: a note on men’s height and mortality in England from the first through the nineteenth centuries’, Medical History, 31 (1987), pp. 269–80.

Thorvaldsen, G., ‘Was there a European breastfeeding pattern?’ History of the Family, 13 (2008), pp. 283–95.

Land, Ladies, and the Law: A Case Study on Women’s Land Rights and Welfare in Southeast Asia in the Nineteenth Century

by Thanyaporn Chankrajang and Jessica Vechbanyongratana (Chulalongkorn University)

The full article from this blog is forthcoming in The Economic History Review

 

Security of land rights empowers women with greater decision-making power (Doss, 2013), potentially impacting both land-related investment decisions and the allocation of goods within the household (Allendorf, 2007; Goldstein et al., 2008; Menon et al., 2017). In historical contexts where land was the main factor of production for most economic activities, little is known about women’s land entitlements. Historical gender-disaggregated land ownership data are scarce, making quantitative investigations of the past challenging. In new research we overcome this problem by analyzing rare, gender-disaggregated, historical land rights records to determine the extent of women’s land rights, and their implications, in nineteenth-century Bangkok.

First, we utilized orchard land deeds issued in Bangkok during the 1880s (Figure 1). These deeds were both landownership and tax documents. Land tax was assessed based on the enumeration of mature orchard trees producing high-value fruits, such as areca nuts, mangoes and durian. From 9,018 surviving orchard deeds, we find that 82 per cent of Bangkok orchards listed at least one woman as an owner, indicating that women did possess de jure usufruct land rights under the traditional land rights system. By analyzing the number of trees cultivated on each property (proxied by tax per hectare), we find these rights were upheld in practice and incentivized agricultural productivity. Controlling for owner and plot characteristics, plots with only female owners on average cultivated 6.7 per cent more trees per hectare than plots with mixed-gender ownership, while male-owned plots cultivated 6.7 per cent fewer trees per hectare than mixed-gender plots. The evidence indicates higher levels of investment in cash crop cultivation among female landowners.
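
As a rough illustration of the kind of comparison reported above, the sketch below regresses log tree density on ownership type with controls. All file and variable names are hypothetical; the article's actual specification and controls differ.

```python
# Illustrative regression of (log) tree density on ownership type, mirroring
# the mixed-gender baseline comparison described above. Variable names and
# controls are hypothetical placeholders, not the authors' specification.
import pandas as pd
import statsmodels.formula.api as smf

deeds = pd.read_csv("orchard_deeds_1880s.csv")  # hypothetical file

model = smf.ols(
    "log_trees_per_ha ~ C(owner_type, Treatment(reference='mixed'))"
    " + plot_size + C(district)",
    data=deeds,
).fit(cov_type="HC1")
print(model.summary())
# Coefficients on 'female_only' and 'male_only' read approximately as
# percentage differences in tree density relative to mixed-gender plots.
```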

Figure 1. An 1880s Government Copy of an Orchard Land Deed Issued to Two Women. Source: Department of Lands Museum, Ministry of Interior.

 

The second part of our analysis assesses 217 land-related court cases to determine whether women’s land rights in Bangkok were protected from the late nineteenth century when land disputes increased. We find that ‘commoner’ women acted as both plaintiffs and defendants, and were able to win cases even against politically powerful men. Such secure land rights helped preserve women’s livelihoods.

Finally, based on an internationally comparable welfare estimation (Allen et al. 2011; Cha, 2015), we calculate an equivalent measure of a ‘bare bones’ consumption basket. We find that the median woman-owned orchard could annually support up to 10 adults. When women’s contributions to family income are recognized (Table 1), Bangkok’s welfare ratio was as high as 1.66 for the median household, demonstrating a larger household surplus than found in Japan, and comparable to those of Beijing and Milan during the same period (Allen et al. 2011).
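
The arithmetic behind such welfare ratios can be illustrated in a few lines of Python. Every number below is invented purely to show the calculation; the actual basket contents, prices and incomes are in the article and in Allen et al. (2011).

```python
# Back-of-the-envelope illustration of the welfare-ratio logic. All numbers
# are made up for illustration only; see the article for the real estimates.
basket_cost = 25.0        # hypothetical annual cost of one 'bare bones' basket
orchard_income = 250.0    # hypothetical annual income of the median orchard

# Adults supportable by the median woman-owned orchard
print(orchard_income / basket_cost)                # -> 10.0 adults

# Welfare ratio: household income over the cost of the household's baskets
household_income = 83.0                            # hypothetical, incl. women's income
household_basket_cost = 50.0                       # hypothetical household basket cost
print(household_income / household_basket_cost)    # -> 1.66
```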

Superficially, our findings seem to contradict historical and contemporary observations that land rights structures favor men (Doepke et al., 2012). However, our study typifies women’s economic empowerment in Thailand and Southeast Asia more generally. Since at least the early modern period, women in Southeast Asia have possessed relatively high social status and autonomy in marriage and family, literacy and literature, diplomacy and politics, and economic activities (Hirschman, 2017; Adulyapichet, 2001; Baker et al., 2017). The evidence we provide supports the latter interpretation, and is consonant with other Southeast Asian land-related features, such as matrilocality and matrilineage (Huntrakul, 2003).

 

Table 1. Source: see original article.

To contact the authors:

Thanyaporn Chankrajang, Thanyaporn.C@chula.ac.th

Jessica Vechbanyongratana, ajarn.jessica@gmail.com
@j_vechbany

 

References

Adulyapichet, A., ‘Status and roles of Siamese women and men in the past: a case study from Khun Chang Khun Phan’ (thesis, Silpakorn Univ., 2001).

Allen, R. C., Bassino, J. P., Ma, D., Moll‐Murata, C., and Van Zanden, J. L. ‘Wages, prices, and living standards in China, 1738–1925: in comparison with Europe, Japan, and India’, Economic History Review, 64 (2011), pp. 8-38.

Allendorf, K., ‘Do women’s land rights promote empowerment and child health in Nepal?’, World Development, 35 (2007), pp. 1975-88.

Baker, C., and Phongpaichit, P., A history of Ayutthaya: Siam in the early modern world (Cambridge, 2017).

Cha, M. S. ‘Unskilled wage gaps within the Japanese Empire’, Economic History Review, 68 (2015), pp. 23-47.

Chankrajang, T. and Vechbanyongratana, J. ‘Canals and orchards: the impact of transport network access on agricultural productivity in nineteenth-century Bangkok’, Journal of Economic History, forthcoming.

Chankrajang, T. and Vechbanyongratana, J. ‘Land, ladies, and the law: a case study on women’s land rights and welfare in Southeast Asia in the nineteenth century’, Economic History Review, forthcoming.

Doepke, M., Tertilt, M., and Voena, A., ‘The economics and politics of women’s rights’, Annual Review of Economics, 4 (2012), pp. 339-72.

Doss, C., ‘Intrahousehold bargaining and resource allocation in developing countries’, World Bank Research Observer, 28 (2013), pp. 52-78.

Goldstein, M., and Udry, C., ‘The profits of power: land rights and agricultural investment in Ghana’, Journal of Political Economy, 116 (2008), pp. 981-1022.

Hirschman, C. ‘Gender, the status of women, and family structure in Malaysia’, Malaysian Journal of Economic Studies 53 (2017), pp. 33-50.

Huntrakul, P., ‘Thai women in the three seals code: from matriarchy to subordination’, Journal of Letters, 32 (2003), pp. 246-99.

Menon, N., van der Meulen Rodgers, Y., and Kennedy, A. R., ‘Land reform and welfare in Vietnam: why gender of the land‐rights holder matters’, Journal of International Development, 29 (2017), pp. 454-72.

 

Early-life disease exposure and occupational status

by Martin Saavedra (Oberlin College and Conservatory)

This blog is part H of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. This blog is based on the article ‘Early-life disease exposure and occupational status: The impact of yellow fever during the 19th century’, in Explorations in Economic History, 64 (2017): 62-81. https://doi.org/10.1016/j.eeh.2017.01.003   

 

A girl suffering from yellow fever. Watercolour. Available at Wellcome Images.

Epidemics and other shocks to public health have the potential to affect human capital accumulation. A literature in health economics on the ‘fetal origins hypothesis’ has examined how in utero exposure to infectious disease affects labor market outcomes. Individuals may be more sensitive to health shocks during the developmental stage of life than during later stages of childhood. For good reason, much of this literature focuses on the 1918 influenza pandemic, which was a huge shock to mortality and one of the few events that can be visibly observed when examining life expectancy trends in the United States. However, there are limitations to looking at the 1918 influenza pandemic because it coincided with the First World War. Another complication in this literature is that cities with outbreaks of infectious disease often engaged in many forms of social distancing by closing schools and businesses. This is true for the 1918 influenza pandemic, but also for other diseases. For example, many schools were closed during the polio epidemic of 1916.

So, how can we estimate the long-run effects of infectious disease when cities simultaneously respond to outbreaks? One possibility is to look at a disease that differentially affected some groups within the same city, such as yellow fever during the nineteenth century. Yellow fever is a viral infection spread by the Aedes aegypti mosquito and is still endemic in parts of Africa and South America. The disease kills roughly 50,000 people per year, even though a vaccine has existed for decades. Symptoms include fever, muscle pain, chills, and jaundice, from which the disease derives its name.

During the eighteenth and nineteenth centuries, yellow fever plagued American cities, particularly port cities that traded with the Caribbean. In 1793, over 5,000 Philadelphians likely died of yellow fever. This would be a devastating number in any city, even by today’s standards, but it is even more so considering that in 1790 Philadelphia had a population of less than 29,000.

By the mid-nineteenth century, Southern port cities grew, and yellow fever stopped occurring in cities as far north as Philadelphia. The graph below displays the number of yellow fever fatalities in four southern port cities — New Orleans, LA; Mobile, AL; Charleston, SC; and Norfolk, VA — during the nineteenth century. Yellow fever was sporadic, devastating a city in one year and often leaving it untouched in the next. For example, yellow fever killed nearly 8,000 New Orleanians in 1853, and over 2,000 in both 1854 and 1855. Over the next two years, yellow fever killed fewer than 200 New Orleanians per year; it then came back, killing over 3,500 in 1858. Norfolk, VA was struck only once, in 1855. Because yellow fever had never struck Norfolk during milder years, the population lacked immunity and approximately 10 percent of the city died in 1855. Charleston and Mobile show similar sporadic patterns. Likely owing to the Union’s naval blockade, yellow fever did not visit any American port city in large numbers during the Civil War.

 

Source: As per original article.

 

Immigrants were particularly prone to yellow fever because they often came from European countries rarely visited by the disease. Native New Orleanians, however, typically caught yellow fever during a mild year as children and were then immune for the rest of their lives. For this reason, yellow fever earned the name ‘the stranger’s disease’.

Data from the full count of the 1880 census show that yellow fever fatality rates during an individual’s year of birth negatively affected adult occupational status, but only for individuals with foreign-born mothers. Those with US-born mothers were relatively unaffected by the disease. There are also effects for those who were exposed to yellow fever one or two years after birth, but there are no effects, not even for those with immigrant mothers, of exposure three or four years after birth. These results suggest that early-life exposure to infectious disease, not just city-wide responses to disease, influences human capital development.
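
A stylized version of this comparison is an interaction specification of the kind sketched below; the code uses hypothetical file and variable names and is not the estimation used in the article.

```python
# Sketch of an interaction specification: occupational status regressed on
# yellow-fever fatality rates in the birth year, interacted with having a
# foreign-born mother, plus birth-city and birth-year fixed effects.
# Hypothetical variable names; the article's specification is richer.
import pandas as pd
import statsmodels.formula.api as smf

men = pd.read_csv("census_1880_linked.csv")  # hypothetical file

model = smf.ols(
    "occ_score ~ fever_rate_birth_year * foreign_mother"
    " + C(birth_city) + C(birth_year)",
    data=men,
).fit(cov_type="cluster", cov_kwds={"groups": men["birth_city"]})
print(model.summary())
# The interaction term captures the additional effect of exposure for the
# children of immigrant mothers, the group lacking acquired immunity.
```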

 


 

Martin Saavedra

Martin.Saavedra@oberlin.edu

 

The Origins of Political and Property Rights in Bronze Age Mesopotamia

by Giacomo Benati, Carmine Guerriero, Federico Zaina (University of Bologna)

The full paper from this blog is available here

Mesopotamian map of canals. Available at <http://factsanddetails.com/world/cat56/sub363/item1513.html>

Despite the overwhelming empirical evidence documenting the relevance of inclusive political institutions and strong property rights, we still lack a unified framework identifying their determinants and their interaction. We develop a model to address this deficiency, and we test its implications on novel data on Greater Mesopotamia during the Bronze Age.

This region developed the first recorded forms of stable state institutions, which can be credibly linked to geography. Worsening climatic conditions between the end of the Uruk period (3300-3100 BC) and the beginning of the Jemdet Nasr and Early Dynastic periods (3100-2550 BC) reduced farming returns and forced the religious elites to share the power previously acquired from the landholding households with rising military elites. This transformation led the peasants to engage in leasing, renting and tenure-for-service contracts requiring rents and corvée, such as participation in infrastructure projects and a conscripted army. Being an empowerment mechanism, the latter was the citizens’ preferred public good. Next, the Pre-Sargonic period (2550-2350 BC) witnessed a milder climate, which curbed the temple and palatial elites’ need to share their policy-making power. Finally, a period of harsher climate, and the consequent rise of long-distance trade as an alternative activity, allowed the town elites to establish themselves as the third decision-maker during the Mesopotamian empires period (2350-1750 BC). Reforms towards more inclusive political institutions were accompanied by a shift towards stronger farmers’ rights on land and a larger provision of public goods, especially those most valued by the citizens, i.e., the conscripted army.

To elucidate the incentives behind these stylized facts, we consider the interaction between a land-owning elite and citizens able to deliver a valuable harvest if the imperfectly observable farming conditions were favorable. To incentivize investment, the elite cannot commit to direct transfers, but can lean on two other instruments: establishing a more inclusive political process, which allows citizens to select tax rates and organize public good provision, and/or punishing citizens for suspected shirking by restricting their private rights.  This ‘stick’ is costly for the elite. When the expected harvest value is barely greater than the investment cost, citizens cooperate only under full property rights and more inclusive political institutions, allowing them to fully tax the output. When the investment return is intermediate, the elite keeps control over fiscal policies and can implement partial taxation. When, finally, the investment return is large, the elite can also weaken the protection of private rights. Yet, embracing the stick is optimal only if production is sufficiently transparent, and, thus, punishment effectively disciplines a shirking citizen. Our model has three implications. First, the inclusiveness of political institutions declines with expected farming returns and is unrelated to the opaqueness of farming. Second, the strength of property rights diminishes with the expected harvest return and is positively related to the opaqueness of farming. Finally, citizens’ expected utility from public good provision increases with the strength of political and property rights.

To evaluate these predictions, we study 44 major Mesopotamian polities observed between 3050 and 1750 BC. To proxy the expected farming return, we combine information on the growing-season temperature, averaged—as are all other non-institutional variables—over the previous half-century, and land suitability for wheat, barley and olive cultivation (Figure 1). This measure is strongly correlated with contemporaneous barley yields in l/ha. Turning to the opaqueness of the farming process, we consider the exogenous spread of viticulture through inter-palatial trade. Because of its diplomatic and ritual roles, wine represented one of the exports most valued by the ruling elites. Regarding common-interest goods, we gather information on the number of public and ritual buildings and the existence of a conscripted army. To measure the strength of political and property rights, we construct a five-point score rising with the division of decision-making power and a six-point index increasing when land exploitation by the elite was indirect rather than direct and/or farmers’ rights were enforced de jure rather than de facto. These two variables are based on events in a 40-year window around each time period.

Conditional on polity and half-century fixed effects, our OLS estimates imply that the strength of political and property rights is significantly and inversely related to the expected farming return, whereas only the protection of private property is significantly and positively driven by the opaqueness of farming. Finally, public good provision is unrelated to property rights protection but significantly and positively linked to the inclusiveness of the political process, and more so when the public good was the establishment of a conscripted army.
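
The fixed-effects regressions described here can be sketched as follows; the file and variable names are hypothetical and the clustering choice is an assumption, not taken from the paper.

```python
# Sketch of the fixed-effects OLS described above: institutional outcomes on
# expected farming return and farming opaqueness, with polity and half-century
# fixed effects. Hypothetical names; clustering by polity is an assumption.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("mesopotamia_polities.csv")  # hypothetical file

for outcome in ["political_rights_score", "property_rights_index"]:
    fit = smf.ols(
        f"{outcome} ~ expected_farming_return + farming_opaqueness"
        " + C(polity) + C(half_century)",
        data=panel,
    ).fit(cov_type="cluster", cov_kwds={"groups": panel["polity"]})
    print(outcome, fit.params[["expected_farming_return", "farming_opaqueness"]])
```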

These results open three crucial avenues for further research. First, did reforms towards stronger political and property rights foster economic development, thanks to the larger provision of public goods (Guerriero and Righi, 2019)? Second, did the most politically developed polities obstruct the market integration of the Mesopotamian empires, pushing the rulers to impose a complex bureaucracy on all of them and extractive policies on the less militarily relevant ones (de Oliveira and Guerriero, 2018; Guerriero, 2019a)? Finally, did reforms towards a more inclusive political process foster the centralization of the legal order, i.e., reforms towards statutory law, bright-line procedural rules and a strong protection of potential buyers’ reliance on their contracts (Guerriero 2016; 2019b)?

Figure 1: Political and Property Rights, Public Good Provision and Farming Return. Source: as per published paper.

 

References

de Oliveira, Guilherme, and Carmine Guerriero. 2018. “Extractive States: The Case of the Italian Unification.” International Review of Law and Economics, 56: 142-159.

Guerriero, Carmine. 2016. “Endogenous Legal Traditions.” International Review of Law and Economics, 46: 49-69.

Guerriero, Carmine. 2019a. “Endogenous Institutions and Economic Outcomes.” Forthcoming, Economica.

Guerriero, Carmine. 2019b. “Property Rights, Transaction Costs, and the Limits of the Market.” Unpublished.

Guerriero, Carmine, and Laura Righi. 2019. “The Origins of the State’s Fiscal Capacity: Culture, Democracy, and Warfare.” Unpublished.

North, Douglass C., and Barry R. Weingast. 1989. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History, 49: 803-832.

 

To contact the authors

Giacomo Benati (giacomo.benati2@unibo.it)

Carmine Guerriero (c.guerriero@unibo.it)

Federico Zaina (Federico.zaina@unibo.it)

Early View: Slavery and Anglo-American capitalism revisited

by Gavin Wright (Stanford University)

The full paper for this research has now been published in The Economic History Review and is available on Early View here

 

Slaves cutting sugar cane, taken from ‘Ten Views in the Island of Antigua’ by William Clark. Available at Wikimedia Commons.

For decades, scholars have debated the role of slavery in the rise of industrial capitalism, from the British Industrial Revolution of the eighteenth century to the acceleration of the American economy in the nineteenth century.

Most recent studies find an important element of truth in the thesis associated with Eric Williams that links the slave trade and slave-based commerce with early British industrial development. Long-distance markets were crucial supports for technological progress and for the infrastructure of financial markets and the shipping sector.

But the eighteenth-century Atlantic economy was dominated by sugar, and sugar was dominated by slavery. The role of the slave trade was central to the process, because it would have been all but impossible to attract a free labour force to the brutal and deadly conditions that prevailed in sugar cultivation. As the mercantilist Sir James Steuart asked in 1767: ‘Could the sugar islands be cultivated to any advantage by hired labour?’

Adherents of an insurgency known as the New History of Capitalism have extended this line of analysis to nineteenth century America, maintaining that: ‘During the eighty years between the American Revolution and the Civil War, slavery was indispensable to the economic development of the United States.’ A crucial linkage in this perspective is between slave-grown cotton and the cotton textile industries of both Britain and the United States, as asserted by Marx: ‘Without slavery you have no cotton; without cotton you have no modern industry.’

My research, to be presented in this year’s Tawney Lecture at the Economic History Society’s annual conference, argues, to the contrary, that such analyses overlook the second part of the Williams thesis, which held that industrial capitalism abandoned slavery because it was no longer needed for continued economic expansion. We need not ascribe cynical or self-interested motives to the abolitionists to assert that these forces were able to succeed because the political-economic consensus that supported slavery in the eighteenth century no longer prevailed in the nineteenth.

Between the American Revolution in 1776 and the end of the Napoleonic Wars in 1815, the demands of industrial capitalism changed in fundamental ways: expansion of new export markets in non-slave areas; streamlined channels for migration of free labour; the shift of the primary raw material from sugar to cotton. Unlike sugar, cotton was not confined to unhealthy locations, did not require large fixed capital investment, and would have spread rapidly through the American South, with or without slavery.

These historic shifts were recognised in the United States as in Britain, as indicated by the post-Revolutionary abolitions in the northern states and territories. To be sure, southern slavery was highly profitable to the owners, and the slave economy experienced considerable growth in the antebellum period. But the southern regional economy seemed increasingly out of step with the US mainstream, its centrality for national prosperity diminishing over time.

Indeed, my study asserts that on balance the persistence of slavery actually reduced the growth of cotton supply compared with a free-labour alternative. The truth of this proposition is most clearly demonstrated by the expansion of production after the Civil War and emancipation, and the return of world cotton prices to their pre-war levels.

Sanitary infrastructures and the decline of mortality in Germany, 1877-1913

by Daniel Gallardo Albarrán (Wageningen University)

The full article for this blog has now been published in The Economic History Review and is available for free on Early View for 7 days, at this link

Wellcome Collection, ‘The main drainage of the Metropolis’.

Lack of access to clean water and sanitation facilities is still common in many parts of the globe, and infectious, water-transmitted illnesses are an important cause of death in these regions. Similarly, industrializing economies during the late 19th century exhibited extraordinarily high death rates from waterborne diseases. However, unlike contemporary developing countries, they experienced a large decrease in mortality in subsequent decades, which meant that deaths from waterborne diseases were eventually eradicated.

What explains this unprecedented improvement? The provision of safe drinking water is often considered a key factor. However, the prevalence of waterborne ailments transmitted through faecal-oral mechanisms is also determined by water contamination and/or the inadequate storage and disposal of human waste. Consequently, doubts remain about the efficacy of clean water per se in reducing mortality; this necessitates an integrative analysis considering both waterworks and sewerage systems.

My research adopts this approach by considering the case of Germany between 1877 and 1913, when both utilities were adopted nationally and crude death rates (CDR) and infant mortality rates (IMR) declined by almost 50 per cent. A quick glance at trends in mortality and the timing of sanitary infrastructures in Figure 1 suggests that improvements in water supply and sewage disposal are associated with better health outcomes. However, this evidence is only suggestive: Figure 1 presents the experience of just two cities and, importantly, factors outside public health investments — for example, better nutrition and improved infant care — may account for changes in mortality. To study the link between sanitary improvements and mortality more systematically, I examine two new datasets containing information on various measures of mortality at city level (overall deaths, infant mortality and cause-specific deaths) and on the timing at which municipalities began improving water supply and sewage disposal.

Figure 1: Mortality and sanitary interventions in two selected cities. Source: per original article. Note: The thick and thin vertical lines mark the initial year when cities had piped water and sewers.

The first set of results shows that piped water reduced mortality, although its effects were limited given the absence of efficient systems of waste removal. Both sanitary interventions account for (at least) a fifth of the decrease in crude death rates between 1877 and 1913. If we consider the fall in infant deaths instead, I find that sewers were equally important in providing effective protection against waterborne illnesses: improvements in water supply and sewage disposal explain a quarter of the fall in infant mortality rates.

I interpret these findings causally because both interventions had a persistent impact on mortality immediately following their implementation, not before. As Figure 2 shows, CDR and IMR decline immediately after the construction of waterworks and sewerage, and mortality exhibits no statistically significant trends in the years preceding the sanitary interventions (the reference point for these comparisons is one year prior to their construction). Furthermore, using cause-specific deaths, I find that sanitary infrastructures are strongly associated with reduced deaths from enteric-related illnesses, while deaths from a very different set of causes — homicides, suicides or accidents — are not affected.
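
The event-study logic behind Figure 2 can be sketched as follows. Everything here (file, variable names, the binning of event time) is a hypothetical illustration of the approach, not the article's code.

```python
# Sketch of an event-study regression behind Figure 2: mortality on leads and
# lags of the sanitary interventions, with city and year fixed effects, so
# that pre-intervention years act as a placebo check. Hypothetical names only.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("german_cities_1877_1913.csv")  # hypothetical file

# 'event_time': years relative to completion of waterworks and sewerage,
# with -1 (the year before) as the omitted reference category.
model = smf.ols(
    "cdr ~ C(event_time, Treatment(reference=-1)) + C(city) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["city"]})
print(model.summary())   # leads close to zero, lags negative, as in Figure 2
```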

Figure 2: The joint effect of water supply and sewerage over time. Source: per original article. Note: the figures show the joint effect on CDR and IMR of two variables capturing the existence (or lack thereof) of waterworks and sewerage over time. The vertical bars are 90 percent confidence intervals. The reference year (-1) is the year prior to the coded sanitary intervention.

The second set of results relates to the heterogeneous effects of sanitary interventions along different dimensions. I find that their impact on mortality was less universal than hitherto thought, since their effectiveness largely depended on local characteristics such as income inequality or the availability of female employment.

In sum, my research shows that the mere provision of safe water is not sufficient to explain a significant fraction of the mortality decline in Germany at the turn of the 20th century. Investments in proper waste removal were needed to realize the full potential of piped water. Most importantly, the unequal mortality-reducing effect of sanitation calls for a deeper understanding of how local factors interact with public health policies. This is especially relevant today, as international initiatives, for example the Water, Sanitation and Hygiene programmes led by UNICEF, aim to promote universal access to sanitary services in markedly different local contexts.

To contact the author:

daniel.gallardoalbarran@wur.nl

Twitter:  @DanielGalAlb

Why did the industrial diet triumph?

by Fernando Collantes (University of Zaragoza and Instituto Agroalimentario de Aragón)

This blog is part of a larger research paper published in the Economic History Review.

 

Harvard food pyramid. Available at Wikimedia Commons.

Consumers in the Northern hemisphere are feeling increasingly uneasy about their industrial diet. Few question that during the twentieth century the industrial diet helped us solve the nutritional problems related to scarcity. But there is now growing recognition that the triumph of the industrial diet triggered new problems related to abundance, among them obesity, excessive consumerism and environmental degradation. Currently, alternatives, ranging from organic food to products bearing geographical-‘quality’ labels, struggle to transcend the industrial diet. Frequently, these alternatives face a major obstacle: their relatively high price compared to mass-produced and mass-retailed food.

The research that I have conducted examines the literature on nutritional transitions, food regimes and food history, and positions it within present-day debates on diet change in affluent societies. I employ a case study of the growth in mass consumption of dairy products in Spain between 1965 and 1990. In the mid-1960s, dairy consumption was very low in Spain and many suffered from calcium deficiency. Subsequently, there was rapid growth in consumption, and milk especially became an integral part of the population’s diet. Alongside mass consumption there was also mass production and complementary technical change. In the early 1960s, most consumers only drank raw milk, but by the 1990s milk was being sterilised and pasteurised to standard specifications by an emergent national dairy industry.

In the early 1960s, the regular purchase of milk was too expensive for most households. By the early 1990s, an increase in household incomes, complemented by (alleged) price reductions generated by dairy industrialization, facilitated rapid growth in milk consumption. A further factor aiding consumption was changing consumer preferences. Previously, consumers’ perceptions of milk had been affected by recurrent episodes of poisoning and fraud. The process of dairy industrialization ensured a greater supply of ‘safe’ milk, and this encouraged consumers to use their increased real incomes to buy more milk. ‘Quality’ milk, meaning milk that was safe to consume, became the main theme in the advertising campaigns employed by milk processors (Figure 1).

 

Figure 1. Advertisement by La Lactaria Española in the early 1970s.

Source: Revista Española de Lechería, no. 90 (1973).

 

What are the implications of my research for contemporary debates on food quality? First, the transition toward a diet richer in organic foods and in foods characterised by short supply chains, artisan-like production and geographical-quality labels has more than niche relevance. There are historical precedents (such as the one studied in this article) of large sections of the populace being willing to pay premium prices for food products that are in some senses perceived as qualitatively superior to other, more ‘conventional’ alternatives. If it happened in the past, it can happen again. Indeed, new qualitative substitutions are already taking place. The key issue is the direction of this substitution. Will consumers use their affluence to ‘green’ their diet? Or will they use higher incomes to purchase more highly processed foods — with possibly negative implications for public health and environmental sustainability? This juncture between food-system dynamics and public policy is crucial. As Fernand Braudel argued, it is the extraordinary capacity for adaptation that defines capitalism. My research suggests that we need public policies that reorient food capitalism towards socially progressive ends.

 

To contact the author:  collantf@unizar.es

Plague and long-term development

by Guido Alfani (Bocconi University, Dondena Centre and IGIER)

 

The full paper has been published in The Economic History Review and is available here.

A YouTube video accompanies this work and can be found here.

 

How did preindustrial economies react to extreme mortality crises caused by severe epidemics of plague? Were health shocks of this kind able to shape long-term development patterns? While past research has focused on the Black Death that affected Europe during 1347-52 (Álvarez Nogal and Prados de la Escosura 2013; Clark 2007; Voigtländer and Voth 2013), in a forthcoming article with Marco Percoco we analyse the long-term consequences of what was by far the worst mortality crisis affecting Italy during the Early Modern period: the 1629-30 plague, which killed an estimated 30-35% of the northern Italian population — about two million victims.

 

Figure 1. Luigi Pellegrini Scaramuccia (1670), Federico Borromeo visits the plague ward during the 1630 plague.


Source: Milan, Biblioteca Ambrosiana

 

This episode is significant for Italian history and, more generally, for our understanding of the Little Divergence between the North and South of Europe. It has recently been hypothesized that the 1630 plague was the source of Italy’s relative decline during the seventeenth century (Alfani 2013). However, this hypothesis lacked solid empirical evidence. To resolve this question, we take a different approach from previous studies and demonstrate that plague lowered the trajectory of development of Italian cities. We argue that this was mostly due to a productivity shock caused by the plague, but we also explore other contributing factors. Consequently, we provide support for the view that the economic consequences of severe demographic shocks need to be understood and studied on a case-by-case basis, as the historical context in which they occurred can lead to very different outcomes (Alfani and Murphy 2017).

After assembling a new database of mortality rates in a sample of 56 cities, we estimate a model of population growth allowing for different regimes of growth. We build on the seminal papers by Davis and Weinstein (2002) and Brakman et al. (2004), who based their analyses on a new economic geography framework in which a relative city-size growth model is estimated to determine whether a shock has temporary or persistent effects. We find that cities affected by the 1629-30 plague experienced persistent, long-term effects (i.e., up to 1800) on their pattern of relative population growth.
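
The core of such a test can be sketched in a few lines: regress post-shock growth in relative city size on the size of the plague shock, and read persistence off the coefficient. Variable names below are hypothetical; see the article for the actual model and estimation.

```python
# Sketch of a Davis-Weinstein / Brakman et al. style persistence test:
# regress later growth in (log) relative city size on the plague shock.
# A coefficient near -1 implies full recovery; near 0 implies persistence.
# Hypothetical variable names; the article's model is more elaborate.
import pandas as pd
import statsmodels.formula.api as smf

cities = pd.read_csv("italian_cities_1620_1800.csv")  # hypothetical file

# 'shock': change in log relative city size across 1629-30;
# 'post_growth': change in log relative city size over a later window.
fit = smf.ols("post_growth ~ shock", data=cities).fit(cov_type="HC1")
print(fit.params["shock"])
```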

 

Figure 2. Giacomo Borlone de Buschis (attributed), Triumph of Death (1485), fresco


Source: Oratorio dei Disciplini, Clusone (Italy).

 

We complete our analysis by estimating the absolute impact of the epidemic. We find that in northern Italian regions the plague caused a lasting decline in both the size and the rate of change of urban populations. The lasting damage done to the urban population is shown in Figure 3. For urbanization rates, it will suffice to notice that across the North of Italy, by 1700 (70 years after the 1630 plague), they were still more than 20 per cent lower than in the decades preceding the catastrophe (16.1 per cent in 1700 versus an estimated 20.4 per cent in 1600, for cities of more than 5,000 inhabitants). Overall, these findings suggest that severe plague epidemics may contribute to the decline of economic regions or whole countries. Our conclusions are strengthened by showing that, while there is clear evidence of the negative consequences of the 1630 plague, there is hardly any evidence for a positive effect (Pamuk 2007). We hypothesize that the potential positive consequences of the 1630 plague were entirely eroded by a negative productivity shock.

 

Figure 3. Size of the urban population in Piedmont, Lombardy, and Veneto (1620-1700)


Source: see original article

 

By demonstrating that the plague had a persistent negative effect on many key Italian urban economies, we provide support for the hypothesis that the origins of relative economic decline in northern Italy are to be found in particularly unfavorable epidemiological conditions. It was the context in which an epidemic occurred that increased its ability to affect the economy, not the plague itself. Indeed, the 1630 plague affected the main states of the Italian Peninsula at the worst possible moment, when their manufacturing sectors were dealing with increasing competition from northern European countries. This explanation, however, offers an interpretation of the Little Divergence that differs from those in the recent literature.

 

To contact the author: guido.alfani@unibocconi.it

 

References

Alfani, G., ‘Plague in seventeenth century Europe and the decline of Italy: an epidemiological hypothesis’, European Review of Economic History, 17, 4 (2013), pp. 408-430.

Alfani, G. and Murphy, T., ‘Plague and Lethal Epidemics in the Pre-Industrial World’, Journal of Economic History, 77, 1 (2017), pp. 314-343.

Alfani, G. and Percoco, M., ‘Plague and long-term development: the lasting effects of the 1629-30 epidemic on the Italian cities’, The Economic History Review, forthcoming, https://doi.org/10.1111/ehr.12652

Álvarez Nogal, C. and Prados de la Escosura,L., ‘The Rise and Fall of Spain (1270-1850)’, Economic History Review, 66, 1 (2013), pp. 1–37.

Brakman, S., Garretsen H., Schramm M. ‘The Strategic Bombing of German Cities during World War II and its Impact on City Growth’, Journal of Economic Geography, 4 (2004), pp. 201-218.

Clark, G., A Farewell to Alms (Princeton, 2007).

Davis, D.R. and Weinstein, D.E. ‘Bones, Bombs, and Break Points: The Geography of Economic Activity’, American Economic Review, 92, 5 (2002), pp. 1269-1289.

Pamuk, S., ‘The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600’, European Review of Economic History, 11 (2007), pp. 289-317.

Voigtländer, N. and H.J. Voth, ‘The Three Horsemen of Riches: Plague, War, and Urbanization in Early Modern Europe’, Review of Economic Studies 80, 2 (2013), pp. 774–811.

The Political Economy of the Army in a Nonconsolidated Democracy: Spain (1931-1939)

by Alvaro La Parra-Perez (Weber State University)

The full article is published by the Economic History Review and is available for Early View at this link 

The Spanish Civil War (1936-9; henceforth, SCW) ended the Second Spanish Republic (1931-9), which is often considered Spain’s first democracy. Despite the hopes raised by the Republic – which enfranchised women, held free and fair elections, separated Church and state, and drafted an ambitious agrarian reform – its end was not very different from that of many previous Spanish regimes: a military coup started the SCW, which ultimately resulted in a dictatorship led by one of the rebel officers, Francisco Franco (1939-75).

In my article “For a Fistful of Pesetas? The Political Economy of the Army in a Non-Consolidated Democracy: The Second Spanish Republic and Civil War (1931-9)”, I open the “military black box” to understand the motivations driving officers’ behavior. In particular, the article explores how the redistribution of economic and professional rents during the Republic influenced officers’ likelihood of rebelling or remaining loyal to the republican government in 1936. By looking at (military) intra-elite conflict, I depart from the traditional assumption of an “elite single agent” that characterizes the neoclassical theory of the state (e.g. here, here, here, or here; also here).

The article uses a new data set of almost 12,000 officers active in 1936 who belonged to the corps most directly involved in combat. Using the Spanish military yearbooks between 1931 and 1936, I traced officers’ individual professional trajectories and assessed the impact that republican military reforms in 1931-6 had on their careers. The side – loyal or rebel – chosen by each officer comes from Carlos Engel.

Figure 1. Extract from the 1936 military yearbook. Source: 1936 Military Yearbook published by the Spanish Minister of War: http://hemerotecadigital.bne.es/issue.vm?id=0026976287&search=&lang=en

The main military reforms during the Republic took place under Manuel Azaña’s term as Minister of War (1931-3). Azaña was also the leader of the leftist coalition that ruled the Republic when some officers rebelled and the SCW began. Azaña’s reforms favored the professional and economic independence of the Air Force and harmed many officers’ careers when some promotions granted during Primo de Rivera’s dictatorship (1923-30) were revised and cancelled. The system of military promotions was also revised and rendered more impersonal and meritocratic. Some historians also argue that the elimination of the highest rank in the army (Lieutenant General) worsened the professional prospects of many officers because vacancies for promotion became scarcer.

The results suggest that, at the margin, economic and professional considerations had a significant influence on officers’ choice of side during the SCW. The figure below shows the probit average marginal effects for the likelihood of rebelling among officers in republican-controlled areas. The main variables of interest are the ones under the “Rents” header. In general, those individuals or factions that improved their economic rents under Azaña’s reforms were less likely to rebel. For example, aviators were almost 20 percentage points less likely to rebel than the reference corps (artillerymen) and those officers with worse prospects after the rank of lieutenant general was eliminated were more likely to join the rebel ranks. Also, officers with faster careers (greater “change of position”) in the months before the SCW were less likely to rebel. The results also suggest that officers had a high discount rate for changes in their rank or position in the scale. Pre-1935 promotions are not significantly related to officers’ side during the SCW. Officers negatively affected by the revision of promotions in 1931/3 were more likely to rebel only at the 10 percent significance level (p-value=0.089).
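
In outline, the estimation behind Figure 2 resembles the probit-plus-marginal-effects sketch below; the file, variable names and covariate set are hypothetical stand-ins for the much richer specification in the article.

```python
# Sketch of a probit with average marginal effects, as behind Figure 2.
# Hypothetical file and variable names; the article's covariate set (rents,
# corps, promotions, hierarchical and regional controls) is far larger.
import pandas as pd
import statsmodels.formula.api as smf

officers = pd.read_csv("officers_1936.csv")        # hypothetical file
rep_zone = officers[officers["republican_zone"] == 1]

probit = smf.probit(
    "rebelled ~ C(corps) + rank_eliminated + change_of_position"
    " + promotion_revised + left_vote_share_feb1936",
    data=rep_zone,
).fit()
ame = probit.get_margeff(at="overall")             # average marginal effects
print(ame.summary())
```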

Figure 2. Probit average marginal effects for officers in republican-controlled areas with 95-percent confidence intervals. Source: see original article

To be clear, economic and professional interests were not the only elements explaining officers’ behavior. The article also finds evidence for the significance of other social and ideological factors. Take the case of hierarchical influences: subordinates’ likelihood of rebelling in a given unit increased if their leader rebelled. Also, officers were less likely to rebel in those areas where the leftist parties that ruled in July 1936 had obtained better results in the elections held in February. Finally, members of the Assault Guard – a unit for which proven loyalty to the Republic was required – were more likely to remain loyal to the republican government.

The results are hardly surprising for an economist: people respond to incentives, and officers – being people – were influenced at the margin by the impact that Azaña’s reforms had on their careers. This mechanism adds to the ideological explanations that have often dominated narratives of the SCW, which tend to depict the army – more or less explicitly – as a monolithic agent aligned with conservative elites. As North, Wallis, and Weingast showed for other developing societies, intra-elite conflict and the redistribution of rents were important factors in the dynamics (and ultimate fall) of the dominant coalition in Spain’s first democracy.

 

To contact the author:

Twitter: @AlvaroLaParra

Professional website: https://sites.google.com/site/alvarolaparraperez/

Can school centralization foster human capital accumulation? A quasi-experiment from early twentieth-century Italy

By Gabriele Cappelli (University of Siena) and Michelangelo Vasta (University of Siena)

The article is available on Early View on the Economic History Review’s website here

 

The issue of school reform is a key element of institutional change across countries. In developing economies the focus is rapidly shifting from increasing enrolments to improving educational outputs (literacy and skills) and outcomes (wages and productivity). In advanced economies, policy-makers focus on generating skills from educational inputs despite limited resources. This is unsurprising, because human capital formation is largely acknowledged as one of the main factors of economic growth.

Turning to education policy, reforms have long focused on the way that school systems can be organized, particularly their management and funding by local versus central government. On the one hand, local policy-makers are more aware of the needs of local communities, which is supposed to improve schooling. On the other hand, school preferences might vary considerably between the central government and the local ruling elites, hampering the diffusion of education. Despite its importance, there is little historical research on this topic.

In this paper, we offer fresh evidence using a quasi-experiment that aims to explore dramatic changes in Italy’s educational institutions at the beginning of the 20th century, i.e. the 1911 Daneo-Credaro Reform. Due to this legislation, most municipalities moved from a decentralized school system, which had been based on the 1859 Casati Law, to direct state management and funding, while other municipalities, mainly provincial and district capitals, retained their autonomy, thus forming two distinct groups (Figure 1).

The Reform’s design allows us to compare these two groups through a quasi-experiment based on an innovative technique, namely Propensity Score Matching (henceforth PSM). PSM tackles an issue with the Reform that we study, namely that the assignment of municipalities into treatment (centralization) was not random: the municipalities that retained school autonomy were those characterized by high literacy. By contrast, the poorest and least literate municipalities were more likely to end up under state control, implying that a naive analysis of the Daneo-Credaro Reform as an experiment would tend to overestimate the impact of centralization. PSM addresses the issue by ‘randomizing’ the selection into treatment: a statistical model is used to estimate the probability of being selected into centralization (the propensity score) for each municipality; then, an algorithm matches municipalities in the treatment group with municipalities in the control group that have an equal (or very similar) propensity score – meaning that the only differing feature is whether they are treated or not. To perform PSM, we construct a novel database at the municipal level (a large sample of more than 1,000 comuni). Second, we fill a gap in the historiography by providing an in-depth discussion of the way that the Reform worked, which has so far been neglected.
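
A minimal sketch of the two PSM steps described above, using scikit-learn and hypothetical variable names, is shown below; the article's implementation (covariate set, matching algorithm, balancing diagnostics) is considerably more careful.

```python
# Minimal PSM sketch: (1) estimate each municipality's probability of being
# centralized, (2) match treated units to nearest-score controls, (3) compare
# literacy growth. Hypothetical names; not the authors' implementation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

comuni = pd.read_csv("municipalities_1911.csv")     # hypothetical file
covariates = ["literacy_1911", "population_1911", "urban_share"]

ps_model = LogisticRegression(max_iter=1000).fit(
    comuni[covariates], comuni["centralized"]
)
comuni["pscore"] = ps_model.predict_proba(comuni[covariates])[:, 1]

treated = comuni[comuni["centralized"] == 1]
control = comuni[comuni["centralized"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = control.iloc[idx.ravel()]

# Average treatment effect on the treated: gap in 1911-21 literacy growth
att = (treated["literacy_growth_1911_21"].mean()
       - matched["literacy_growth_1911_21"].mean())
print(att)
```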

Figure 1 – Municipalities that still retained school autonomy in Italy by 1923. Source: Ministero della Pubblica Istruzione (1923), Relazione sul numero, la distribuzione e il funzionamento delle scuole elementari. Rome. Note: both the grey and black dots represent municipalities that retained school autonomy by 1923, while the others (not shown in the map) had shifted to centralized school management and funding. 

We find that the municipalities that switched to state control were characterized by a 0.43 percentage-point premium on the average annual growth of literacy between 1911 and 1921, compared to those that retained autonomy (Table 1). The estimated coefficient means that two very similar municipalities with equal literacy rates at 60% in 1911 will have a literacy gap equal to 3 percentage points in 1921, i.e. 72.07% (school autonomy) vs 75.17% (treated). This difference is similar to the gap between the treatment group and a counterfactual that we estimated in a robustness check based on Italian provinces (Figure 2).
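
A quick arithmetic check, shown below, reproduces the compounding behind these figures; the baseline growth rate is backed out from the numbers quoted above, assuming growth compounds on the literacy rate.

```python
# Arithmetic check of the worked example above: two municipalities starting
# at 60% literacy in 1911, one with annual literacy growth 0.43 percentage
# points higher. The baseline rate is backed out from the quoted figures.
base = 0.60
g_autonomy = 0.0185                  # ~1.85% a year (implied by 72.07% in 1921)
g_treated = g_autonomy + 0.0043      # 0.43 pp premium for centralized comuni

lit_autonomy = base * (1 + g_autonomy) ** 10    # ~0.721
lit_treated = base * (1 + g_treated) ** 10      # ~0.752
print(lit_treated - lit_autonomy)               # gap of ~3 percentage points
```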

Table 1 – Estimated treatment (Daneo-Credaro Reform) effect, 1911 – 1921.
Figure 2 – Literacy rates in the treatment and control groups, 1881 – 1921, pseudo-DiD. Source: see original article

Centralization improved the overall functioning of the school system and the efficiency of school funding. First, it reduced the distance between the central government and the city councils by granting more decision-making power to the provincial schooling board under the supervision of the central government. The control exercised by the Ministry reassured teachers that their salaries would be increased, and the government could now guarantee that they would be paid regularly, which was not always the case when the municipalities managed primary schooling. Second, additional funding was provided to build new schools. The resultant increase appears to have been very large, and its impact was amplified by the full reorganization of the school system: the funds could be directed to where they were most needed. Consequently, we argue, a mere increase in funding without institutional change would have been less effective in raising literacy rates.

To conclude, the 50-year persistence of decentralized primary schooling hampered the accumulation of human capital and regional convergence in basic education, thus casting a long shadow on the future pace of aggregate and regional economic growth. The centralization of primary education via the Daneo-Credaro Reform in 1911 was a major breakthrough, which fostered the spread of literacy and allowed the country to reduce the human-capital gap with the most advanced economies.

 

To contact the author: Gabriele Cappelli

Email: gabriele.cappelli@unisi.it

Twitter: gabercappe