Britain’s inter-war super-rich: the 1928/9 ‘millionaire list’

by Peter Scott (Henley Business School at the University of Reading)

The roaring 1920s. Available at <https://www.lovemoney.com/gallerylist/87193/the-roaring-1920s-richest-people-and-how-they-made-their-money>

Most of our information on wealth distribution and top incomes is derived from data on wealth left at death, recorded in probates and estate duty statistics. This study utilises a unique list of all living millionaires for the 1928/9 tax year, compiled by the Inland Revenue to estimate how much a 40 per cent estate duty on them would raise in government revenue. Millionaires were identified by their incomes (over £50,000, or £3 million in 2018 prices), equivalent to a capitalised sum of over £1 million (£60 million in 2018 prices). Data for living millionaires are particularly valuable: even in the 1930s millionaires often had considerable longevity, so data on wealth at death typically reflected fortunes made, or inherited, several decades previously. Some millionaires’ names had been redacted, but where their dates of birth or marriage were recorded, cross-referencing with various data sources enabled the identification of 319 millionaires, equivalent to 72.8 per cent of those appearing on the millionaire list.
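
The income and wealth thresholds are linked by a standard capitalisation. The identity below is a minimal reconstruction, assuming valuation at twenty years’ purchase (a 5 per cent yield); the rate is inferred from the two figures quoted above, not stated in the source.

```latex
% Capitalisation of the income threshold at twenty years' purchase
% (an assumed 5 per cent yield, inferred from the figures above):
W = \frac{Y}{r} = \frac{\pounds 50{,}000}{0.05} = \pounds 1{,}000{,}000
```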

The tax year 1928 to 1929 is a very useful benchmark for assessing the impact of the First World War and its aftermath on the composition of the super-rich. Prior to the 20th century, the highest echelons of wealth were dominated by the great landowners, reflecting a concentration of land-ownership unparalleled in Europe. William Rubinstein found that the wealth of the greatest landowners exceeded that of the richest businessmen until 1914, if not later. However, war-time inflation, higher taxes, and the post-war agricultural depression negatively impacted their fortunes. Meanwhile, some industrialists benefitted enormously from the War.

By 1928 business fortunes had pushed even the wealthiest aristocrats, the Dukes of Bedford and Westminster, into seventh and eighth place on the list of top incomes. Their taxable incomes, £360,000 and £336,000 respectively, were dwarfed by those of the richest businessmen, such as the shipping magnate Sir John Ellerman (Britain’s richest man; the son of an immigrant corn broker who died in 1871, leaving £600) with a 1928 income of £1,553,000, or James Williamson, the first Baron Ashton, who pioneered the mass production of linoleum and stood second on the list, with £760,000. Indeed, some 90 per cent of named 1928/9 millionaires had fortunes based on (non-landed) business incomes. Moreover, the vast majority – 85.6 per cent of non-landed males on the list – were active businesspeople, rather than rentiers.

“Businesspeople millionaires” were highly clustered in certain sectors (relative to those sectors’ shares of all corporate profits): tobacco (5.40 times over-represented); shipbuilding (4.79); merchant and other banking (3.42); foods (3.20); ship-owning (3.02); other textiles (2.98); distilling (2.67); and brewing (2.59). These eight sectors collectively comprised 42.4 per cent of all 1928/9 millionaires, but only 15.5 per cent of aggregate profits. Meanwhile, important sectors such as chemicals, cotton and woollen textiles, construction, and, particularly, distribution, were substantially under-represented.
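
As a minimal sketch of the over-representation measure, read it as a sector’s share of millionaires divided by that sector’s share of aggregate corporate profits; the input shares below are invented placeholders, chosen only so the tobacco ratio matches the figure quoted above.

```python
# Hypothetical illustration: over-representation index = sector share of
# millionaires / sector share of corporate profits. The shares below are
# invented placeholders, not the article's underlying data.
millionaire_shares = {"tobacco": 0.054, "shipbuilding": 0.038}
profit_shares = {"tobacco": 0.010, "shipbuilding": 0.008}

for sector, m_share in millionaire_shares.items():
    ratio = m_share / profit_shares[sector]
    print(f"{sector}: {ratio:.2f}x over-represented")
```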

The over-represented sectors were characterised by rapid cartelisation and/or integration which, in most cases, had intensified during the War and its aftermath. Given that Britain had very limited tariffs, cartels and monopolies could only raise prices in sectors with other barriers to imports, principally “strategic assets”: assets that sustain competitive advantage through being valuable, rare, inimitable, and imperfectly substitutable. These included patents (rayon); control of distribution (brewing and tobacco); strong brands (whiskey; branded packaged foods); reputational assets (merchant banking); or membership of international cartels that granted territorial monopolies (shipping; rayon). Conversely, there was very little evidence of “technical” barriers, such as L-shaped cost curves, that could have offset the welfare costs of industrial combination/concentration through scale economies. Instead, amalgamation or cartelisation was typically followed by rising real prices.

Another less widespread but important tactic for gaining a personal and corporate competitive edge was the use of sophisticated tax avoidance/evasion techniques to reduce tax liability to a fraction of its headline rate. Tax avoidance was commonplace among Britain’s economic elite by the late 1920s, but a small proportion of business millionaires developed it to a level where most of their tax burden was removed, mainly via transmuting income into non-taxable capital gains and/or creating excessive depreciation tax allowances. Several leading British millionaires, including Ellerman, Lord Nuffield, Montague Burton, and the Vestey brothers (Union Cold Storage), were known to the Inland Revenue as skilled and successful tax avoiders.

These findings imply that the composition of economic elites should not simply be conflated with ‘wealth-creation’ prosperity (except for those elites), especially where their incomes include a substantial element of rent-seeking. Erecting or defending barriers to competition (through cartels, mergers, and strategic assets) may increase the number of very wealthy people, but it is unlikely to have generated a positive influence on national economic growth and living standards, unless accompanied by rationalisation that substantially lowered costs. In this respect, typical inter-war business millionaires had strong commonalities with earlier, landed, British elites, in that they sustained their wealth by creating, and then perpetuating, scarcity in the markets for the goods and services they controlled.

To contact the author:

p.m.scott@henley.ac.uk

How to Keep Society Equal: The Case of Pre-industrial East Asia (NR Online Session 4)

By Yuzuru Kumon (Bocconi University)

This research is due to be presented in the fourth New Researcher Online Session: ‘Equality & Wages’.


Theatrum orbis terrarum: Map Tartaria, by Abraham Ortelius. Available at State Library Victoria.

 

 

Is high inequality destiny? The established view is that societies naturally converge towards high inequality in the absence of catastrophes (world wars or revolutions) or the progressive taxation of the rich. Yet I show that rural Japan, 1700-1870, is an unexpected historical case in which stable equality was sustained without such aids. Most peasants owned land, the most valuable asset in an agricultural economy, and Japan remained a society of land-owning peasants. This contrasts with the landless-laborer societies of Western Europe in the same period, which were highly unequal. Why were the outcomes so different?

My research shows that the relative equality of pre-industrial Japan can partly be explained by the widespread use of adoption in Japan, which served as a means of securing a male heir. The reasoning becomes clear if we first consider the case of the Earls Cowper in 18th-century England, where adoption was not practiced. The first Earl Cowper was a modest landowner and married Mary Clavering in 1706. When Mary’s brother subsequently died, she became the heiress and the couple inherited the Clavering estate. Similar (mis)fortunes for their heirs led the Cowpers to become one of the greatest landed families of England. The Cowpers were not particularly lucky, as one quarter of families were heirless during this era of high child mortality. The outcome of this death lottery was inequality.

Had the Cowpers lived in the Japan of their day, they would have remained modest landowners. An heirless household in Japan would adopt a son. Hence, the Claverings would have adopted a son and the family estate would have remained in the family. To keep the blood in the family, the adopted son might have married a daughter, if one was available. If not, the next generation could be formed by total strangers, but they would continue the family line. Amassing a fortune in Japan was unrelated to demographic luck.

Widespread adoption was not a peculiarity of Japan, and this mechanism can also explain why East Asian societies were landowning peasant societies. China also had high rates of adoption, in addition to equal distributions of land according to surveys from the 1930s. Perhaps more surprisingly, adoption was common in ancient Europe, where the Greeks and Romans practiced adoption to secure heirs. For example, Augustus, the first emperor of the Roman Empire, was adopted. Adoption was a natural means of keeping wealth under the control of the family.

Europe changed because the church discouraged adoption from the early middle ages, and adoption had become a rarity by the 11th century. The church was motivated partly by theology, but also by the possibility that heirless wealth would be willed to the church. It almost certainly did not foresee that its policies would lead to greater wealth inequality in subsequent eras.

 

Figure 1. Land Distribution under Differing Adoption Regimes and Impartible Inheritance


 

My study shows by simulation that a large portion of the difference in wealth-inequality outcomes between east and west can be explained by adoption (see Figure 1). Societies without adoption have wealth distributions that are heavily skewed, with many landless households, unlike societies with adoption. Family institutions therefore played a key role in determining inequality, with huge implications for the way society was organized in these two regions.
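
The logic can be made concrete with a toy simulation. The sketch below is a minimal illustration, not the paper’s model: it assumes impartible inheritance, the one-quarter heirless rate quoted above, and that an heirless household’s plot passes to another landed family when adoption is unavailable.

```python
import random

def gini(values):
    """Gini coefficient of a list of non-negative holdings."""
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * cum / (n * sum(xs)) - (n + 1) / n

def simulate(n=1000, generations=10, p_heirless=0.25, adoption=True, seed=1):
    rng = random.Random(seed)
    land = [1.0] * n  # every lineage starts with one equal plot
    if adoption:
        return land  # heirless lines adopt an heir, so plots never change hands
    for _ in range(generations):
        for i in range(n):
            if land[i] > 0 and rng.random() < p_heirless:
                # heirless and no adoption: the plot passes out of the
                # lineage to another surviving landed household
                others = [k for k in range(n) if land[k] > 0 and k != i]
                land[rng.choice(others)] += land[i]
                land[i] = 0.0
    return land

print("Gini with adoption:   ", round(gini(simulate(adoption=True)), 2))
print("Gini without adoption:", round(gini(simulate(adoption=False)), 2))
```

Under these assumptions the adoption regime keeps the distribution perfectly equal by construction, while the no-adoption regime generates a growing landless class and concentrated holdings: the qualitative contrast shown in Figure 1.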

Interestingly, East Asian societies still have greater equality in wealth distributions today. Moreover, adoptions still amount to 10% of marriages in Japan, which is a remarkably large share. Adoption may have continued to foster a relatively equal society in Japan up to today.

Land distribution and Inequality in a Black Settler Colony: The Case of Sierra Leone, 1792–1831

by Stefania Galli and Klas Rönnbäck (University of Gothenburg)

The full article from this blog is published in the European Review of Economic History and is available open access at this link

“Houses at Sierra-Leone”, Wesleyan Juvenile Offering: A Miscellany of Missionary Information for Young Persons, volume X, May 1853, pp. 55–57, illustration on p. 55. Available on Wikimedia

Land distribution has been identified as a key contributor to economic inequality in pre-industrial societies. Historical evidence on the link between land distribution and inequality for the African continent is scant, unlike the large body of research available for Europe and the Americas. Our article examines inequality in land ownership in Sierra Leone during the early nineteenth century. Our contribution is unique because it studies land inequality at a particularly early stage in African economic history.

In 1787 the Sierra Leone colony was born, the first British colony to be founded after the American War of Independence. The colony had some peculiar features. Although populated by settlers, it was not settled by people of European origin, as most settler colonies founded at the time were. Rather, Sierra Leone came to be populated by people of African descent — a mix of former and liberated slaves from America, Europe and Africa. Furthermore, Sierra Leone had deeply egalitarian foundations, which rendered it more similar to a utopian society than to the other colonies founded on the African continent in subsequent decades. The founders of the colony intended egalitarian land distribution for all settlers, aiming to create a black yeoman settler society.

In our study, we rely on a new dataset constructed from multiple sources pertaining to the early years of Sierra Leone, which provide evidence on household land distribution for three benchmark years: 1792, 1800 and 1831. The first two benchmarks refer to a time when demographic pressure in the colony was limited, while the last represents a period of rapidly increasing demographic pressure due to the inflow of ‘liberated slaves’ landed at Freetown from captured slave ships.

Our findings show that, in its early days, the colony was characterized by a highly egalitarian land distribution, possibly the most equal distribution calculated to date. All households possessed some land, in a distribution determined to a large extent by household size. Not only were there no landless households in 1792 and 1800, but land was normally distributed around the mean. Based on these results, we conclude that the ideological foundations of the colony were manifested in an egalitarian distribution of land.

Such ideological convictions were, however, hard to maintain in the long run due to mounting demographic pressure and limited government funding. Land inequality thus increased substantially by the last benchmark year (Figure 1). In 1831, land distribution was positively skewed, with a substantial proportion of households in the sample being landless or owning plots much smaller than the median, while a few households held very large plots. We argue that these findings are consistent with an institutional shift in redistributive policy, which enabled inequality to grow rapidly. In the early days, all settlers received a set amount of land. By 1831, however, land could be appropriated freely by the settlers, enabling households to acquire land according to their ability, but also according to their wish to participate in agricultural production. Specifically, households in more fertile regions appear to have specialized in agricultural production, whereas households in regions unsuitable for agriculture increasingly came to focus upon other economic activities.

Figure 1. Land distribution in Sierra Leone, 1792, 1800 and 1831. Source: as per article
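
A minimal sketch of how such distributional contrasts can be summarised from plot-level data; all plot sizes below are invented placeholders, not the article’s dataset.

```python
import statistics

# Hypothetical plot sizes (hectares); invented placeholders, not the
# article's data. 1792: tight and symmetric around the mean, nobody
# landless. 1831: landless households plus a few very large plots.
plots_1792 = [3.8, 3.9, 4.0, 4.0, 4.1, 4.2]
plots_1831 = [0.0, 0.0, 0.5, 1.0, 1.5, 2.0, 25.0]

def summarise(plots):
    return {
        "mean": round(statistics.mean(plots), 2),
        "median": round(statistics.median(plots), 2),
        "share landless": round(sum(p == 0 for p in plots) / len(plots), 2),
    }

print("1792:", summarise(plots_1792))  # mean ~ median: symmetric
print("1831:", summarise(plots_1831))  # mean >> median: positive skew
```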

Our results have two implications for the debate on the origins of inequality. First, Sierra Leone shows how idealist motives had important consequences for inequality. This is of key importance for wider discussions on the extent to which politics generates tangible changes in society. Second, our results show how difficult it was to sustain such idealism when confronted by mounting material challenges.

 

To contact the authors:

Stefania Galli (stefania.galli@gu.se)

Twitter: https://twitter.com/galli_stef

Infant and child mortality by socioeconomic status in early nineteenth century England

by Hannaliis Jaadla (University of Cambridge)

The full article from this blog (co-authored with E. Potter, S. Keibek, and R.J. Davenport) was published in The Economic History Review and is now available on Early View at this link

Figure 1. Thomas George Webster, ‘Sickness and health’ (1843). Source: The Wordsworth Trust; licensed under CC BY-NC-SA

Socioeconomic gradients in health and mortality are ubiquitous in modern populations. Today life expectancy is generally positively correlated with individual or ecological measures of income, educational attainment and status within national populations. However, in stark contrast to these modern patterns, there is little evidence for such pervasive advantages of wealth to survival in historical populations before the nineteenth century.

In this study, we tested whether a socioeconomic gradient in child survival was already present in early nineteenth-century England, using individual-level data on infant and child mortality for eight parishes from the Cambridge Group family reconstitution dataset (Wrigley et al. 1997). We used the paternal occupational descriptors routinely recorded in the Anglican baptism registers for the period 1813–1837 to compare infant (under 1) and early childhood (age 1–4) mortality by social status. To capture differences in survivorship we compared multiple measures of status: HISCAM, HISCLASS, and a continuous measure of wealth estimated by ranking paternal occupations by the propensity for their movable wealth to be inventoried upon death (Keibek 2017). The main analytical tool was event history analysis, in which individuals were followed from baptism or birth through the first five years of life, or until they died or left the sample for other reasons.
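
As a minimal sketch of an event-history setup of this kind, the snippet below fits a Cox proportional hazards model with the lifelines library. The data-generating step and all column names are invented for illustration; this is not the authors’ code or the reconstitution data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
wealth = rng.uniform(0, 1, n)      # inventory-based paternal wealth score
labourer = rng.integers(0, 2, n)   # father recorded as a labourer
# hypothetical hazard: higher wealth and labourer fathers imply later death
death_age = rng.exponential(2 + 3 * wealth + labourer)
duration = np.minimum(death_age, 5.0)   # follow-up censored at age 5
died = (death_age <= 5.0).astype(int)

kids = pd.DataFrame({"duration": duration, "died": died,
                     "wealth": wealth, "labourer": labourer})

cph = CoxPHFitter()
cph.fit(kids, duration_col="duration", event_col="died")
cph.print_summary()  # hazard ratios below 1 indicate a survival advantage
```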

Were socioeconomic differentials in mortality present in the English population by the early nineteenth century, as suggested by theorists of historical social inequalities (Antonovsky 1967; Kunitz 1987)? Our results provide a qualified yes. We did detect differentials in child survival by paternal or household wealth in the first five years of life. However, the effects of wealth were muted and non-linear. Instead we found a U-shaped relationship between paternal social status and survival, with the children of poor labourers or wealthier fathers enjoying relatively high survival chances. Socioeconomic differentials emerged only after the first year of life (when mortality rates were highest), and were strongest at age one. Summed over the first five years of life, however, the advantages of wealth were marginal. Furthermore, the advantages of wealth were only observed once the anomalously low mortality of labourers’ children was taken into account.

As might be expected, these results provide evidence for the contribution of both environment and household or familial factors. In infancy, mortality varied between parishes; however, the environmental hazards associated with industrialising or urban settlements appear to have operated fairly equally on households of differing socioeconomic status. It is likely that most infants in our eight reconstitution parishes were breastfed throughout the first year of life, which probably conferred a ubiquitous advantage that overwhelmed other material differences in household conditions, for example, maternal nutrition.

To the extent that wealth conferred a survival advantage, did it operate through access to information, or to material resources? There was no evidence that literacy was important to child survival. However, our results suggest that cultural practices surrounding weaning may have been key. This was indicated by the peculiar age pattern of the socioeconomic gradient in survival, which was strongest in the second year of life, the year in which most children were weaned. We also found a marked survival advantage of longer birth intervals post-infancy, and this advantage accrued particularly to labourers’ children, because their mothers had longer than average birth intervals.

Our findings point to the importance of breastfeeding patterns in modulating the influence of socioeconomic status on infant and child survival. Breastfeeding practices varied enormously in historical populations, both geographically and by social status (Thorvaldsen 2008). These variations, together with the differential sorting of social groups into relatively healthy or unhealthy environments, probably explain the difficulty in pinpointing the emergence of socioeconomic gradients in survival, especially in infancy.

At ages 1–4 years we were able to demonstrate that the advantages of wealth and of a labouring father operated even at the level of individual parishes. That is, these advantages were not simply a function of the sorting of classes or occupations into different environments. These findings therefore implicate differences in household practices and conditions in the survival of children in our sample. This was clearest in the case of labourers. Labourers’ children enjoyed higher survival rates than predicted by household wealth, and this was associated with longer birth intervals (consistent with longer breastfeeding), as well as other factors that we could not identify, but which were probably not a function of rural isolation within parishes. Why labouring households should have differed in these ways remains unexplained.

To contact the author: hj309@cam.ac.uk

References

Antonovsky, A., ‘Social class, life expectancy and overall mortality’, Milbank Memorial Fund Quarterly, 45 (1967), pp. 31–73.

Keibek, S. A. J., ‘The male occupational structure of England and Wales, 1650–1850’, (unpub. Ph.D. thesis, Univ. of Cambridge, 2017).

Kunitz, S.J., ‘Making a long story short: a note on men’s height and mortality in England from the first through the nineteenth centuries’, Medical History, 31 (1987), pp. 269–80.

Thorvaldsen, G., ‘Was there a European breastfeeding pattern?’ History of the Family, 13 (2008), pp. 283–95.

Land, Ladies, and the Law: A Case Study on Women’s Land Rights and Welfare in Southeast Asia in the Nineteenth Century

by Thanyaporn Chankrajang and Jessica Vechbanyongratana (Chulalongkorn University)

The full article from this blog is forthcoming in The Economic History Review

 

Security of land rights empowers women with greater decision-making power (Doss, 2013), potentially impacting both land-related investment decisions and the allocation of goods within households (Allendorf, 2007; Goldstein et al., 2008; Menon et al., 2017). In historical contexts where land was the main factor of production for most economic activities, little is known about women’s land entitlements. Historical gender-disaggregated land ownership data are scarce, making quantitative investigations of the past challenging. In new research we overcome this problem by analyzing rare, gender-disaggregated, historical land rights records to determine the extent of women’s land rights, and their implications, in nineteenth-century Bangkok.

First, we utilized orchard land deeds issued in Bangkok during the 1880s (Figure 1). These deeds were both landownership and tax documents. Land tax was assessed based on the enumeration of mature orchard trees producing high-value fruits, such as areca nuts, mangoes and durian. From 9,018 surviving orchard deeds, we find that 82 per cent of Bangkok orchards listed at least one woman as an owner, indicating that women did possess de jure usufruct land rights under the traditional land rights system. By analyzing the number of trees cultivated on each property (proxied by tax per hectare), we find these rights were upheld in practice and incentivized agricultural productivity. Controlling for owner and plot characteristics, plots with only female owners on average cultivated 6.7 per cent more trees per hectare than plots with mixed-gender ownership, while male-owned plots cultivated 6.7 per cent fewer trees per hectare than mixed-gender plots. The evidence indicates higher levels of investment in cash crop cultivation among female landowners.

Figure 1. An 1880s Government Copy of an Orchard Land Deed Issued to Two Women. Source: Department of Lands Museum, Ministry of Interior.
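
A minimal sketch of the ownership-gender comparison described above, with mixed-gender plots as the omitted category; the data-generating process, variable names and coefficients are invented to mirror the reported signs, not the authors’ code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
female_only = rng.integers(0, 2, n)
male_only = np.where(female_only == 1, 0, rng.integers(0, 2, n))
plot_size = rng.uniform(0.5, 3.0, n)
# hypothetical process mirroring the reported signs (log points ~ per cent)
log_trees_per_ha = (4.0 + 0.067 * female_only - 0.067 * male_only
                    + 0.05 * plot_size + rng.normal(0, 0.3, n))

deeds = pd.DataFrame({"log_trees_per_ha": log_trees_per_ha,
                      "female_only": female_only,
                      "male_only": male_only,
                      "plot_size": plot_size})

# mixed-gender plots are the omitted base category
model = smf.ols("log_trees_per_ha ~ female_only + male_only + plot_size",
                data=deeds).fit()
print(model.params[["female_only", "male_only"]])  # ~ +0.067 and -0.067
```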

 

The second part of our analysis assesses 217 land-related court cases to determine whether women’s land rights in Bangkok were protected from the late nineteenth century, when land disputes increased. We find that ‘commoner’ women acted as both plaintiffs and defendants, and were able to win cases even against politically powerful men. Such secure land rights helped preserve women’s livelihoods.

Finally, applying an internationally comparable welfare estimation framework (Allen et al. 2011; Cha, 2015), we calculate welfare ratios based on a ‘bare bones’ consumption basket. We find that the median woman-owned orchard could annually support up to 10 adults. Once women’s contributions to family income are recognized (Table 1), Bangkok’s welfare ratio was as high as 1.66 for the median household, demonstrating a larger household surplus than found in Japan, and comparable to those of Beijing and Milan during the same period (Allen et al. 2011).
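
In the Allen et al. (2011) framework, a welfare ratio is annual household income divided by the annual cost of a bare-bones basket for the whole household, so a ratio of 1 means bare subsistence. A minimal sketch, with all numbers invented placeholders rather than the article’s estimates:

```python
# All values are hypothetical placeholders in arbitrary currency units.
basket_cost_per_adult = 100.0        # annual cost of one bare-bones basket
household_adult_equivalents = 3.0    # household size in adult equivalents
household_income = 500.0             # annual household income

welfare_ratio = household_income / (basket_cost_per_adult
                                    * household_adult_equivalents)
print(f"welfare ratio: {welfare_ratio:.2f}")  # >1: surplus above subsistence
```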

Superficially, our findings seem to contradict historical and contemporary observations that land rights structures favor men (Doepke et al., 2012). However, our study exemplifies women’s economic empowerment in Thailand and Southeast Asia more generally. Since at least the early modern period, women in Southeast Asia possessed relatively high social status and autonomy in marriage and family, literacy and literature, diplomacy and politics, and economic activities (Hirschman, 2017; Adulyapichet, 2001; Baker et al., 2017). The evidence we provide supports this interpretation, and is consonant with other Southeast Asian land-related features, such as matrilocality and matrilineage (Huntrakul, 2003).

 

Table 1.

To contact the authors:

Thanyaporn Chankrajang, Thanyaporn.C@chula.ac.th

Jessica Vechbanyongratana, ajarn.jessica@gmail.com
@j_vechbany

 

References

Adulyapichet, A., ‘Status and roles of Siamese women and men in the past: a case study from Khun Chang Khun Phan’ (thesis, Silpakorn Univ., 2001).

Allen, R. C., Bassino, J. P., Ma, D., Moll‐Murata, C., and Van Zanden, J. L. ‘Wages, prices, and living standards in China, 1738–1925: in comparison with Europe, Japan, and India’, Economic History Review, 64 (2011), pp. 8-38.

Allendorf, K., ‘Do women’s land rights promote empowerment and child health in Nepal?’,  World development, 35 (2007), pp. 1975-88.

Baker, C., and Phongpaichit, P., A history of Ayutthaya: Siam in the early modern world (Cambridge, 2017).

Cha, M. S. ‘Unskilled wage gaps within the Japanese Empire’, Economic History Review, 68 (2015), pp. 23-47.

Chankrajang, T. and Vechbanyongratana, J. ‘Canals and orchards: the impact of transport network access on agricultural productivity in nineteenth-century Bangkok’, Journal of Economic History, forthcoming.

Chankrajang, T. and Vechbanyongratana, J. ‘Land, ladies, and the law: a case study on women’s land rights and welfare in Southeast Asia in the nineteenth century’, Economic History Review, forthcoming.

Doepke, M., Tertilt, M., and Voena, A., ‘The economics and politics of women’s rights’, Annual Review of Economics, 4 (2012), pp. 339-72.

Doss, C., ‘Intrahousehold bargaining and resource allocation in developing countries’, World Bank Research Observer 28 (2013), pp.52-78.

Goldstein, M., and Udry, C., ‘The profits of power: land rights and agricultural investment in Ghana’, Journal of Political Economy, 116 (2008), pp. 981-1022.

Hirschman, C. ‘Gender, the status of women, and family structure in Malaysia’, Malaysian Journal of Economic Studies 53 (2017), pp. 33-50.

Huntrakul, P., ‘Thai women in the three seals code: from matriarchy to subordination’, Journal of Letters, 32 (2003), pp. 246-99.

Menon, N., van der Meulen Rodgers, Y., and Kennedy, A. R., ‘Land reform and welfare in Vietnam: why gender of the land‐rights holder matters’, Journal of International Development, 29 (2017), pp. 454-72.

 

Early-life disease exposure and occupational status

by Martin Saavedra (Oberlin College and Conservatory)

This blog is part H of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. This blog is based on the article ‘Early-life disease exposure and occupational status: The impact of yellow fever during the 19th century’, in Explorations in Economic History, 64 (2017): 62-81. https://doi.org/10.1016/j.eeh.2017.01.003   

 

A girl suffering from yellow fever. Watercolour. Available at Wellcome Images.

Like epidemics, shocks to public health have the potential to affect human capital accumulation. A literature in health economics known as the ‘fetal origins hypothesis’ has examined how in utero exposure to infectious disease affects labor market outcomes. Individuals may be more sensitive to health shocks during the developmental stage of life than during later stages of childhood. For good reason, much of this literature focuses on the 1918 influenza pandemic, which was a huge shock to mortality and one of the few events that is clearly visible in life expectancy trends in the United States. However, there are limitations to looking at the 1918 influenza pandemic because it coincided with the First World War. Another complication in this literature is that cities with outbreaks of infectious disease often engaged in many forms of social distancing by closing schools and businesses. This is true for the 1918 influenza pandemic, but also for other diseases. For example, many schools were closed during the polio epidemic of 1916.

So, how can we estimate the long-run effects of infectious disease when cities simultaneously responded to outbreaks? One possibility is to look at a disease that differentially affected some groups within the same city, such as yellow fever during the nineteenth century. Yellow fever is a viral infection spread by the Aedes aegypti mosquito and is still endemic in parts of Africa and South America. The disease kills roughly 50,000 people per year, even though a vaccine has existed for decades. Symptoms include fever, muscle pain, chills, and jaundice, from which the disease derives its name.

During the eighteenth and nineteenth centuries, yellow fever plagued American cities, particularly port cities that traded with the Caribbean islands. In 1793, over 5,000 Philadelphians likely died of yellow fever. This would be a devastating number in any city, even by today’s standards, but it is even more so considering that in 1790 Philadelphia had a population of less than 29,000.

By the mid-nineteenth century, southern port cities grew, and yellow fever stopped occurring in cities as far north as Philadelphia. The graph below displays the number of yellow fever fatalities in four southern port cities — New Orleans, LA; Mobile, AL; Charleston, SC; and Norfolk, VA — during the nineteenth century. Yellow fever was sporadic, devastating a city in one year and often leaving it untouched in the next. For example, yellow fever killed nearly 8,000 New Orleanians in 1853, and over 2,000 in both 1854 and 1855. In the next two years, yellow fever killed fewer than 200 New Orleanians per year; then it came back, killing over 3,500 in 1858. Norfolk, VA was struck only once, in 1855. Since yellow fever never struck Norfolk during milder years, the population lacked immunity, and approximately 10 percent of the city died in 1855. Charleston and Mobile show similar sporadic patterns. Likely due to the Union’s naval blockade, yellow fever did not visit any American port cities in large numbers during the Civil War.

 

Figure: Yellow fever fatalities in New Orleans, Mobile, Charleston and Norfolk during the nineteenth century.
Source: As per original article.

 

Immigrants were particularly prone to yellow fever because they often came from European countries rarely visited by yellow fever. Native New Orleanians, however, typically caught yellow fever during a mild year as children and were then immune to the disease for the rest of their lives. For this reason, yellow fever earned the name the “stranger’s disease.”

Data from the full count of the 1880 census show that yellow fever fatality rates during an individual’s year of birth negatively affected adult occupational status, but only for individuals with foreign-born mothers. Those with US-born mothers were relatively unaffected by the disease. There are also effects for those who were exposed to yellow fever one or two years after their birth, but there are no effects, not even for those with immigrant mothers, from exposure three or four years after birth. These results suggest that early-life exposure to infectious disease, and not just city-wide responses to disease, influences human capital development.
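
A minimal sketch of a design in this spirit, interacting exposure at birth with having a foreign-born mother; the data-generating step and all variable names are invented for illustration, not the article’s replication code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
fever = rng.uniform(0, 0.1, n)          # fatality rate in the birth year
foreign_mother = rng.integers(0, 2, n)  # foreign-born mother indicator
birth_year = rng.integers(1840, 1861, n)
# hypothetical process: exposure hurts only children of immigrant mothers
occ_score = 20 - 30 * fever * foreign_mother + rng.normal(0, 5, n)

census = pd.DataFrame({"occ_score": occ_score, "fever": fever,
                       "foreign_mother": foreign_mother,
                       "birth_year": birth_year})

model = smf.ols("occ_score ~ fever * foreign_mother + C(birth_year)",
                data=census).fit()
print(model.params["fever:foreign_mother"])  # negative interaction, ~ -30
```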


Martin Saavedra

Martin.Saavedra@oberlin.edu

 

The Origins of Political and Property Rights in Bronze Age Mesopotamia

by Giacomo Benati, Carmine Guerriero, Federico Zaina (University of Bologna)

The full paper from this blog is available here

Mesopotamian map of canals. Available at <http://factsanddetails.com/world/cat56/sub363/item1513.html>

Despite the overwhelming empirical evidence documenting the relevance of inclusive political institutions and strong property rights, we still lack a unified framework identifying their determinants and their interaction. We develop a model to address this deficiency, and we test its implications on novel data on Greater Mesopotamia during the Bronze Age.

This region developed the first recorded forms of stable state institutions, which can be credibly linked to geography. Worsening climatic conditions between the end of the Uruk period (3300-3100 BC) and the beginning of the Jemdet Nasr and Early Dynastic periods (3100-2550 BC) reduced farming returns and forced the religious elites to share power, previously acquired from the landholding households, with rising military elites. This transformation led the peasants to engage in leasing, renting and tenure-for-service contracts requiring rents and corvée, such as participation in infrastructure projects and a conscripted army. Being an empowerment mechanism, the latter was the citizens’ preferred public good. Next, the Pre-Sargonic period (2550-2350 BC) witnessed a milder climate, which curbed the temple and palatial elites’ need to share their policy-making power. Finally, a period of harsher climate, and the consequent rise of long-distance trade as an alternative activity, allowed the town elites to establish themselves as the third decision-maker during the Mesopotamian empires period (2350-1750 BC). Reforms towards more inclusive political institutions were accompanied by a shift towards stronger farmers’ rights on land and a larger provision of public goods, especially those most valued by the citizens, i.e., the conscripted army.

To elucidate the incentives behind these stylized facts, we consider the interaction between a land-owning elite and citizens able to deliver a valuable harvest if the imperfectly observable farming conditions were favorable. To incentivize investment, the elite cannot commit to direct transfers, but can lean on two other instruments: establishing a more inclusive political process, which allows citizens to select tax rates and organize public good provision, and/or punishing citizens for suspected shirking by restricting their private rights. This ‘stick’ is costly for the elite. When the expected harvest value is barely greater than the investment cost, citizens cooperate only under full property rights and more inclusive political institutions, allowing them to fully tax the output. When the investment return is intermediate, the elite keeps control over fiscal policies and can implement partial taxation. When, finally, the investment return is large, the elite can also weaken the protection of private rights. Yet, embracing the stick is optimal only if production is sufficiently transparent, and, thus, punishment effectively disciplines a shirking citizen. Our model has three implications. First, the inclusiveness of political institutions declines with expected farming returns and is unrelated to the opaqueness of farming. Second, the strength of property rights diminishes with the expected harvest return and is positively related to the opaqueness of farming. Finally, citizens’ expected utility from public good provision increases with the strength of political and property rights.
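
These comparative statics can be rendered as a toy decision rule; the numeric thresholds below are invented placeholders, not the paper’s calibration.

```python
def regime(expected_return, investment_cost, opaqueness,
           low_margin=1.2, high_margin=2.0, opaqueness_cutoff=0.5):
    """Map farming fundamentals to (political institutions, property rights).

    Hypothetical illustration of the three cases described above; all
    numeric thresholds are placeholders.
    """
    ratio = expected_return / investment_cost
    if ratio <= 1.0:
        return ("no investment", "n/a")
    if ratio < low_margin:
        # return barely above cost: citizens must be granted everything
        return ("inclusive", "full property rights, citizens set taxes")
    if ratio < high_margin:
        return ("elite-controlled", "full property rights, partial taxation")
    # large returns: the costly 'stick' pays off only when production is
    # transparent enough for punishment to deter shirking
    if opaqueness < opaqueness_cutoff:
        return ("elite-controlled", "weakened property rights")
    return ("elite-controlled", "full property rights, partial taxation")

print(regime(1.1, 1.0, opaqueness=0.8))  # inclusive institutions
print(regime(2.5, 1.0, opaqueness=0.2))  # stick: weakened property rights
```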

To evaluate these predictions, we study 44 major Mesopotamian polities observed between 3050 and 1750 BC. To proxy the expected farming return, we combine information on the growing-season temperature, averaged—as is any other non-institutional variable—over the previous half-century, with land suitability for wheat, barley and olive cultivation (Figure 1). This measure is strongly correlated with contemporaneous barley yields in l/ha. Turning to the opaqueness of the farming process, we consider the exogenous spread of viticulture through inter-palatial trade. Because of its diplomatic and ritual roles, wine represented one of the exports most valued by the ruling elites. Regarding common-interest goods, we gather information on the number of public and ritual buildings and the existence of a conscripted army. To measure the strength of political and property rights, we construct a five-point score rising with the division of decision-making power, and a six-point index increasing when land exploitation by the elite was indirect rather than direct and/or farmers’ rights were enforced de jure rather than de facto. These two variables build on the events in a 40-year window around each time period.

Conditional on polity and half-century fixed effects, our OLS estimates imply that the strength of political and property rights is significantly and inversely related to the expected farming return, whereas only the protection of private property is significantly and positively driven by the opaqueness of farming. Finally, public good provision is unrelated to property rights protection but significantly and positively linked to the inclusiveness of the political process, and more so when the public good was the setup of a conscripted army.
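
A minimal sketch of a two-way fixed-effects specification of this kind; the data-generating step and variable names are invented to mirror the reported signs, not the authors’ code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# 44 polities observed over half-centuries between 3050 and 1750 BC
panel = pd.DataFrame([(p, t) for p in range(44) for t in range(26)],
                     columns=["polity", "half_century"])
m = len(panel)
panel["farming_return"] = rng.uniform(0, 1, m)
panel["opaqueness"] = rng.uniform(0, 1, m)
# hypothetical process mirroring the reported signs
panel["political_rights"] = (3 - 2 * panel["farming_return"]
                             + rng.normal(0, 0.5, m))
panel["property_rights"] = (4 - 2 * panel["farming_return"]
                            + 1.5 * panel["opaqueness"]
                            + rng.normal(0, 0.5, m))

fe = " + C(polity) + C(half_century)"
for outcome in ("political_rights", "property_rights"):
    fit = smf.ols(outcome + " ~ farming_return + opaqueness" + fe, panel).fit()
    print(outcome, fit.params[["farming_return", "opaqueness"]].round(2).values)
```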

These results open three crucial avenues for further research. First, did reforms towards stronger political and property rights foster, thanks to the larger provision of public goods, economic development (Guerriero and Righi, 2019)? Second, did the most politically developed polities obstruct the market integration of the Mesopotamian empires, pushing the rulers to impose a complex bureaucracy on all of them and extractive policies on the less militarily relevant ones (de Oliveira and Guerriero, 2018; Guerriero, 2019a)? Finally, did reforms towards a more inclusive political process foster the centralization of the legal order, i.e., reforms towards statutory law, bright-line procedural rules and a strong protection of the potential buyers’ reliance on their contracts (Guerriero 2016; 2019b)?

Figure 1: Political and Property Rights, Public Good Provision and Farming Return.        Source: as per published paper

 

References

de Oliveira, Guilherme, and Carmine Guerriero. 2018. “Extractive States: The Case of the Italian Unification.” International Review of Law and Economics, 56: 142-159.

Guerriero, Carmine. 2016. “Endogenous Legal Traditions.” International Review of Law and Economics, 46: 49-69.

Guerriero, Carmine. 2019a. “Endogenous Institutions and Economic Outcomes.” Forthcoming, Economica.

Guerriero, Carmine. 2019b. “Property Rights, Transaction Costs, and the Limits of the Market.” Unpublished.

Guerriero, Carmine, and Laura Righi. 2019. “The Origins of the State’s Fiscal Capacity: Culture, Democracy, and Warfare.” Unpublished.

North, Douglass C., and Barry R. Weingast. 1989. “Constitutions and Commitment: The Evolution of Institutions Governing Public Choice in Seventeenth-Century England.” Journal of Economic History, 49: 803-832.

 

To contact the authors

Giacomo Benati (giacomo.benati2@unibo.it)

Carmine Guerriero (c.guerriero@unibo.it)

Federico Zaina (Federico.zaina@unibo.it)

Early View: Slavery and Anglo-American capitalism revisited

by Gavin Wright (Stanford University)

The full paper for this research has now been published in The Economic History Review and is available on Early View here

 

Slaves cutting sugar cane, taken from ‘Ten Views in the Island of Antigua’ by William Clark. Available at Wikimedia Commons.

For decades, scholars have debated the role of slavery in the rise of industrial capitalism, from the British Industrial Revolution of the eighteenth century to the acceleration of the American economy in the nineteenth century.

Most recent studies find an important element of truth in the thesis associated with Eric Williams that links the slave trade and slave-based commerce with early British industrial development. Long-distance markets were crucial supports for technological progress and for the infrastructure of financial markets and the shipping sector.

But the eighteenth-century Atlantic economy was dominated by sugar, and sugar was dominated by slavery. The role of the slave trade was central to the process, because it would have been all but impossible to attract a free labour force to the brutal and deadly conditions that prevailed in sugar cultivation. As the mercantilist Sir James Steuart asked in 1767: ‘Could the sugar islands be cultivated to any advantage by hired labour?’

Adherents of an insurgency known as the New History of Capitalism have extended this line of analysis to nineteenth-century America, maintaining that ‘During the eighty years between the American Revolution and the Civil War, slavery was indispensable to the economic development of the United States.’ A crucial linkage in this perspective is between slave-grown cotton and the cotton textile industries of both Britain and the United States, as asserted by Marx: ‘Without slavery you have no cotton; without cotton you have no modern industry.’

My research, to be presented in this year’s Tawney Lecture to the Economic History Society’s annual conference, argues, to the contrary, that such analyses overlook the second part of the Williams thesis, which held that industrial capitalism abandoned slavery because it was no longer needed for continued economic expansion. We need not ascribe cynical or self-interested motives to the abolitionists to assert that these forces were able to succeed because the political-economic consensus that had supported slavery in the eighteenth century no longer prevailed in the nineteenth.

Between the American Revolution in 1776 and the end of the Napoleonic Wars in 1815, the demands of industrial capitalism changed in fundamental ways: expansion of new export markets in non-slave areas; streamlined channels for migration of free labour; the shift of the primary raw material from sugar to cotton. Unlike sugar, cotton was not confined to unhealthy locations, did not require large fixed capital investment, and would have spread rapidly through the American South, with or without slavery.

These historic shifts were recognised in the United States as in Britain, as indicated by the post-Revolutionary abolitions in the northern states and territories. To be sure, southern slavery was highly profitable to the owners, and the slave economy experienced considerable growth in the antebellum period. But the southern regional economy seemed increasingly out of step with the US mainstream, its centrality for national prosperity diminishing over time.

Indeed, my study asserts that on balance the persistence of slavery actually reduced the growth of cotton supply compared with a free-labour alternative. The truth of this proposition is most clearly demonstrated by the expansion of production after the Civil War and emancipation, and the return of world cotton prices to their pre-war levels.

Sanitary infrastructures and the decline of mortality in Germany, 1877-1913

by Daniel Gallardo Albarrán (Wageningen University)

The full article from this blog has now been published in The Economic History Review and is available for free on Early View for 7 days, at this link

Wellcome Collection. ‘The main drainage of the Metropolis’. Available at Wellcome Images.

Lack of access to clean water and sanitation facilities is still common across the globe, and infectious, water-transmitted illnesses are an important cause of death in the affected regions. Similarly, industrializing economies during the late 19th century exhibited extraordinarily high death rates from waterborne diseases. However, unlike contemporary developing countries, they experienced a large decrease in mortality in subsequent decades, which eventually eradicated deaths from waterborne diseases.

What explains this unprecedented improvement? The provision of safe drinking water is often considered a key factor. However, the prevalence of waterborne ailments transmitted through faecal-oral mechanisms is also determined by water contamination and/or the inadequate storage and disposal of human waste. Consequently, doubts remain about the efficacy of clean water per se to reduce mortality; this necessitates an integrative analysis considering both waterworks and sewerage systems.

My research adopts this approach by considering the case of Germany between 1877 and 1913, when both utilities were adopted nationally and crude death rates (CDR) and infant mortality rates (IMR) declined by almost 50 per cent. A quick glance at trends in mortality and the timing of sanitary infrastructures in Figure 1 suggests that improvements in water supply and sewage disposal are associated with better health outcomes. However, this evidence is only suggestive: Figure 1 presents the experience of only two cities and, importantly, factors outside public health investments — for example, better nutrition and improved infant care — may account for changes in mortality. To study the link between sanitary improvements and mortality more systematically, I examine two new datasets containing information on various measures of mortality at the city level (overall deaths, infant mortality and cause-specific deaths) and the timing of when municipalities began improving water supply and sewage disposal.

Figure 1: Mortality and sanitary interventions in two selected cities. Source: per original article. Note: The thick and thin vertical lines mark the initial year when cities had piped water and sewers.

The first set of results shows that piped water reduced mortality, although its effects were limited in the absence of efficient systems of waste removal. Together, the two sanitary interventions account for (at least) a fifth of the decrease in crude death rates between 1877 and 1913. Considering the fall in infant deaths instead, I find that sewers were equally important in providing effective protection against waterborne illnesses: improvements in water supply and sewage disposal explain a quarter of the fall in infant mortality rates.
I interpret these findings causally because both interventions had a persistent impact on mortality immediately following their implementation, and not before. As Figure 2 shows, CDR and IMR decline immediately after the construction of waterworks and sewerage, and mortality exhibits no statistically significant trends in the years preceding the sanitary interventions (the reference point for these comparisons is one year prior to construction). Furthermore, using cause-specific deaths I find that sanitary infrastructures are strongly associated with reductions in enteric-related illnesses, while deaths from a very different set of causes — homicides, suicides or accidents — are not affected.

Figure 2: The joint effect of water supply and sewerage over time. Source: per original article. Note: The figures show the joint effect of two variables capturing the existence (or lack thereof) of waterworks and sewerage over time on CDR and IMR. The vertical bars are 90 per cent confidence intervals. The reference year (−1) is one year prior to the intervention.
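
A minimal sketch of an event-study specification of the kind shown in Figure 2, with event time measured relative to the year both utilities were completed and t = −1 as the omitted reference period; the data-generating step and variable names are invented for illustration, not the article’s replication code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for city in range(50):
    build = rng.integers(1885, 1905)  # year both utilities were completed
    for year in range(1877, 1914):
        event_time = int(np.clip(year - build, -5, 5))  # binned at +/-5
        # hypothetical process: mortality falls only after construction
        log_cdr = 3.2 - 0.15 * (year >= build) + rng.normal(0, 0.05)
        rows.append((city, year, event_time, log_cdr))
panel = pd.DataFrame(rows, columns=["city", "year", "event_time", "log_cdr"])

model = smf.ols(
    "log_cdr ~ C(event_time, Treatment(reference=-1)) + C(city) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["city"]})
# pre-period coefficients ~ 0, post-period ~ -0.15: the Figure 2 pattern
print(model.params.filter(like="event_time").round(3))
```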

The second set of results relates to the heterogeneous effects of sanitary interventions along different dimensions. I find that their impact on mortality was less universal than hitherto thought, since their effectiveness largely depended on local characteristics such as income inequality or the availability of female employment.
In sum, my research shows that the mere provision of safe water is not sufficient to explain a significant fraction of the mortality decline in Germany at the turn of the 20th century. Investments in proper waste removal were needed to realize the full potential of piped water. Most importantly, the unequal mortality-reducing effect of sanitation calls for a deeper understanding of how local factors interact with public health policies. This is especially relevant today, as international initiatives, for example the Water, Sanitation and Hygiene programmes led by UNICEF, aim to promote universal access to sanitary services in markedly different local contexts.

To contact the author:

daniel.gallardoalbarran@wur.nl

Twitter:  @DanielGalAlb

Why did the industrial diet triumph?

by Fernando Collantes (University of Zaragoza and Instituto Agroalimentario de Aragón)

This blog is part of a larger research paper published in the Economic History Review.

 

Harvard food pyramid. Available at Wikimedia Commons.

Consumers in the Northern hemisphere are feeling increasingly uneasy about their industrial diet. Few question that during the twentieth century the industrial diet helped us solve the nutritional problems related to scarcity. But there is now growing recognition that the triumph of the industrial diet triggered new problems related to abundance, among them obesity, excessive consumerism and environmental degradation. Currently, alternatives ranging from organic food to products bearing geographical-‘quality’ labels struggle to transcend the industrial diet. Frequently, these alternatives face a major obstacle: their relatively high price compared to mass-produced and mass-retailed food.

The research that I have conducted examines the literature on nutritional transitions, food regimes and food history, and positions it within present-day debates on diet change in affluent societies. I employ a case study of the growth in mass consumption of dairy products in Spain between 1965 and 1990. In the mid-1960s, dairy consumption was very low in Spain and many suffered from calcium deficiency. Subsequently, there was rapid growth in consumption, and milk, especially, became an integral part of the population’s diet. Alongside mass consumption came mass production and complementary technical change. In the early 1960s, most consumers only drank raw milk, but by the 1990s milk was being sterilised and pasteurised to standard specifications by an emergent national dairy industry.

In the early 1960s, the regular purchase of milk was too expensive for most households. By the early 1990s, an increase in household incomes, complemented by (alleged) price reductions generated by dairy industrialization, facilitated rapid growth in milk consumption. A further factor aiding consumption was changing consumer preferences. Previously, consumers’ perceptions of milk were affected by recurrent episodes of poisoning and fraud. The process of dairy industrialization ensured a greater supply of ‘safe’ milk, and this encouraged consumers to use their increased real incomes to buy more milk. ‘Quality’ milk, meaning milk that was safe to consume, became the main theme in the advertising campaigns employed by milk processors (Figure 1).

 

Figure 1. Advertisement by La Lactaria Española in the early 1970s.

Source: Revista Española de Lechería, no. 90 (1973).

 

What are the implications of my research for contemporary debates on food quality? First, the transition toward a diet richer in organic foods and in foods characterised by short supply chains and artisan-like production, backed by geographical-quality labels, has more than niche relevance. There are historical precedents (such as the one studied in this article) of large sections of the populace being willing to pay premium prices for food products that were in some senses perceived as qualitatively superior to other, more ‘conventional’ alternatives. If it happened in the past, it can happen again. Indeed, new qualitative substitutions are already taking place. The key issue is the direction of this substitution. Will consumers use their affluence to ‘green’ their diet? Or will they use higher incomes to purchase more highly processed foods — with possibly negative implications for public health and environmental sustainability? This juncture between food-system dynamics and public policy is crucial. As Fernand Braudel argued, it is the extraordinary capacity for adaptation that defines capitalism. My research suggests that we need public policies that reorient food capitalism towards socially progressive ends.

 

To contact the author: collantf@unizar.es