Overcoming the Egyptian cotton crisis in the interwar period: the role of irrigation, drainage, new seeds and access to credit

By Ulas Karakoc (TOBB ETU, Ankara & Humboldt University Berlin) & Laura Panza (University of Melbourne)

The full article from this blog is forthcoming in the Economic History Review.

 

A study of diversity in Egyptian cotton, 1909. Available at Wikimedia Commons.

By 1914, Egypt’s large agricultural sector had been hit hard by declining yields in cotton production. Egypt at the time was a textbook case of export-led development. The decline in cotton yields — the ‘cotton crisis’ — was coupled with two other constraints: land scarcity and high population density. Nonetheless, Egyptian agriculture was able to overcome this crisis in the interwar period, despite unfavourable price shocks. The output stagnation between 1900 and the 1920s contrasts clearly with the subsequent recovery (Figure 1). In this paper, we examine empirically how this happened, focusing on the role of government investment in irrigation infrastructure, farmers’ crop choices (intra-cotton shifts), and access to credit.

 

Figure 1: Cotton output, acreage and yields, 1895-1940

Source: Annuaire Statistique (various issues)

 

The decline in yields was caused by expanded irrigation without sufficient drainage, leading to a higher water table, increased salination, and increased pest attacks on cotton (Radwan, 1974; Owen, 1968; Richards, 1982). The government introduced an extensive public works programme to reverse soil degradation and restore production. Simultaneously, Egypt’s farmers changed the type of cotton they were cultivating, shifting from the long-staple, low-yielding Sakellaridis to the medium-short-staple, high-yielding Achmouni, a shift which reflected income-maximizing preferences (Goldberg 2004 and 2006). Another important feature of the Egyptian economy between the 1920s and 1940s was the expansion of credit facilities and the connected increase in farmers’ access to agricultural loans. The interwar years witnessed the establishment of cooperatives to facilitate small landowners’ access to inputs (Issawi, 1954), and the foundation of the Crédit Agricole in 1931, offering small loans (Eshag and Kamal, 1967). These credit institutions coexisted with a number of mortgage banks, among which the Crédit Foncier was the largest, serving predominantly large landowners. Figure 2 illustrates the average annual real value of Crédit Foncier land mortgages in 1,000 Egyptian pounds (1926-1939).

 

Figure 2: Average annual real value of Crédit Foncier land mortgages in 1,000 Egyptian pounds (1926-1939)

Source: Annuaire Statistique (various issues)

 

Our work investigates the extent to which these factors contributed to the recovery of the raw cotton industry. Specifically: to what extent can intra-cotton shifts explain changes in total output? How did the increase in public works, mainly investment in the canal and drainage network, help boost production? And what role did differential access to credit play? To answer these questions, we construct a new dataset by exploiting official statistics (Annuaire Statistique de l’Egypte) covering 11 provinces and 17 years during 1923-1939. These data allow us to provide the first empirical estimates of Egyptian cotton output at the province level.
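To make the empirical approach concrete, the sketch below shows the kind of province-by-year fixed-effects regression such a dataset permits, written in Python with the linearmodels package. The file name, column names (achmouni_share, credit, drainage) and specification are illustrative placeholders only, not the article’s actual variables or estimates.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical province-year panel, 1923-1939 (11 provinces x 17 years).
df = pd.read_csv("egypt_cotton_panel.csv").set_index(["province", "year"])

# Cotton output regressed on the Achmouni acreage share, real credit, and
# public drainage works, with province and year fixed effects.
model = PanelOLS.from_formula(
    "cotton_output ~ achmouni_share + credit + drainage"
    " + EntityEffects + TimeEffects",
    data=df,
)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)
```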

Access to finance and improved seeds significantly increased cotton output. The declining price premium of Sakellaridis led to a large-scale switch to Achmouni, which indicates that farmers responded to market incentives in their cultivation choices. Our study shows that cultivators’ response to market changes was fundamental to the recovery of the cotton sector. Access to credit was also a strong determinant of cotton output, especially to the benefit of large landowners. That access to credit plays a vital role in enabling the adoption of productivity-enhancing innovations is consonant with the literature on the Green Revolution (Glaeser, 2010).

Our results show that the expansion of irrigation and drainage did not have a direct effect on output. However, we cannot rule out completely the role played by improved irrigation infrastructure because we do not observe investment in private drains, so we cannot assess complementarities between private and public drainage. Further, we find some evidence of a cumulative effect of drainage pipes, two to three years after installation.

The structure of land ownership, specifically the presence of large landowners, contributed to output recovery. Thus, despite institutional innovations designed to give small farmers better access to credit, large landowners benefitted disproportionally from credit availability. This is not a surprising finding: extreme inequality of land holdings had been a central feature of the country’s agricultural system for centuries.

 

References

Eshag, Eprime, and M. A. Kamal. “A Note on the Reform of the Rural Credit System in U.A.R (Egypt).” Bulletin of the Oxford University Institute of Economics & Statistics 29, no. 2 (1967): 95–107. https://doi.org/10.1111/j.1468-0084.1967.mp29002001.x.

Glaeser, Bernhard. The Green Revolution Revisited: Critique and Alternatives. Taylor & Francis, 2010.

Goldberg, Ellis. “Historiography of Crisis in the Egyptian Political Economy.” In Middle Eastern Historiographies: Narrating the Twentieth Century, edited by I. Gershoni, Amy Singer, and Hakan Erdem, 183–207. University of Washington Press, 2006.

———. Trade, Reputation and Child Labour in Twentieth-Century Egypt. Palgrave Macmillan, 2004.

Issawi, Charles. Egypt at Mid-Century. Oxford University Press, 1954.

Owen, Roger. “Agricultural Production in Historical Perspective: A Case Study of the Period 1890-1939.” In Egypt Since the Revolution, edited by P. Vatikiotis, 40–65, 1968.

Radwan, Samir. Capital Formation in Egyptian Industry and Agriculture, 1882-1967. Ithaca Press, 1974.

Richards, Alan. Egypt’s Agricultural Development, 1800-1980: Technical and Social Change. Westview Press, 1982.

 


Ulas Karakoc

ulaslar@gmail.com

 

Laura Panza

lpanza@unimelb.edu.au

 

Patents and Invention in Jamaica and the British Atlantic before 1857

By Aaron Graham (Oxford University)

This article will be published in the Economic History Review and is currently available on Early View.

 

Cardiff Hall, St. Ann's.
A Picturesque Tour of the Island of Jamaica, by James Hakewill (1875). Available at Wikimedia Commons.

For a long time the plantation colonies of the Americas were seen as backward and undeveloped, dependent for their wealth on the grinding enslavement of hundreds of thousands of people.  This was only part of the story, albeit a major one. Sugar, coffee, cotton, tobacco and indigo plantations were also some of the largest and most complex economic enterprises of the early industrial revolution, exceeding many textile factories in size and relying upon sophisticated technologies for the processing of raw materials.  My article looks at the patent system of Jamaica and the British Atlantic which supported this system, arguing that it facilitated a process of transatlantic invention, innovation and technological diffusion.

The first key finding concerns the nature of the patent system in Jamaica.  As in British America, patents were granted by colonial legislatures rather than by the Crown, and besides merely registering the proprietary right to an invention they often included further powers, to facilitate the process of licensing and diffusion.  They were therefore more akin to industrial subsidies than modern patents.  The corollary was that inventors had to demonstrate not just novelty but practicality and utility; in 1786, when two inventors competed to patent the same invention, the prize went to the one who provided a successful demonstration (Figure 1).   As a result, the bar was higher, and only about sixty patents were passed in Jamaica between 1664 and 1857, compared to the many thousands in Britain and the United States.

 

Figure 1. ‘Elevation & Plan of an Improved SUGAR MILL by Edward Woollery Esq of Jamaica’

Source: Bryan Edwards, The History, Civil and Commercial, of the British Colonies of the West Indies (London, 1794).

 

However, the second key finding is that this ‘bar’ was enough to make Jamaica one of the centres of colonial technological innovation before 1770, along with Barbados and South Carolina, which accounted for about two-thirds of the patents passed in that period.  All three were successful plantation colonies, where planters earned large amounts of money and had both the incentive and the means to invest heavily in technological innovations intended to improve efficiency and profits.  Patenting peaked in Jamaica between the 1760s and 1780s, as the island adapted to sudden economic change, as part of a package of measures that included opening up new lands, experimenting with new cane varieties, engaging in closer accounting, importing more slaves and developing new ways of working them harder.

A further finding of the article is that the English and Jamaican patent systems until 1852 were complementary.  Inventors in Britain could purchase an English patent with a ‘colonial clause’ extending it to colonial territories, but a Jamaican patent offered them additional powers and flexibility as they brought their inventions to Jamaica and adapted them to local conditions.  Inventors in Jamaica could obtain a local patent to protect their invention while they perfected it and prepared to market it in Britain.  The article shows how inventors used various strategies within the two systems to help support the process of turning their inventions into viable technologies.

Finally, the colonial patents operated alongside a system of grants, premiums and prizes operated by the Jamaican Assembly, which helped to support innovation by plugging the gaps left by the patent system.  Inventors who felt that their designs were too easily pirated, or that they themselves lacked the capacity to develop them properly, could ask for a grant instead that recompensed them for the costs of invention and made the new technology widely available.  Like the imperial and colonial patents, the grants were part of the strategies used to promote invention.

Indeed, sometimes the Assembly stepped in directly.  In 1799, Jean Baptiste Brouet asked the House for a patent for a machine for curing coffee.  The committee agreed that the invention was novel, useful and practical, ‘but as the petitioner has not been naturalised and is totally unable to pay the fees for a private bill’, they suggested granting him £350 instead, ‘as a full reward for his invention; [and] the machines constructed according to the model whereof may then be used by any person desirous of the same, without any license from or fee paid to the petitioner’.

The article therefore argues that Jamaican patents were part of a wider transatlantic system that acted to facilitate invention, innovation and technological diffusion in support of the plantation economy and slave society.

 


 

Aaron Graham

aaron.graham@history.ox.ac.uk

Famine, institutions, and indentured migration in colonial India

By Ashish Aggarwal (University of Warwick)

This blog is part of a series of New Researcher blogs.

 

Women fetching water in India in the late 19th century. Available at Wikimedia Commons.

A large share of the working population in developing countries is still engaged in agricultural activities. In India, for instance, over 40% of the employed population works in the agricultural sector and nearly three-quarters of households depend on rural incomes (World Bank[1]). In addition, the agricultural sector in developing countries is plagued by low investment, forcing workers to rely on natural sources of irrigation as opposed to perennial man-made sources. Gadgil and Gadgil (2006) study the agricultural sector in India during 1951-2003 and find that, despite the decline in the share of agriculture in GDP, severe droughts still reduce GDP by 2-5%. In such a context, any unanticipated deviation of rainfall from normal is bound to have adverse effects on productivity and, consequently, on the incomes of these workers. In this paper, I study whether workers adopt migration as a coping strategy in response to income risks arising from negative shocks to agriculture, and whether local institutions facilitate or hinder the use of this strategy. In a nutshell, the answers are yes and yes.

I study these questions in the context of indentured migration from colonial India to several British colonies. The abolition of slavery in the 1830s led to a demand for new sources of labour to work on plantations in the colonies. Starting with the “great experiment” in Mauritius (Carter, 1993), over a million Indians became indentured migrants, with Mauritius, British Guiana, Natal, and Trinidad the major destinations. Indentured migration from India was a system of voluntary migration, wherein passages were paid for and migrants earned fixed wages and rations. The exact terms varied across colonies, but generally the contracts were specified for a period of five years and, after ten years of residency in the colony, a paid-for return passage was also available.

Using a unique dataset on annual district-level outflows of indentured migrants from colonial India to several British colonies in the period 1860-1912, I find that famines increased indentured emigration. However, this effect varied according to the land-revenue collection system established by the British. Using the year the district was annexed by Britain to construct an instrument for the land revenue system (Banerjee and Iyer, 2005), I find that emigration responded less to famines in British districts where landlords collected revenue (as opposed to places where the individual cultivator was responsible for revenue payments). I also find this to be the case in Princely States. However, the reasons for these results are markedly different. Qualitative evidence suggests that landlords were unlikely to grant remissions to their tenants; this increased tenant debt, preventing them from migrating. Interlinked transactions and a general fear of the landlords prevented the tenants from defaulting on their debts. Such coercion was not witnessed in areas where landlords were not the revenue collectors, making it easier for people to migrate in times of distress. In Princely States, on the other hand, local rulers adopted liberal measures during famine years in order to help the population. These findings are robust to various placebo and robustness checks. The results are in line with Persaud (2019), who shows that people engaged in indentured migration to escape local price volatility.
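For illustration, the instrumental-variable logic described above could be sketched roughly as follows in Python (linearmodels). All variable names are hypothetical, the Banerjee–Iyer instrument is compressed into a single annexation-period dummy, and district and year fixed effects are omitted for brevity; this is not the paper’s actual specification.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical district-year panel of indentured emigration, 1860-1912.
df = pd.read_csv("india_indenture_panel.csv")

# Endogenous interaction: famine exposure x landlord-based revenue collection.
df["famine_x_landlord"] = df["famine"] * df["landlord_revenue"]
# Instrument built from annexation timing (Banerjee-Iyer style), interacted with famine.
df["famine_x_annex"] = df["famine"] * df["annexed_1820_56"]

model = IV2SLS.from_formula(
    "emigration ~ 1 + famine + [famine_x_landlord ~ famine_x_annex]",
    data=df,
)
print(model.fit(cov_type="clustered", clusters=df["district"]).summary)
```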

 

[1] https://www.worldbank.org/en/news/feature/2012/05/17/india-agriculture-issues-priorities

 

References

Banerjee, Abhijit, and Lakshmi Iyer (2005): “History, Institutions, and Economic Performance: The Legacy of Colonial Land Tenure Systems in India”, American Economic Review, Vol. 95, No. 4, pp. 1190-1213.

Carter, Marina (1993): “The Transition from Slave to Indentured Labour in Mauritius”, Slavery and Abolition, 14:1, pp. 114-130.

Gadgil, Sulochana, and Siddhartha Gadgil (2006): “The Indian Monsoon, GDP and Agriculture”, Economic and Political Weekly, Vol. 41, No. 47, 4887-4895.

Persaud, Alexander (2019): “Escaping Local Risk by Entering Indentureship: Evidence from Nineteenth-Century Indian Migration”, Journal of Economic History, Vol. 79, No. 2, pp. 447-476.

 

 

Before the fall: quantity versus quality in pre–demographic transition Quebec (NR Online Session 3)

By Matthew Curtis (University of California, Davis)

This research is due to be presented in the third New Researcher Online Session: ‘Human Capital & Development’.


 

Map of East Canada or Quebec and New Brunswick, by John Tallis c.1850. Available at Wikimedia Commons.

While it plays a key role in theories of the transition to modern economic growth, there are few estimates of the quantity-quality trade-off from before the demographic transition. Using a uniquely suitable new dataset of vital records, I use two instrumental variable (IV) strategies to estimate the trade-off in Quebec between 1620 and 1850. I find that one additional child who survived past age one decreased the literacy rate (proxied by signatures) of their older siblings by 5 percentage points.

The first strategy exploits the fact that twin births, conditional on mother’s age and parity, are a random increase in family size. While twins are often used to identify the trade-off in contemporary studies, sufficiently large and reliable historical datasets containing twins are rare. I compare two families, one whose mother gave birth to twins and one whose mother gave birth to a singleton, both at the same parity and age. I then look at the probability that each older non-twin sibling signed their marriage record.
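A minimal sketch of how such a twin instrument might be implemented, assuming hypothetical column names (signed, n_children, twin_birth, mother_age, parity) and Python’s linearmodels package; the estimation in the paper itself is richer than this.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical layout: one row per older (non-twin) sibling.
df = pd.read_csv("quebec_siblings.csv")

# signed: older sibling signed their marriage record (literacy proxy)
# n_children: children in the family surviving past age one (endogenous)
# twin_birth: the later birth was a twin delivery (instrument)
# mother_age, parity: controls at the time of that birth
model = IV2SLS.from_formula(
    "signed ~ 1 + mother_age + parity + [n_children ~ twin_birth]",
    data=df,
)
print(model.fit(cov_type="robust").summary)
```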

For the second strategy, I posit that the aggregate, province-wide infant mortality rate during the year a younger child was born is exogenous to individual family characteristics. I compare two families, one whose mother gave birth during a year with a relatively high infant mortality rate and one whose mother gave birth during a year with a lower rate, both at the same parity and age. I then look at older siblings from both families who were born in the same year, controlling for potential time trends in literacy. As the two different IV techniques result in very similar estimates, I argue there is strong evidence of a modest trade-off.

By using two instruments, I am able to rule out one major source of potential bias. In many settings, IV estimates of the trade-off may be biased if parents reallocate resources towards (reinforcement) or away from (compensation) children with higher birth endowments. I show that both twins and children born in high mortality years have, on average, lower literacy rates than their older siblings. As one shock increases and one shock decreases family size, but both result in older siblings having relatively higher human capital, reinforcement or compensation would bias the estimates in different directions. As the estimates are very similar, I conclude there is no evidence that my estimates suffer from this bias.

Is the estimated trade-off economically significant? I compare Quebec to a society with similar culture and institutions: pre-Revolutionary rural France. Between  1628 and 1788, a woman surviving to age 40 in Quebec would expect to have 1.7 additional children surviving past age one compared to her rural French peers. The average literacy rate (again proxied by signatures) in France was about 9.5 percentage points higher than in Quebec. Assuming my estimate of the trade-off is a linear and constant effect (instead of just a local average), reducing family sizes to French levels would have increased literacy by 8.6 percentage points in the next generation, thereby eliminating most of the gap.
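As a back-of-the-envelope check, the rounded figures quoted above fit together as follows (the 8.6 point figure in the text comes from the unrounded estimate):

```latex
\Delta\,\text{literacy} \approx \underbrace{1.7}_{\text{extra surviving children}} \times \underbrace{5\ \text{pp}}_{\text{trade-off per child}} = 8.5\ \text{pp},
\qquad \text{against a France--Quebec literacy gap of } 9.5\ \text{pp}.
```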

However, pre-Revolutionary France was hardly a human capital-rich society. Proxying for the presence of the primary educators of the period (clergy and members of religious orders) with unmarried adults, I find plausible evidence that the trade-off was steeper in boroughs and decades with greater access to education. Altogether, I interpret my results as evidence that a trade-off existed which explains some of the differences across societies.

 

Data Sources

Henry, Louis, 1978. “Fécondité des mariages dans le quart Sud-Est de la France de 1670 a 1829,” Population (French Edition), 33 (4/5), 855–883.

IMPQ. 2019. Infrastructure intégrée des microdonnées historiques de la population du Québec (XVIIe – XXe siècle) (IMPQ). [Dataset]. Centre interuniversitaire d’études québécoises (CIEQ).

Programme de recherche en démographie historique (PRDH). 2019. Registre de la population du Québec ancien (RPQA). [Dataset]. Département de Démographie, Université de Montréal.

Projet BALSAC. 2019. Le fichier BALSAC. [Dataset]. L’Université du Québec à Chicoutimi.

Honest, sober and willing: Oxford college servants 1850-1939 (NR Online Session 3)

By Kathryne Crossley (University of Oxford)

This research is due to be presented in the third New Researcher Online Session: ‘Human Capital & Development’.


 

The library of Christ Church, Oxford from Rudolph Ackermann’s History of Oxford (1813). Available at Wikimedia Commons.

 

 

Oxford colleges were among the earliest employers in England to offer organised pension schemes for their workers. These schemes were remarkable for several reasons: they were early (the first was established in 1852); they included domestic servants, rather than white-collar workers; and colleges were unlike typical early adopters of pension schemes, which tended to be large bureaucratic organisations, such as railways or the civil service.

The schemes developed from various motives: from preventing poverty in workers’ old age to promoting middle-class values, like thrift and sobriety, through compulsory savings.

Until the Second World War, college servants were often described as a ‘labour aristocracy’, and while there were many successful senior servants, equally there were many casual, part-time and seasonal workers. The experience of these workers provides an unusually detailed look at the precarity of working-class life in the nineteenth and early twentieth centuries, and the strategies that workers developed to manage uncertainty, especially in old age.

My research uses a wide variety of archival sources, many previously unknown, from 19 Oxford colleges to consider why these colleges decided to overhaul servants’ pension provisions during this period, how retirement savings schemes were designed and implemented, and to try and understand what workers thought of these fundamental changes to the labour contract.

During this period, Oxford was a highly seasonal, low-waged economy. It was hard for many people to find enough work during the year to earn an adequate living, much less save for an old age they usually did not expect to see. Most men and women worked as long as they were capable, often past what we think of as a typical retirement age today.

It’s no surprise then that the protections against illness, disability, old age and death offered by these paternalistic employers encouraged a highly competitive labour market for college work, and the promise of an ex gratia, or traditional non-contributory pension, was one of the most attractive features of college employment.

For centuries, colleges awarded these traditional pensions to workers. Rights to these pensions, which usually replaced about a quarter to a third of a worker’s total earnings, were insecure and awards were made entirely at the discretion of the college.

In 1852, the first retirement savings scheme for Oxford college servants was created at Balliol College. By the 1920s, traditional non-contributory pensions had been replaced by contributory schemes at most Oxford colleges, shifting the risk of old age from employers to employees. Even though making contributions often meant a decrease in take-home pay, servants always preferred a guaranteed pension entitlement over traditional non-contributory pensions.

The earliest savings schemes mandated the purchase of life insurance policies. These were intended not only to protect a servant’s dependent family members, but also to limit the college’s financial liability in the event of a servant’s death. Servants were similarly risk-averse and often purchased multiple policies when they could afford to; many joined friendly societies and purchased insurance privately, in addition to employer-directed schemes.

The popularity of these schemes among Oxford colleges mirrors the growth of the insurance industry and the development of actuarial science during this period. By the 1870s, nearly all schemes included annuities or endowment assurance policies, which provided a guaranteed income for servants, usually at age 60-65, and facilitated the introduction of mandatory retirement ages for these workers.

Traditional paternalism remained influential throughout the period. Colleges insisted on controlling insurance policies, naming themselves as beneficiaries and directing the proceeds. Women, who were more likely to be in low-waged seasonal work, were nearly always excluded from these schemes and had to depend on ex gratia pension awards much longer than their male colleagues.

These early pension schemes offered no protection against inflation and colleges were usually slow to increase pension awards in response to rising prices. By the end of the Great War, dissatisfaction with inadequate pensions was one of several factors that pushed college servants to form a trade union in 1919.

 

Land distribution and Inequality in a Black Settler Colony: The Case of Sierra Leone, 1792–1831

by Stefania Galli and Klas Rönnbäck (University of Gothenburg)

The full article from this blog is published in the European Review of Economic History and is available open access at this link

“Houses at Sierra-Leone”, Wesleyan Juvenile Offering: A Miscellany of Missionary Information for Young Persons, volume X, May 1853, pp. 55–57, illustration on p. 55. Available on Wikimedia

Land distribution has been identified as a key contributor to economic inequality in pre-industrial societies. Historical evidence on the link between land distribution and inequality for the African continent is scant, unlike the large body of research available for Europe and the Americas. Our article examines inequality in land ownership in Sierra Leone during the early nineteenth century. Our contribution is unique because it studies land inequality at a particularly early stage in African economic history.

In 1787 the Sierra Leone colony was born, the first British colony to be founded after the American War of Independence. The colony had some peculiar features. Although it was populated by settlers, they were not of European origin, as in most settler colonies founded at the time. Rather, Sierra Leone came to be populated by people of African descent — a mix of former and liberated slaves from America, Europe and Africa. Furthermore, Sierra Leone had deeply egalitarian foundations, which rendered it more similar to a utopian society than to other colonies founded on the African continent in subsequent decades. The founders of the colony intended egalitarian land distribution for all settlers, aiming to create a black yeoman settler society.

In our study, we rely on a new dataset constructed from multiple sources pertaining to the early years of Sierra Leone, which provide evidence on household land distribution for three benchmark years: 1792, 1800 and 1831. The first two benchmarks refer to a time when demographic pressure in the Colony was limited, while the last benchmark represents a period of rapidly increasing demographic pressure due to the inflow of ‘liberated slaves’ from captured slave ships landed at Freetown.

Our findings show that, in its early days, the colony was characterized by highly egalitarian land distribution, possibly the most equal distribution calculated to date. All households possessed some land, in a distribution determined to a large extent by household size. Not only were there no landless households in 1792 and 1800, but land was normally distributed around the mean. Based on these results, we conclude that the ideological foundations of the colony were manifested in egalitarian distribution of land.

Such ideological convictions were, however, hard to maintain in the long run due to mounting demographic pressure and limited government funding. Land inequality thus increased substantially by the last benchmark year (Figure 1). In 1831, land distribution was positively skewed, with a substantial proportion of households in the sample being landless or owning plots much smaller than the median, while a few households held very large plots. We argue that these findings are consistent with an institutional shift in redistributive policy, which enabled inequality to grow rapidly. In the early days, all settlers received a set amount of land. However, by 1831, land could be appropriated freely by the settlers, enabling households to appropriate land according to their ability, but also according to their wish to participate in agricultural production. Specifically, households in more fertile regions appear to have specialized in agricultural production, whereas households in regions unsuitable to agriculture increasingly came to focus upon other economic activities.
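The move from an egalitarian to a skewed distribution can be summarised with a standard inequality index such as the Gini coefficient. The short Python sketch below shows the calculation on two invented vectors of plot sizes; they are purely illustrative and are not the article’s data.

```python
import numpy as np

def gini(holdings):
    """Gini coefficient of plot sizes; zeros represent landless households."""
    x = np.sort(np.asarray(holdings, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard discrete formula derived from the Lorenz curve.
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

# Invented examples: a near-equal allocation versus a skewed one with
# landless households and one very large plot.
print(gini([4, 5, 5, 6, 5, 5]))    # close to 0 (highly egalitarian)
print(gini([0, 0, 1, 2, 3, 40]))   # much closer to 1 (highly unequal)
```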

Figure 1. Land distribution in Sierra Leone, 1792, 1800 and 1831. Source: as per article

Our results have two implications for the debate on the origin of inequality. First, Sierra Leone shows how idealist motives had important consequences for inequality. This is of key importance for wider discussions on the extent to which politics generates tangible changes in society. Second, our results show how difficult it was to sustain such ideals when confronted by mounting material challenges.

 

To contact the authors:

Stefania Galli (stefania.galli@gu.se)

Twitter: https://twitter.com/galli_stef

Corporate Social Responsibility for workers: Pirelli (1950-1980)

by Ilaria Suffia (Università Cattolica, Milan)

This blog is part of our EHS Annual Conference 2020 Blog Series.

 

 

Pirelli headquarters in Milan’s Bicocca district. Available at Wikimedia Commons.

Corporate social responsibility (CSR) in relation to the workforce has generated extensive academic and public debate. In this paper I evaluate Pirelli’s approach to CSR, by exploring its archives over the period 1950 to 1967.

Pirelli, founded in Milan by Giovanni Battista Pirelli in 1872, introduced industrial welfare for its employees and their families from its inception. In 1950, it deepened its relationship with them by publishing ‘Fatti e Notizie’ [Events and News], the company’s in-house newspaper. The journal was intended to share information with workers at every level and, above all, to strengthen relationships within the ‘Pirelli family’.

Pirelli’s industrial welfare began in the 1870s and, by the end of the decade, a mutual aid fund and institutions for its employees’ families (a kindergarten and a school) had been established. Over the next 20 years, the company set the basis of its welfare policy, which encompassed three main features: a series of ‘workplace’ protections, including accident and maternity assistance; ‘family assistance’, including (in addition to the kindergarten and school) seasonal care for children; and, finally, a commitment to the professional training of its workers.

In the 1920s, the company’s welfare provision expanded. In 1926, Pirelli created a health care service for the whole family and, in the same period, sport, culture and ‘free time’ activities became the main pillars of its CSR. Pirelli also provided houses for its workers, best exemplified by the ‘Pirelli Village’ of 1921. After 1945, Pirelli continued its welfare policy. The company started a new programme of workers’ housing construction (based on national provision), expanded its Village, and founded a professional training institute dedicated to Piero Pirelli. The establishment in 1950 of the company journal, ‘Fatti e Notizie’, can be considered part of Pirelli’s welfare activities.

‘Fatti e Notizie’ was designed to improve internal communication within the company, especially with Pirelli’s workers. Subsequently, Pirelli also introduced in-house articles on current news and special pieces on economics, law and politics. My analysis of ‘Fatti e Notizie’ demonstrates that welfare news initially occupied about 80 per cent of coverage, but declined after the mid-1950s, falling to around 50 per cent by the late 1960s.

The welfare articles indicate that the type of communication depended on subject matter. Thus, health care, news on colleagues, sport and culture were mainly ‘instructive’, reporting information and keeping readers up to date with events. ‘Official’ communications on subjects such as CEO reports and financial statements utilised ‘top to bottom’ articles. Cooperation, often reinforced with propaganda language, was promoted for accident prevention and workplace safety. Moreover, this kind of communication was applied to ‘bottom to top’ messages, such as an ‘ideas box’ in which workers presented their suggestions to improve production processes or safety.

My analysis shows that the communication model implemented by Pirelli moved from capitulation (where the managerial view prevails) in the 1950s to trivialisation (dealing only with ‘neutral’ topics) from the 1960s.

 

 

Ilaria Suffia: ilaria.suffia@unicatt.it

Infant and child mortality by socioeconomic status in early nineteenth century England

by Hannaliis Jaadla (University of Cambridge)

The full article from this blog (co-authored with E. Potter, S. Keibek, and R. J. Davenport) was published in The Economic History Review and is now available on Early View at this link

Figure 1. Thomas George Webster, ‘Sickness and health’ (1843). Source: The Wordsworth Trust, licensed under CC BY-NC-SA

Socioeconomic gradients in health and mortality are ubiquitous in modern populations. Today life expectancy is generally positively correlated with individual or ecological measures of income, educational attainment and status within national populations. However, in stark contrast to these modern patterns, there is little evidence for such pervasive advantages of wealth to survival in historical populations before the nineteenth century.

In this study, we tested whether a socioeconomic gradient in child survival was already present in early nineteenth-century England using individual-level data on infant and child mortality for eight parishes from the Cambridge Group family reconstitution dataset (Wrigley et al. 1997). We used the paternal occupational descriptors routinely recorded in the Anglican baptism registers for the period 1813–1837 to compare infant (under 1) and early childhood (age 1–4) mortality by social status. To capture differences in survivorship we compared multiple measures of status: HISCAM, HISCLASS, and also a continuous measure of wealth, which was estimated by ranking paternal occupations by the propensity for their movable wealth to be inventoried upon death (Keibek 2017). The main analytical tool was event history analysis, where individuals were followed from baptism or birth through the first five years of life, or until their death, or leaving the sample for other reasons.
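One common way to implement such an event-history analysis is a Cox proportional hazards model; the sketch below uses Python’s lifelines package with hypothetical column names and stratification, and is not the article’s exact specification.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical layout: one row per child, followed from baptism/birth until
# death, loss to observation, or the fifth birthday, whichever comes first.
kids = pd.read_csv("reconstitution_children.csv")

cph = CoxPHFitter()
cph.fit(
    kids[["duration_years", "died", "hiscam_score", "wealth_rank", "parish_id"]],
    duration_col="duration_years",
    event_col="died",
    strata=["parish_id"],  # compare children within the same parish
)
cph.print_summary()
```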

Were socioeconomic differentials in mortality present in the English population by the early nineteenth century, as suggested by theorists of historical social inequalities (Antonovsky 1967; Kunitz 1987)? Our results provide a qualified yes. We did detect differentials in child survival by paternal or household wealth in the first five years of life. However the effects of wealth were muted, and non-linear. Instead we found a U-shaped relationship between paternal social status and survival, with the children of poor labourers or wealthier fathers enjoying relatively high survival chances.  Socioeconomic differentials emerged only after the first year of life (when mortality rates were highest), and were strongest at age one. Summed over the first five years of life, however, the advantages of wealth were marginal. Furthermore, the advantages of wealth were only observed once the anomalously low mortality of labourers’ children was taken into account.

As might be expected, these results provide evidence for the contribution of both environment and household or familial factors. In infancy, mortality varied between parishes, however the environmental hazards associated with industrialising or urban settlements appear to have operated fairly equally on households of differing socioeconomic status. It is likely that most infants in our eight  reconstitution parishes were breastfed throughout the first year of life – which  probably conferred a ubiquitous advantage that overwhelmed other material differences in household conditions, for example, maternal nutrition.

To the extent that wealth conferred a survival advantage, did it operate through access to information, or to material resources? There was no evidence that literacy was important to child survival. However, our results suggest that cultural practices surrounding weaning may have been key. This was indicated by the peculiar age pattern of the socioeconomic gradient to survival, which was strongest in the second year of life, the year in which most children were weaned. We also found a marked survival advantage of longer birth intervals post-infancy, and this advantage accrued particularly to labourers’ children, because their mothers had longer than average birth intervals.

Our findings point to the importance of breastfeeding patterns in modulating the influence of socioeconomic status on infant and child survival. Breastfeeding practices varied enormously in historical populations, both geographically and by social status (Thorvaldsen 2008). These variations, together with the differential sorting of social groups into relatively healthy or unhealthy environments, probably explains the difficulty in pinpointing the emergence of socioeconomic gradients in survival, especially in infancy.

At ages 1–4 years we were able to demonstrate that the advantages of wealth and of a labouring father operated even at the level of individual parishes. That is, these advantages were not simply a function of the sorting of classes or occupations into different environments. These findings therefore implicate differences in household practices and conditions in the survival of children in our sample. This was clearest in the case of labourers. Labourers’ children enjoyed higher survival rates than predicted by household wealth, and this was associated with longer birth intervals (consistent with longer breastfeeding), as well as other factors that we could not identify, but which were probably not a function of rural isolation within parishes. Why labouring households should have differed in these ways remains unexplained.

To contact the author:  Hj309@cam.ac.uk

References

Antonovsky, A., ‘Social class, life expectancy and overall mortality’, Milbank Memorial Fund Quarterly, 45 (1967), pp. 31–73.

Keibek, S. A. J., ‘The male occupational structure of England and Wales, 1650–1850’, (unpub. Ph.D. thesis, Univ. of Cambridge, 2017).

Kunitz, S.J., ‘Making a long story short: a note on men’s height and mortality in England from the first through the nineteenth centuries’, Medical History, 31 (1987), pp. 269–80.

Thorvaldsen, G., ‘Was there a European breastfeeding pattern?’ History of the Family, 13 (2008), pp. 283–95.

Real urban wage in an agricultural economy without landless farmers: Serbia, 1862-1910

by Branko Milanović (City University New York and LSE)

This blog is based on a forthcoming article in The Economic History Review

Railway construction workers, ca.1900.

Calculations of historical welfare ratios (wages expressed in relation to the  subsistence needs of a wage-earner’s family) exist for many countries and time periods.  The original methodology was developed by Robert Allen (2001).  The objective of real wage studies is not only to estimate real wages but to assess living standards before the advent of national accounts.  This methodology has been employed to address key questions in economic history: income divergence between Northern Europe and China (Li and van Zanden, 2012; Allen, Bassino, Ma, Moll-Murata, and van Zanden, 2011); the  “Little Divergence”  (Pamuk 2007); development of North v. South America (Allen, Murphy and Schneider, 2012), and even the causes of the Industrial Revolution (Allen 2009; Humphries 2011; Stephenson 2018, 2019).

We apply this methodology to Serbia between 1862 and 1910, to consider the extent to which urban wages can approximate real income in an economy of small, peasant-owned farms and backward agricultural technology. Further, we extend debates on North v. South European divergence by focusing on Serbia, a South-East European country, in contrast to previous studies, which focus on Mediterranean countries (Pamuk 2007; Losa and Zarauz, forthcoming). This approach allows us to formulate a hypothesis regarding the social determination of wages.

Using Serbian wage and price data from 1862 to 1910, we calculate welfare ratios for unskilled (ordinary) and skilled (construction) urban workers. We use two different baskets of goods for wage comparison: a ‘subsistence’ basket that includes a very austere diet, clothing and housing needs, but no alcohol, and a ‘respectability’ basket, composed of a greater quantity and variety of goods, including alcohol.  We modify some of the usual assumptions found in the literature to better reflect the economic and demographic conditions of Serbia in the second half of the 19th century.  Based on contemporary sources, we make the assumption that the ‘work year’ was 200, not  250 days, and that the average family size was six, not four.  Both assumptions reduce the level of the welfare ratio, but do not affect its evolution.
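For readers unfamiliar with the measure, a welfare ratio is simply annual earnings divided by the annual cost of the chosen basket for the whole household. The snippet below is a stylised Python illustration using the modified assumptions just described (200 working days, a household of six); the wage and basket figures are invented, and the article’s adult-equivalence adjustments may differ.

```python
def welfare_ratio(daily_wage, working_days, basket_cost_per_person, household_size):
    """Annual wage earnings over the annual cost of the basket for the household."""
    annual_earnings = daily_wage * working_days
    annual_household_cost = basket_cost_per_person * household_size
    return annual_earnings / annual_household_cost

# Invented numbers: a daily wage that covers the family's subsistence basket
# roughly 1.5 times over, the level reported for unskilled urban workers.
print(welfare_ratio(daily_wage=1.5, working_days=200,
                    basket_cost_per_person=33.3, household_size=6))
```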

We find that the urban wage of unskilled workers was, on average, about 50 per cent higher than the subsistence basket for the family (Figure 1), and remained broadly constant throughout the period. This result confirms the absence of modern economic growth in Serbia (at least as far as the low-income population is concerned), and indicates economic divergence between South-East and Western Europe. Serbia diverged from Western Europe’s standard of living during the second half of the 19th century: in 1860 the welfare ratio in London was about three times higher than in urban Serbia, but by 1907 this gap had widened to more than five to one (Figure 1).

Figure 1. Welfare ratio (using subsistence basket), urban Serbia 1862-1910. Note: Under the assumptions of 200 working days per year, household size of 6, and inclusive of the daily food and wine allowance provided by the employer. Source: as per article.

 

In contrast, the welfare ratio of skilled construction workers was between 20 and 30 per cent higher in the 1900s than in the 1860s (Figure 1). This trend reflects modest economic progress as well as an increase in the skill premium, which has been observed for Ottoman Turkey (Pamuk 2016).

The wages of ordinary workers appear to move more closely with the ‘subsistence basket’, whereas the wages of skilled construction workers seem to vary with the cost of the ‘respectability basket’. This leads us to hypothesize that the wages of both groups were implicitly “indexed” to different baskets, reflecting the different value of the work done by each group.

Our results provide further insights into economic conditions in the 19th-century Balkans, and generate searching questions about the assumptions used in Allen-inspired work on real wages. The standard assumption of 250 days’ work per annum and a ‘typical’ family size of four may be undesirable for comparative purposes. The ultimate objective of real wage/welfare ratio studies is to provide more accurate assessments of real incomes between countries. Consequently, the assumptions underlying welfare ratios need to be country-specific.

To contact the author: bmilanovic@gc.cuny.edu

https://twitter.com/BrankoMilan

 

REFERENCES

Allen, Robert C. (2001), “The Great Divergence in European Wages and Prices from the Middle Ages to the First World War“, Explorations in Economic History, October.

Allen, Robert C. (2009), The British Industrial Revolution in Global Perspective, New Approaches to Economic and Social History, Cambridge.

Allen Robert C., Jean-Pascal Bassino, Debin Ma, Christine Moll-Murata and Jan Luiten van Zanden (2011), “Wages, prices, and living standards in China, 1738-1925: in comparison with Europe, Japan, and India”.  Economic History Review, vol. 64, pp. 8-36.

Allen, Robert C., Tommy E. Murphy and Eric B. Schneider (2012), “The colonial origins of the divergence in the Americas: A labor market approach”, Journal of Economic History, vol. 72, no. 4, December.

Humphries, Jane (2011), “The Lure of Aggregates and the Pitfalls of the Patriarchal Perspective: A Critique of the High-Wage Economy Interpretation of the British Industrial Revolution”, Discussion Papers in Economic and Social History, University of Oxford, No. 91.

Li, Bozhong and Jan Luiten van Zanden (2012), “Before the Great Divergence: Comparing the Yangzi delta and the Netherlands at the beginning of the nineteenth century”, Journal of Economic History, vol. 72, No. 4, pp. 956-989.

Losa, Ernesto Lopez and Santiago Piquero Zarauz, “Spanish Subsistence Wages and the Little Divergence in Europe, 1500-1800”, European Review of Economic History, forthcoming.

Pamuk, Şevket (2007), “The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600”, European Review of Economic History, vol. 11, 2007, pp. 280-317.

Pamuk, Şevket (2016),  “Economic Growth in Southeastern Europe and Eastern Mediterranean, 1820-1914”, Economic Alternatives, No. 3.

Stephenson, Judy Z. (2018), “ ‘Real’ wages? Contractors, workers, and pay in London building trades, 1650–1800’,  Economic History Review, vol. 71 (1), pp. 106-132.

Stephenson, Judy Z. (2019), “Working days in a London construction team in the eighteenth century: evidence from St Paul’s Cathedral”, The Economic History Review, published 18 September 2019. https://onlinelibrary.wiley.com/doi/abs/10.1111/ehr.12883.

 

 

Early-life disease exposure and occupational status

by Martin Saavedra (Oberlin College and Conservatory)

This blog is part H of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. This blog is based on the article ‘Early-life disease exposure and occupational status: The impact of yellow fever during the 19th century’, in Explorations in Economic History, 64 (2017): 62-81. https://doi.org/10.1016/j.eeh.2017.01.003   

 

A girl suffering from yellow fever. Watercolour. Available at Wellcome Images.

Like epidemics, shocks to public health have the potential to affect human capital accumulation. A literature in health economics known as the ‘fetal origins hypothesis’ has examined how in utero exposure to infectious disease affects labor market outcomes. Individuals may be more sensitive to health shocks during the developmental stage of life than during later stages of childhood. For good reason, much of this literature focuses on the 1918 influenza pandemic, which was a huge shock to mortality and one of the few events that can be visibly observed when examining life expectancy trends in the United States. However, there are limitations to looking at the 1918 influenza pandemic because it coincided with the First World War. Another complication in this literature is that cities with outbreaks of infectious disease often engaged in many forms of social distancing by closing schools and businesses. This is true for the 1918 influenza pandemic, but also for other diseases. For example, many schools were closed during the polio epidemic of 1916.

So, how can we estimate the long-run effects of infectious disease when cities simultaneously respond to outbreaks? One possibility is to look at a disease that differentially affected some groups within the same city, such as yellow fever during the nineteenth century. Yellow fever is a viral infection that spreads from the Aedes aegypti mosquito and is still endemic in parts of Africa and South America.  The disease kills roughly 50,000 people per year, even though a vaccine has existed for decades. Symptoms include fever, muscle pain, chills, and jaundice, from which the disease derives its name.

During the eighteenth and nineteenth centuries, yellow fever plagued American cities, particularly port cities that traded with the Caribbean islands. In 1793, over 5,000 Philadelphians likely died of yellow fever. This would be a devastating number in any city, even by today’s standards, but it is even more so considering that in 1790 Philadelphia had a population of less than 29,000.

By the mid-nineteenth century, Southern port cities had grown, and yellow fever stopped occurring in cities as far north as Philadelphia. The graph below displays the number of yellow fever fatalities in four southern port cities — New Orleans, LA; Mobile, AL; Charleston, SC; and Norfolk, VA — during the nineteenth century. Yellow fever was sporadic, devastating a city in one year and often leaving it untouched the next. For example, yellow fever killed nearly 8,000 New Orleanians in 1853, and over 2,000 in both 1854 and 1855. In the next two years, yellow fever killed fewer than 200 New Orleanians per year; it then came back, killing over 3,500 in 1858. Norfolk, VA was struck only once, in 1855. Since yellow fever had never struck Norfolk during milder years, the population lacked immunity and approximately 10 percent of the city died in 1855. Charleston and Mobile show similar sporadic patterns. Likely due to the Union’s naval blockade, yellow fever did not visit any American port cities in large numbers during the Civil War.

 

Figure: Yellow fever fatalities in New Orleans, Mobile, Charleston and Norfolk during the nineteenth century.
Source: As per original article.

 

Immigrants were particularly prone to yellow fever because they often came from European countries rarely visited by yellow fever. Native New Orleanians, however, typically caught yellow fever during a mild year as children and were then immune to the disease for the rest of their lives. For this reason, yellow fever earned the name the “stranger’s disease.”

Data from the full count of the 1880 census show that yellow fever fatality rates during an individual’s year of birth negatively affected adult occupational status, but only for individuals with foreign-born mothers. Those with US-born mothers were relatively unaffected by the disease. There are also effects for those who were exposed to yellow fever one or two years after their birth, but there are no effects, not even for those with immigrant mothers, for those exposed to yellow fever three or four years after their birth. These results suggest that early-life exposure to infectious disease, not just city-wide responses to disease, influenced human capital development.
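The kind of specification implied by this comparison can be sketched as an interaction regression; the Python/statsmodels example below uses hypothetical variable names (occ_score, yf_rate_birth_year, foreign_mother) and omits the further controls used in the article.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical sample of adult men linked in the 1880 full-count census.
men = pd.read_csv("census_1880_linked.csv")

# Occupational score on yellow-fever fatality in the birth city-year,
# interacted with having a foreign-born mother; city and cohort dummies
# absorb level differences across places and birth years.
model = smf.ols(
    "occ_score ~ yf_rate_birth_year * foreign_mother"
    " + C(birth_city) + C(birth_year)",
    data=men,
).fit(cov_type="cluster", cov_kwds={"groups": men["birth_city"]})
print(model.summary())
```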

 


 

Martin Saavedra

Martin.Saavedra@oberlin.edu