Corporate Social Responsibility for workers: Pirelli (1950-1980)

by Ilaria Suffia (Università Cattolica, Milan)

This blog is part of our EHS Annual Conference 2020 Blog Series.

 

 

Pirelli headquarters in Milan’s Bicocca district. Available at Wikimedia Commons.

Corporate social responsibility (CSR) in relation to the workforce has generated extensive academic and public debate. In this paper I evaluate Pirelli’s approach to CSR, by exploring its archives over the period 1950 to 1967.

Pirelli, founded in Milan by Giovanni Battista Pirelli in 1872, introduced industrial welfare for its employees and their families from its inception. In 1950, it deepened its relationship with them by publishing ‘Fatti e Notizie’ [Events and News], the company’s in-house newspaper. The journal was intended to share information with workers at all levels and, above all, to strengthen relationships within the ‘Pirelli family’.

Pirelli’s industrial welfare began in the 1870s and, by the end of the decade, a mutual aid fund and some institutions for its employees’ families (a kindergarten and a school) were established. Over the next 20 years, the company laid the basis of its welfare policy, which encompassed three main features: a series of ‘workplace’ protections, including accident and maternity assistance; ‘family assistance’, including (in addition to the kindergarten and school) seasonal care for children; and, finally, a commitment to the professional training of its workers.

In the 1920s, the company’s welfare provision expanded. In 1926, Pirelli created a health care service for the whole family and, in the same period, sport, culture and ‘free time’ activities became the main pillars of its CSR. Pirelli also provided houses for its workers, best exemplified by the ‘Pirelli Village’ of 1921. After 1945, Pirelli continued its welfare policy: the company started a new programme of construction of workers’ houses (based on national provision), expanding its Village, and founded a professional training institute dedicated to Piero Pirelli. The establishment in 1950 of the company journal, ‘Fatti e Notizie’, can be considered part of Pirelli’s welfare activities.

‘Fatti e Notizie’ was designed to improve internal communication about the company, especially among Pirelli’s workers. Subsequently, Pirelli also introduced in-house articles on current news and special pieces on economics, law and politics. My analysis of ‘Fatti e Notizie’ demonstrates that welfare news initially occupied about 80 per cent of coverage, but after the mid-1950s this share declined, falling to 50 per cent by the late 1960s.

The welfare articles indicate that the type of communication depended on subject matter. Thus, health care, news about colleagues, sport and culture were mainly ‘instructive’, reporting information and keeping readers up to date with events. ‘Official’ communications on subjects such as CEO reports and financial statements utilised ‘top to bottom’ articles. Cooperation, often reinforced with propaganda language, was promoted for accident prevention and workplace safety. Moreover, this kind of communication was applied to ‘bottom to top’ messages, such as an ‘ideas box’ in which workers presented their suggestions to improve production processes or safety.

My analysis shows that the communication model implemented by Pirelli moved from capitulation (where the managerial view prevailed) in the 1950s to trivialisation (dealing only with ‘neutral’ topics) from the 1960s.

 

 

Ilaria Suffia: ilaria.suffia@unicatt.it

Infant and child mortality by socioeconomic status in early nineteenth century England

by Hannaliis Jaadla (University of Cambridge)

The full article from this blog (co-authored with E. Potter, S. Keibek, and R. J. Davenport) was published in The Economic History Review and is now available on Early View at this link

Figure 1. Thomas George Webster, ‘Sickness and health’ (1843). Source: The Wordsworth Trust, licensed under CC BY-NC-SA

Socioeconomic gradients in health and mortality are ubiquitous in modern populations. Today life expectancy is generally positively correlated with individual or ecological measures of income, educational attainment and status within national populations. However, in stark contrast to these modern patterns, there is little evidence for such pervasive advantages of wealth to survival in historical populations before the nineteenth century.

In this study, we tested whether a socioeconomic gradient in child survival was already present in early nineteenth-century England using individual-level data on infant and child mortality for eight parishes from the Cambridge Group family reconstitution dataset (Wrigley et al. 1997). We used the paternal occupational descriptors routinely recorded in the Anglican baptism registers for the period 1813–1837 to compare infant (under 1) and early childhood (ages 1–4) mortality by social status. To capture differences in survivorship we compared multiple measures of status: HISCAM, HISCLASS, and a continuous measure of wealth estimated by ranking paternal occupations by the propensity for their movable wealth to be inventoried upon death (Keibek 2017). The main analytical tool was event history analysis, in which individuals were followed from baptism or birth through the first five years of life, or until their death or departure from the sample for other reasons.
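The event-history setup described above can be sketched in miniature. The example below uses entirely made-up cohorts (not the Cambridge Group reconstitution data) and a simple Kaplan-Meier estimator rather than the full model with controls, but it illustrates the core idea: each child is followed from birth until death or censoring, and survival to age five is compared across paternal status groups.

```python
# Minimal sketch of the event-history approach, on hypothetical data.
# Each observation is (age at exit in years, died flag); children who
# leave the sample alive are right-censored.

def kaplan_meier(observations):
    """Kaplan-Meier estimate of surviving past the last observed death age.

    observations: list of (age_at_exit, died) tuples, died in {0, 1}.
    """
    event_ages = sorted({age for age, died in observations if died})
    survival = 1.0
    for t in event_ages:
        at_risk = sum(1 for age, _ in observations if age >= t)
        deaths = sum(1 for age, died in observations if age == t and died)
        survival *= 1 - deaths / at_risk
    return survival

# Hypothetical cohorts of 100 children each: most censored alive at age 5,
# some dying at ages 1 or 3. The numbers are placeholders, not estimates.
labourers = [(5, 0)] * 90 + [(1, 1)] * 6 + [(3, 1)] * 4
traders = [(5, 0)] * 85 + [(1, 1)] * 10 + [(3, 1)] * 5

print(f"Survival to age 5, labourers' children: {kaplan_meier(labourers):.3f}")
print(f"Survival to age 5, traders' children:   {kaplan_meier(traders):.3f}")
```

The published analysis used hazard models with covariates for status, parish, and birth intervals; this stripped-down estimator only conveys the follow-up-until-death-or-censoring logic.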

Were socioeconomic differentials in mortality present in the English population by the early nineteenth century, as suggested by theorists of historical social inequalities (Antonovsky 1967; Kunitz 1987)? Our results provide a qualified yes. We did detect differentials in child survival by paternal or household wealth in the first five years of life. However, the effects of wealth were muted and non-linear. Instead we found a U-shaped relationship between paternal social status and survival, with the children of poor labourers or wealthier fathers enjoying relatively high survival chances. Socioeconomic differentials emerged only after the first year of life (when mortality rates were highest), and were strongest at age one. Summed over the first five years of life, however, the advantages of wealth were marginal. Furthermore, the advantages of wealth were only observed once the anomalously low mortality of labourers’ children was taken into account.

As might be expected, these results provide evidence for the contribution of both environment and household or familial factors. In infancy, mortality varied between parishes; however, the environmental hazards associated with industrialising or urban settlements appear to have operated fairly equally on households of differing socioeconomic status. It is likely that most infants in our eight reconstitution parishes were breastfed throughout the first year of life, which probably conferred a ubiquitous advantage that overwhelmed other material differences in household conditions, for example, maternal nutrition.

To the extent that wealth conferred a survival advantage, did it operate through access to information, or to material resources? There was no evidence that literacy was important to child survival. However, our results suggest that cultural practices surrounding weaning may have been key. This was indicated by the peculiar age pattern of the socioeconomic gradient to survival, which was strongest in the second year of life, the year in which most children were weaned. We also found a marked survival advantage of longer birth intervals post-infancy, and this advantage accrued particularly to labourers’ children, because their mothers had longer than average birth intervals.

Our findings point to the importance of breastfeeding patterns in modulating the influence of socioeconomic status on infant and child survival. Breastfeeding practices varied enormously in historical populations, both geographically and by social status (Thorvaldsen 2008). These variations, together with the differential sorting of social groups into relatively healthy or unhealthy environments, probably explains the difficulty in pinpointing the emergence of socioeconomic gradients in survival, especially in infancy.

At ages 1–4 years we were able to demonstrate that the advantages of wealth and of a labouring father operated even at the level of individual parishes. That is, these advantages were not simply a function of the sorting of classes or occupations into different environments. These findings therefore implicate differences in household practices and conditions in the survival of children in our sample. This was clearest in the case of labourers. Labourers’ children enjoyed higher survival rates than predicted by household wealth, and this was associated with longer birth intervals (consistent with longer breastfeeding), as well as other factors that we could not identify, but which were probably not a function of rural isolation within parishes. Why labouring households should have differed in these ways remains unexplained.

To contact the author:  Hj309@cam.ac.uk

References

Antonovsky, A., ‘Social class, life expectancy and overall mortality’, Milbank Memorial Fund Quarterly, 45 (1967), pp. 31–73.

Keibek, S. A. J., ‘The male occupational structure of England and Wales, 1650–1850’, (unpub. Ph.D. thesis, Univ. of Cambridge, 2017).

Kunitz, S.J., ‘Making a long story short: a note on men’s height and mortality in England from the first through the nineteenth centuries’, Medical History, 31 (1987), pp. 269–80.

Thorvaldsen, G., ‘Was there a European breastfeeding pattern?’ History of the Family, 13 (2008), pp. 283–95.

Real urban wage in an agricultural economy without landless farmers: Serbia, 1862-1910

by Branko Milanović (City University of New York and LSE)

This blog is based on a forthcoming article in The Economic History Review

Railway construction workers, ca.1900.

Calculations of historical welfare ratios (wages expressed in relation to the subsistence needs of a wage-earner’s family) exist for many countries and time periods. The original methodology was developed by Robert Allen (2001). The objective of real wage studies is not only to estimate real wages but also to assess living standards before the advent of national accounts. This methodology has been employed to address key questions in economic history: income divergence between Northern Europe and China (Li and van Zanden, 2012; Allen, Bassino, Ma, Moll-Murata, and van Zanden, 2011); the “Little Divergence” (Pamuk 2007); development of North v. South America (Allen, Murphy and Schneider, 2012); and even the causes of the Industrial Revolution (Allen 2009; Humphries 2011; Stephenson 2018, 2019).

We apply this methodology to Serbia between 1862 and 1910 to consider the extent to which small, peasant-owned farms and backward agricultural technology can be used to approximate real income. Further, we develop debates on North v. South European divergence by focusing on Serbia (a South-East European country), in contrast to previous studies which focus on Mediterranean countries (Pamuk 2007; Losa and Zarauz, forthcoming). This approach allows us to formulate a hypothesis regarding the social determination of wages.

Using Serbian wage and price data from 1862 to 1910, we calculate welfare ratios for unskilled (ordinary) and skilled (construction) urban workers. We use two different baskets of goods for wage comparison: a ‘subsistence’ basket that includes a very austere diet, clothing and housing needs, but no alcohol, and a ‘respectability’ basket, composed of a greater quantity and variety of goods, including alcohol. We modify some of the usual assumptions found in the literature to better reflect the economic and demographic conditions of Serbia in the second half of the 19th century. Based on contemporary sources, we assume that the ‘work year’ was 200, not 250, days, and that the average family size was six, not four. Both assumptions reduce the level of the welfare ratio, but do not affect its evolution.
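The arithmetic behind these assumptions can be shown with a toy calculation in the spirit of Allen (2001). All wage and price figures below are made-up placeholders, not the paper’s data; the point is only how the 200-day work year and six-person household mechanically lower the welfare ratio relative to the standard 250-day, four-person assumptions.

```python
# Illustrative welfare-ratio calculation (hypothetical numbers throughout).

def welfare_ratio(daily_wage, working_days, basket_cost_per_person,
                  household_size):
    """Annual earnings divided by the household's annual basket cost."""
    annual_income = daily_wage * working_days
    annual_needs = basket_cost_per_person * household_size
    return annual_income / annual_needs

# Placeholder figures: a wage of 1.5 dinars/day against a subsistence
# basket costing 40 dinars per person per year.
serbian_assumptions = welfare_ratio(1.5, 200, 40.0, 6)    # 300 / 240 = 1.25
standard_assumptions = welfare_ratio(1.5, 250, 40.0, 4)   # 375 / 160 ≈ 2.34

print(f"Welfare ratio (200 days, household of 6): {serbian_assumptions:.2f}")
print(f"Welfare ratio (250 days, household of 4): {standard_assumptions:.2f}")
```

With identical wages and prices, the country-specific assumptions nearly halve the measured ratio, which is why the authors stress that levels, though not trends, depend on these choices.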

We find that the urban wage of unskilled workers was, on average, about 50 per cent higher than the subsistence basket for the family (Figure 1), and remained broadly constant throughout the period. This result confirms the absence of modern economic growth in Serbia (at least as far as the low-income population is concerned), and indicates economic divergence between South-East and Western Europe. Serbia diverged from Western Europe’s standard of living during the second half of the 19th century: in 1860 the welfare ratio in London was about three times that of urban Serbia, but by 1907 this gap had widened to more than five to one (Figure 1).

Figure 1. Welfare ratio (using subsistence basket), urban Serbia 1862-1910. Note: Under the assumptions of 200 working days per year, household size of 6, and inclusive of the daily food and wine allowance provided by the employer. Source: as per article.

 

In contrast, the welfare ratio of skilled construction workers was 20 to 30 per cent higher in the 1900s than in the 1860s (Figure 1). This trend reflects modest economic progress as well as an increase in the skill premium, which has also been observed for Ottoman Turkey (Pamuk 2016).

The wages of ordinary workers appear to move more closely with the ‘subsistence basket’, whereas the wages of construction (skilled) workers seem to vary with the cost of the ‘respectability basket’. This leads us to hypothesize that the wages of both groups of workers were implicitly “indexed” to different baskets, reflecting the different value of the work done by each group.

Our results provide further insights into economic conditions in the 19th-century Balkans, and generate searching questions about the assumptions used in Allen-inspired work on real wages. The standard assumptions of 250 working days per annum and a ‘typical’ family size of four may be undesirable for comparative purposes. The ultimate objective of real wage/welfare ratio studies is to provide more accurate assessments of real incomes between countries. Consequently, the assumptions underlying welfare ratios need to be country-specific.

To contact the author: bmilanovic@gc.cuny.edu

https://twitter.com/BrankoMilan

 

REFERENCES

Allen, Robert C. (2001), “The Great Divergence in European Wages and Prices from the Middle Ages to the First World War”, Explorations in Economic History, October.

Allen, Robert C. (2009), The British Industrial Revolution in Global Perspective, New Approaches to Economic and Social History, Cambridge.

Allen Robert C., Jean-Pascal Bassino, Debin Ma, Christine Moll-Murata and Jan Luiten van Zanden (2011), “Wages, prices, and living standards in China, 1738-1925: in comparison with Europe, Japan, and India”.  Economic History Review, vol. 64, pp. 8-36.

Allen, Robert C., Tommy E. Murphy and Eric B. Schneider (2012), “The colonial origins of the divergence in the Americas: A labor market approach”, Journal of Economic History, vol. 72, no. 4, December.

Humphries, Jane (2011), “The Lure of Aggregates and the Pitfalls of the Patriarchal Perspective: A Critique of the High-Wage Economy Interpretation of the British Industrial Revolution”, Discussion Papers in Economic and Social History, University of Oxford, No. 91.

Li, Bozhong and Jan Luiten van Zanden (2012), “Before the Great Divergence: Comparing the Yangzi delta and the Netherlands at the beginning of the nineteenth century”, Journal of Economic History, vol. 72, No. 4, pp. 956-989.

Losa, Ernesto Lopez and Santiago Piquero Zarauz, “Spanish Subsistence Wages and the Little Divergence in Europe, 1500-1800”, European Review of Economic History, forthcoming.

Pamuk, Şevket (2007), “The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600”, European Review of Economic History, vol. 11, 2007, pp. 280-317.

Pamuk, Şevket (2016),  “Economic Growth in Southeastern Europe and Eastern Mediterranean, 1820-1914”, Economic Alternatives, No. 3.

Stephenson, Judy Z. (2018), “ ‘Real’ wages? Contractors, workers, and pay in London building trades, 1650–1800’,  Economic History Review, vol. 71 (1), pp. 106-132.

Stephenson, Judy Z. (2019), “Working days in a London construction team in the eighteenth century: evidence from St Paul’s Cathedral”, The Economic History Review, published 18 September 2019. https://onlinelibrary.wiley.com/doi/abs/10.1111/ehr.12883.

 

 

Early-life disease exposure and occupational status

by Martin Saavedra (Oberlin College and Conservatory)

This blog is part H of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. This blog is based on the article ‘Early-life disease exposure and occupational status: The impact of yellow fever during the 19th century’, in Explorations in Economic History, 64 (2017): 62-81. https://doi.org/10.1016/j.eeh.2017.01.003   

 

A girl suffering from yellow fever. Watercolour. Available at Wellcome Images.

Epidemics, like other shocks to public health, have the potential to affect human capital accumulation. A literature in health economics known as the ‘fetal origins hypothesis’ has examined how in utero exposure to infectious disease affects labor market outcomes. Individuals may be more sensitive to health shocks during the developmental stage of life than during later stages of childhood. For good reason, much of this literature focuses on the 1918 influenza pandemic, which was a huge shock to mortality and one of the few events that can be visibly observed when examining life expectancy trends in the United States. However, there are limitations to looking at the 1918 influenza pandemic because it coincided with the First World War. Another complication in this literature is that cities with outbreaks of infectious disease often engaged in many forms of social distancing by closing schools and businesses. This is true for the 1918 influenza pandemic, but also for other diseases. For example, many schools were closed during the polio epidemic of 1916.

So, how can we estimate the long-run effects of infectious disease when cities simultaneously respond to outbreaks? One possibility is to look at a disease that differentially affected some groups within the same city, such as yellow fever during the nineteenth century. Yellow fever is a viral infection that spreads from the Aedes aegypti mosquito and is still endemic in parts of Africa and South America.  The disease kills roughly 50,000 people per year, even though a vaccine has existed for decades. Symptoms include fever, muscle pain, chills, and jaundice, from which the disease derives its name.

During the eighteenth and nineteenth centuries, yellow fever plagued American cities, particularly port cities that traded with Caribbean islands. In 1793, over 5,000 Philadelphians likely died of yellow fever. This would be a devastating number in any city, even by today’s standards, but it is even more so when considering that in 1790 Philadelphia had a population of less than 29,000.

By the mid-nineteenth century, Southern port cities grew, and yellow fever stopped occurring in cities as far north as Philadelphia. The graph below displays the number of yellow fever fatalities in four southern port cities — New Orleans, LA; Mobile, AL; Charleston, SC; and Norfolk, VA — during the nineteenth century. Yellow fever was sporadic, devastating a city in one year and often leaving it untouched in the next. For example, yellow fever killed nearly 8,000 New Orleanians in 1853, and over 2,000 in both 1854 and 1855. The next two years, yellow fever killed fewer than 200 New Orleanians per year; then it came back, killing over 3,500 in 1858. Norfolk, VA was only struck once, in 1855. Since yellow fever never struck Norfolk during milder years, the population lacked immunity and approximately 10 percent of the city died in 1855. Charleston and Mobile show similar sporadic patterns. Likely due to the Union’s naval blockade, yellow fever did not visit any American port cities in large numbers during the Civil War.

 

Source: As per original article.

 

Immigrants were particularly prone to yellow fever because they often came from European countries rarely visited by yellow fever. Native New Orleanians, however, typically caught yellow fever during a mild year as children and were then immune to the disease for the rest of their lives. For this reason, yellow fever earned the name the “stranger’s disease.”

Data from the full count of the 1880 census show that yellow fever fatality rates during an individual’s year of birth negatively affected adult occupational status, but only for individuals with foreign-born mothers. Those with US-born mothers were relatively unaffected by the disease. There are also effects for those who were exposed to yellow fever one or two years after their birth, but there are no effects, not even for those with immigrant mothers, for exposure three or four years after birth. These results suggest that early-life exposure to infectious disease, not just city-wide responses to disease, influenced human capital development.

 


 

Martin Saavedra

Martin.Saavedra@oberlin.edu

 

Unequal access to food during the nutritional transition: evidence from Mediterranean Spain

by Francisco J. Medina-Albaladejo & Salvador Calatayud (Universitat de València).

This article is forthcoming in the Economic History Review.

 

Figure 1 – General pathology ward, Hospital General de Valencia (Spain), 1949. Source: Consejo General de Colegios Médicos de España. Banco de imágenes de la medicina española. Real Academia Nacional de Medicina de España. Available here.

Over the last century, European historiography has debated whether industrialisation brought about an improvement in working-class living standards. Multiple demographic, economic, anthropometric and wellbeing indicators have been examined in this regard, but it was Eric Hobsbawm (1957) who, in the late 1950s, incorporated food consumption patterns into the analysis.

Between the mid-19th century and the first half of the 20th, the diet of European populations underwent radical changes. Caloric intake increased significantly, and cereals were to a large extent replaced by animal proteins and fat, resulting from a substantial increase in meat, milk, egg and fish consumption. This transformation was referred to by Popkin (1993) as the ‘nutritional transition’.

These dietary changes were driven, inter alia, by the evolution of income levels, which raises the possibility that significant inequalities between different social groups ensued. Dietary inequalities between different social groups are a key component in the analysis of inequality and living standards; they directly affect mortality, life expectancy, and morbidity. However, this hypothesis remains unproven, as historians are still searching for adequate sources and methods with which to measure the effects of dietary changes on living standards.

This study contributes to the debate by analysing a relatively untapped source: hospital diets. We have analysed the diet of psychiatric patients and members of staff in the main hospital of the city of Valencia (Spain) between 1852 and 1923. The diet of patients depended on their social status and the amounts they paid for their upkeep. ‘Poor psychiatric patients’ and abandoned children, who paid no fee, were fed according to hospital regulations, whereas ‘well-off psychiatric patients’ paid a daily fee in exchange for a richer and more varied diet. There were also differences among members of staff, with nuns receiving a richer diet than other personnel (launderers, nurses and wet-nurses). We think that our source broadly reflects dietary patterns of the Spanish population and the effect of income levels thereon.

Figure 2 illustrates some of these differences in terms of animal-based caloric intake in each of the groups under study. Three population groups can be clearly distinguished: ‘well-off psychiatric patients’ and nuns, whose diet already presented some of the features of the nutritional transition by the mid-19th century, including fewer cereals and a meat-rich diet, as well as the inclusion of new products, such as olive oil, milk, eggs and fish; hospital staff, whose diet was rich in calories, to compensate for their demanding jobs, but still traditional in structure, being largely based on cereals, legumes, meat and wine; and, finally, ‘poor psychiatric patients’ and abandoned children, whose diet was poorer and which, by the 1920s, had barely joined the trends that characterised the nutritional transition.

 

Figure 2. Percentage of animal calories in the daily average diet by population groups in the Hospital General de Valencia, 1852-1923 (%). Source: as per original article.

 

In conclusion, the nutritional transition was not a homogeneous process affecting all diets at the same time or at the same pace. On the contrary, it was a process marked by social difference, and the progress of dietary changes was largely determined by social factors. By the mid-19th century, the diet structure of well-to-do social groups resembled diets that were more characteristic of the 1930s, while less favoured and intermediate social groups had to wait until the early 20th century before they could incorporate new foodstuffs into their diet. As this sequence clearly indicates, less favoured social groups always lagged behind.

 

References

Medina-Albaladejo, F. J. and Calatayud, S., “Unequal access to food during the nutritional transition: evidence from Mediterranean Spain”, Economic History Review, (forthcoming).

Hobsbawm, E. J., “The British Standard of Living, 1790-1850”, Economic History Review, 2nd ser., X (1957), pp. 46-68.

Popkin B. M., “Nutritional Patterns and Transitions”, Population and Development Review, 19, 1 (1993), pp. 138-157.

Airborne diseases: Tuberculosis in the Union Army

by Javier Birchenall (University of California, Santa Barbara)

This is Part F of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. The full article from this blog was published in ‘Explorations in Economic History’ and is available here

1910 advertising postcard for the National Association for the Prevention of Tuberculosis. 

Tuberculosis (TB) is one of the oldest and deadliest diseases. Traces of TB in humans can be found as early as 9,000 years ago, and written accounts date back 3,300 years in India. Untreated, TB’s case-fatality rate is as high as 50 percent. It was a dreaded disease. TB is an airborne disease caused by the bacterium Mycobacterium tuberculosis. Tuberculosis spreads through the air when a person who has an active infection coughs, sneezes, speaks, or sings. Most cases remain latent and do not develop symptoms. Activation of tuberculosis is particularly influenced by undernutrition.

Tuberculosis played a prominent role in the secular mortality decline. Of the 27 years of life expectancy gained in England and Wales between 1871 and 1951, TB accounts for about 40 percent of the improvement, a 12-year gain. Modern medicine, the usual suspect used to explain this mortality decline, could not have been the culprit. As Thomas McKeown famously pointed out, TB mortality started its decline long before the tubercle bacillus was identified and long before an effective treatment was available (Figure 1). McKeown viewed improvements in economic and social conditions, especially improved diets, as the principal factor in combatting tuberculosis. A healthy diet, however, is not the only factor behind nutritional status. Infections, no matter how mild, reduce nutritional status and increase susceptibility to further infection.

Figure 1. Mortality rate from TB.


Source: as per original article

In “Airborne Diseases: Tuberculosis in the Union Army” I studied the determinants of diagnosis, discharge, and mortality from tuberculosis in the past. I examined the medical histories of 25,000 soldiers and veterans in the Union Army using data collected under the direction of Robert Fogel. The Civil War brought together soldiers from many socioeconomic conditions and ecological backgrounds into an environment which was ideal for the spread of this disease. The war also provided a unique setting to examine many of the factors which were  likely responsible for the decline in TB mortality. Before enlistment, individuals had differential exposure to harmful dust and fumes. They also faced different disease environments and living conditions. By housing recruits in confined spaces, the war exposed soldiers to a host of waterborne and airborne infections. In the Civil War, disease was far more deadly than battle.

The Union Army data contain detailed medical records and measures of nutritional status. Height at enlistment measures net nutritional experiences at early ages. Weight, needed to measure current nutritional status using the Body Mass Index (BMI), is available for war veterans. My estimates use a hazard model and a variety of controls aligned with existing explanations proposed for the decline in TB prevalence and fatality rates. By how much would the diagnosis of TB have declined if the average Union Army soldier had the height of the current U.S. male population, and if all his relevant infections diagnosed prior to TB were eradicated? Figure 2 presents the contribution of the predictors of TB diagnosis in soldiers who did not engage in battle, and Figure 3 reports soldiers discharged because of TB. Nutritional experiences in early life provided a protective effect against TB. Between 25 and 50 per cent of the predictable decline in tuberculosis could be associated with the modern increase in height. Declines in the risk of waterborne and airborne diseases are as important as the predicted changes in height.

 

Figure 2. Contribution of various factors to the decline in TB diagnosis

Source: as per original article

 

Figure 3. Contribution of various factors to the decline in discharges because of TB.

Source: as per original article

My analysis showed that a wartime diagnosis of TB increased the risk of tuberculosis mortality. Because of the chronic nature of the disease, infected soldiers likely developed a latent or persistent infection that remained active until resistance failed at old age. Nutritional status provided some protection against mortality. For veterans, height was not as robust a predictor as BMI. If a veteran’s BMI increased from its historical value of 23 to current levels of 27, his mortality risk from tuberculosis would have been reduced by 50 per cent. Overall, the contribution of changes in ‘pure’ diets and changes in infectious disease exposure was probably equal.

What lessons can be drawn for the current Covid-19 pandemic? Covid-19 is also an airborne disease. Airborne diseases (e.g., influenza, measles, smallpox, and tuberculosis) are difficult to control. In unfamiliar populations, they often wreak havoc. But influenza, measles, smallpox, and tuberculosis are mostly killers from the past. The findings in my paper suggest that the conquest of tuberculosis happened through both individual and public health efforts. Improvements in diets and public health worked simultaneously and synergistically. There was no silver bullet to defeat the great white plague, tuberculosis. Diets are no longer as inadequate as in the past. Still, Covid-19 has exposed differential susceptibility to the disease. Success in combatting Covid-19 is likely to require simultaneous and synergistic private and public efforts.

from VOX – Men

by Victoria Baranov (University of Melbourne), Ralph De Haas (EBRD, CEPR, and Tilburg University) and Pauline Grosjean (University of New South Wales). More information on the authors below.

The content of this article was originally published on VOX and has been published here with the authors’ consent.


 

NSW1834
Mitchell, T 1834, To the Right Honorable Edward Geoffrey Smith Stanley this map of the Colony of New South Wales, ca. 1:540 000, National Library of Australia. Available here.

 

Why are men three times as likely as women to die from suicide? And why do many unemployed men refuse to apply for jobs that are typically done by women? This column argues that a better understanding of masculinity norms – the rules and standards that guide and constrain men’s behavior in society – can help answer important questions like these. We present evidence from Australia on how historical circumstances have instilled strong and persistent masculine identities that continue to influence outcomes related to male health; violence, suicide, and bullying; attitudes towards homosexuals; and occupational gender segregation.

 

What makes a ‘real’ man? According to traditional gender norms, men ought to be self-reliant, assertive, competitive, violent when needed, and in control of their emotions (Mahalik et al., 2003). Two current debates illustrate how such masculinity norms have profound economic and social impacts. First, in many countries, men die younger than women and are consistently less healthy. Masculinity norms, especially a penchant for violence and risk taking, are an important cultural driver of this gender health gap (WHO, 2013). A second debate links masculinity norms to occupational gender segregation. Technological progress and globalization have disproportionately affected male employment. Yet, many newly unemployed men refuse to fill jobs that do not match their self-perceived gender identity (Akerlof and Kranton, 2000).

The extent to which men are expected to conform to stereotypical masculinity norms nevertheless differs across societies. This raises the question: where do masculinity norms come from? The origins of gender norms about women have been the focus of a vibrant literature (Giuliano, 2018). By contrast, the origins of norms that guide and constrain the behavior of men have received no attention in the economics literature.

In recent research, we argue that strict masculinity norms can emerge in response to highly skewed sex ratios (the number of males relative to females) which intensify competition among men (Baranov, De Haas and Grosjean, 2020). When the sex ratio is more male biased, male-male competition for scarce females is more intense. This competition can intensify violence, bullying, and intimidating behavior (e.g. bravado), which, once entrenched in local culture, continue to manifest themselves in present-day outcomes long after sex ratios have normalized. We test this hypothesis using data from a unique natural experiment: the convict colonization of Australia.

 

NSW – Convicts Applications to Marry

 

Australia as a historical experiment

To establish a causal link from sex ratios to the manifestation of masculinity norms, we exploit the convict colonization of Australia. Between 1787 and 1868, Britain transported 132,308 convict men but only 24,960 convict women to Australia. Convicts were not confined to prisons but allocated across the colonies in a highly centralized manner. This created a variegated spatial pattern in sex ratios, and consequently in local male-to-male competition, in an otherwise homogeneous setting.

Convicts and ex-convicts represented the majority of the colonial population in Australia well into the mid-19th century. Voluntary migration was limited and mainly involved men migrating in response to male-biased economic opportunities available in agriculture and, after the discovery of gold in the 1850s, mining. Because of the predominance of male convicts and migrants, biased population sex ratios endured for over a century (Figure 1).

Figure 1

 

Identifying the lasting impact of skewed sex ratios

We regress present-day manifestations of masculinity norms, including violent behavior, bullying, and stereotypically male occupational choice, on historical sex ratios, collected from the first reliable census in each Australian state (see also Grosjean and Khattar, 2019). An empirical challenge is that variation in historical sex ratios could reflect unobservable characteristics. To tackle this, we instrument the historical sex ratio with the sex ratio among convicts only. This instrument is highly relevant, since most of the white Australian population initially consisted of convicts. Moreover, convicts were not free to move: a centralized assignment scheme determined their location as a function of labor needs, which we control for using initial economic specialization. Throughout the analysis, we also control for time-invariant geographic and historical characteristics as well as key present-day controls (sex ratio, population, and urbanization).
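
The two-stage least squares logic behind this instrumenting strategy can be sketched with simulated data (all variable names and parameter values here are invented for illustration; this is not the authors' specification or dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# Hypothetical data: z = convict sex ratio (instrument),
# x = overall historical sex ratio (endogenous), y = outcome (e.g. assault rate)
z = rng.normal(size=n)
u = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + 1.0 * u + rng.normal(size=n)   # sex ratio driven by instrument + confounder
y = 0.3 * x + 1.0 * u + rng.normal(size=n)   # true causal effect of x on y is 0.3

def two_sls(y, x, z):
    """Two-stage least squares with one endogenous regressor and one
    instrument (constant included in both stages)."""
    Z = np.column_stack([np.ones_like(z), z])
    # First stage: fit x on the instrument, keep fitted values
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    # Second stage: regress y on the fitted values; return the slope
    return np.linalg.lstsq(X_hat, y, rcond=None)[0][1]

beta_iv = two_sls(y, x, z)          # close to the true effect 0.3
beta_ols = np.polyfit(x, y, 1)[0]   # biased upward by the confounder
print(beta_iv, beta_ols)
```

The naive OLS slope absorbs the confounder's influence, while the instrumented estimate recovers a value near the true coefficient, which is the point of using the (plausibly exogenous) convict sex ratio.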

 

Masculinity norms among Australian men today

Using the above empirical strategy, we derive four sets of results:

1. Violence, suicide, and health

We first assess the impact of historically skewed sex ratios on present-day violence and health outcomes. Evidence suggests that men adhering to traditional masculinity norms attach a stronger stigma to mental health problems and tend to avoid health services. As a proxy for the avoidance of preventative health care we use local suicide and prostate cancer rates. Prostate cancer is often curable if treated early, but avoidance of diagnosis is a public health concern. The endorsement of strict masculinity norms is also associated with aggression, excessive drinking, and smoking.

Our estimates show that today, the rates of assault and sexual assault are higher in parts of Australia that were more male biased in the past. A one unit increase in the historical sex ratio (defined as the ratio of the number of men over the number of women) is associated with an 11 percent increase in the rate of assault and a 16 percent increase in sexual assaults. We also find strong evidence of elevated rates of male suicide, prostate cancer, and lung disease in these areas. For male suicide – the leading cause of death for Australian men under 45 – a one unit increase in the historical sex ratio is associated with a staggering 26 percent increase.

2. Occupational gender segregation

A second manifestation of male identity is occupational choice. Our results paint a striking picture. A one unit increase in the sex ratio is associated with a nearly 1 percentage point shift from the share of men employed in neutral (e.g. real estate, retail) or stereotypically female occupations (e.g. teachers, receptionists) to stereotypically male occupations (e.g. carpenters, metal workers).

3. Support for same-sex marriage

We capture the political expression of masculine identity by opposition against same-sex marriage, which we measure using voting records from the nation-wide referendum on same-sex marriage in 2017. Our results show that the share of votes in favor of marriage equality is substantially lower in areas where sex ratios were more male biased in the past. A one unit increase in the historical sex ratio is associated with a nearly 3 percentage point decrease in support for same-sex marriage. This is slightly over 6 percent of the mean.

4. Bullying

Lastly, we find that boys, but not girls, are more likely to be bullied at school in areas that used to be more male biased in the past. The magnitude of the results is considerable and in line with the magnitude of the results for assaults (measured in adults). A one unit increase in the historical sex ratio is associated with a higher likelihood of parents (teachers) reporting bullying of boys by 13.7 (5.2) percentage points. This suggests that masculinity norms are perpetuated through horizontal transmission: peer pressure, starting at a young age in the playground.

 

Conclusions

We find that historically male-biased sex ratios forged a culture of male violence, help avoidance, and self-harm that persists to the present day in Australia. While our experimental setting is unique, we believe that our findings can inform the debate about the long-term socioeconomic consequences and risks of skewed sex ratios in many developing countries such as China, India, and parts of the Middle East. In these settings, sex-selective abortion and mortality as well as the cultural relegation and seclusion of women have created societies with highly skewed sex ratios. Our results suggest that the masculinity norms that develop as a result may not only be detrimental to (future generations of) men themselves but can also have important repercussions for other groups in society, in particular women and sexual minorities.

Our findings also align with an extensive psychological and medical literature that connects traditional masculinity norms to an unwillingness among men to seek timely medical help or to engage in preventive health care and protective health measures (e.g. Himmelstein and Sanchez (2016) and Salgado et al. (2019)). This suggests that voluntary observance of health measures, such as social distancing during the COVID-19 pandemic, may be considerably lower among men who adhere to traditional masculinity norms.

 

References

Akerlof, George A., and Rachel E. Kranton (2000), Economics and Identity, Quarterly Journal of Economics 115(3), 715–753.

Baranov, Victoria, Ralph De Haas and Pauline Grosjean (2020), Men. Roots and Consequences of Masculinity Norms, CEPR Discussion Paper No. 14493, London.

Giuliano, Paola (2018), Gender: A Historical Perspective, The Oxford Handbook of Women and the Economy, Ed. Susan Averett, Laura Argys and Saul Hoffman. Oxford University Press, New York.

Grosjean, Pauline, and Rose Khattar (2019), It’s Raining Men! Hallelujah? The Long-Run Consequences of Male-Biased Sex Ratios, The Review of Economic Studies, 86(2), 723–754.

Himmelstein, M.S. and D.T. Sanchez (2016), Masculinity Impediments: Internalized Masculinity Contributes to Healthcare Avoidance in Men and Women, Journal of Health Psychology, 21, 1283–1292.

Mahalik, J.R., B.D. Locke, L.H. Ludlow, M.A. Diemer, R.P.J. Scott, M. Gottfried, and G. Freitas (2003), Development of the Conformity to Masculine Norms Inventory, Psychology of Men & Masculinity, 4(1), 3–25.

Salgado, D.M., A.L. Knowlton, and B.L. Johnson (2019), Men’s Health-Risk and Protective Behaviors: The Effects of Masculinity and Masculine Norms, Psychology of Men & Masculinities, 20(2), 266–275.

WHO (2013), Review of Social Determinants and the Health Divide in the WHO European Region, World Health Organization, Regional Office for Europe, Copenhagen.

 


 

Ralph De Haas, a Dutch national, is the Director of Research at the European Bank for Reconstruction and Development (EBRD) in London. He is also a part-time Associate Professor of Finance at Tilburg University, a CEPR Research Fellow, a Fellow at the European Banking Center, a Visiting Senior Fellow at the Institute of Global Affairs at the London School of Economics and Political Science, and a Research Associate at the ZEW–Leibniz Centre for European Economic Research. Ralph earned a PhD in economics from Utrecht University and is the recipient of the 2014 Willem F. Duisenberg Fellowship Prize. He has published in the Journal of Financial Economics; Review of Financial Studies; Review of Finance; Journal of International Economics, American Economic Journal: Applied Economics; the Journal of the European Economic Association and various other peer-reviewed journals. Ralph’s research interests include global banking, development finance and financial intermediation more broadly. He is currently working on randomized controlled trials related to financial inclusion in Morocco and Turkey.

Twitter: @ralphdehaas 

 

Pauline Grosjean is a Professor in the School of Economics at UNSW. Previously at the University of San Francisco and the University of California at Berkeley, she has also worked as an Economist at the European Bank for Reconstruction and Development. She completed her PhD in economics at the Toulouse School of Economics in 2006 after graduating from the Ecole Normale Supérieure. Her research studies the historical and dynamic context of economic development. In particular, she focuses on how culture and institutions interact and shape long-term economic development and individual behavior. She has published research that studies the historical process of a wide range of factors that are crucial for economic development, including cooperation and violence, trust, gender norms, support for democracy and for market reforms, immigration, preferences for education, and conflict.

 

Victoria Baranov’s research explores how health, psychological factors, and norms interact with poverty and economic development. Her recent work has focused on maternal depression and its implications for the intergenerational transmission of disadvantage. Her work has been published in the American Economic Review, American Economic Journal: Applied Economics, the Journal of Health Economics and other peer-reviewed journals across multiple disciplines. Victoria received her PhD in Economics from the University of Chicago after graduating from Barnard College. She is currently a Senior Lecturer in the Economics Department at the University of Melbourne and has affiliations with the Centre for Market Design, the Life Course Centre, and the Institute of Labor Economics (IZA).

Twitter: @VictoriaBaranov

The Long View on Epidemics, Disease and Public Health: Research from Economic History Part B*

This piece is the result of a collaboration between the Economic History Review, the Journal of Economic History, Explorations in Economic History and the European Review of Economic History. More details and special thanks below. Part A is available at this link 

A man and a woman with the bubonic plague, with its characteristic buboes on their bodies. Medieval painting from 1411.
 Everett Historical/Shutterstock

As the world grapples with a pandemic, informed views based on facts and evidence have become all the more important. Economic history is a uniquely well-suited discipline to provide insights into the costs and consequences of rare events, such as pandemics, as it combines the tools of an economist with the long perspective and attention to context of historians. The editors of the main journals in economic history have thus gathered a selection of the recently-published articles on epidemics, disease and public health, generously made available by publishers to the public, free of access, so that we may continue to learn from the decisions of humans and policy makers confronting earlier episodes of widespread disease and pandemics.

Generations of economic historians have studied disease and its impact on societies across history. However, as the discipline has continued to evolve with improvements in both data and methods, researchers have uncovered new evidence about episodes from the distant past, such as the Black Death, as well as more recent global pandemics, such as the Spanish Influenza of 1918. In this second instalment of The Long View on Epidemics, Disease and Public Health: Research from Economic History, the editors present a review of two major themes that have featured in the analysis of disease. The first includes articles that discuss the economic impacts of historical epidemics and the official responses they prompted. The second turns to the more optimistic story of the impact of public health regulation and interventions, and the benefits thereby generated.

 

Pieter Bruegel the Elder, The Triumph of Death (1562 ca.)

Epidemics and the Economy

How societies and economies are affected by repeated epidemics is a question that historians have struggled to understand. Paolo Malanima provides a detailed analysis of how Renaissance Italy was shaped by the impact of plague: ‘Italy in the Renaissance: A Leading Economy in the European Context, 1350–1550’. Economic History Review 71, no. 1 (2018): 3-30. The consequences of plague for Italy are explored in even more detail by Guido Alfani, who demonstrates that the peninsula struggled to recover after experiencing pervasive mortality during the seventeenth century: ‘Plague in Seventeenth-century Europe and the Decline of Italy: An Epidemiological Hypothesis’. European Review of Economic History 17, no. 4 (2013): 408-30. Epidemics cause multiple changes to the economic environment, which necessitate a multifaceted response by government. Samuel Cohn examines the oppressive nature of these reactions in his luminous study of the way European governments sought to prevent workers benefiting from the increased demand for their labour following the Black Death: ‘After the Black Death: Labour Legislation and Attitudes Towards Labour in Late-Medieval Western Europe’. The Economic History Review 60, no. 3 (2007): 457-85.

 

Josse Lieferinxe, Saint Sebastian Interceding for the Plague Stricken (1497 ca)

 

Public Health

Richard Easterlin’s panoramic overview of mortality shows that government policy was critical in reducing levels of mortality from the early nineteenth century. Economic growth by itself did not lift life expectancy. This major paper illuminates the essential contribution of public intervention to health in modern societies: ‘How Beneficent Is the Market? A Look at the Modern History of Mortality’. European Review of Economic History 3, no. 3 (1999): 257-94. Does strict health regulation save lives? Alan Olmstead and Paul Rhode respond to this question in the affirmative by explaining how the US federal government succeeded in lowering the spread of tuberculosis by establishing controls on cattle in the early part of the twentieth century. Their analysis has considerable contemporary relevance: only robust and universal controls saved lives: ‘The ‘Tuberculous Cattle Trust’: Disease Contagion in an Era of Regulatory Uncertainty’. The Journal of Economic History 64, no. 4 (2004): 929–63.

Human society has achieved enormous gains in life expectancy over the last two centuries. Part of the explanation for this improvement lies in key infrastructure. However, as Daniel Gallardo‐Albarrán demonstrates, this was not simply a question of ‘dig and save lives’: it was the combination of water supply and sewerage that mattered: ‘Sanitary Infrastructures and the Decline of Mortality in Germany, 1877–1913’. The Economic History Review (2020). One of the big goals of economic historians has been to measure the multiple benefits of public health interventions. Brian Beach, Joseph Ferrie, Martin Saavedra, and Werner Troesken provide a brilliant example of how novel statistical techniques allow us to determine the gains from one such intervention, water purification. They demonstrate that the long-term impacts of reducing levels of disease by improving water quality were large when measured in education and income, and not just in lives saved: ‘Typhoid Fever, Water Quality, and Human Capital Formation’. The Journal of Economic History 76, no. 1 (2016): 41–75. What allowed European societies largely to defeat tuberculosis (TB) in the second half of the twentieth century? In an ambitious paper, Sue Bowden, João Tovar Jalles, Álvaro Santos Pereira, and Alex Sadler show that a mix of factors explains the decline in TB: nutrition, living conditions, and the supply of healthcare: ‘Respiratory Tuberculosis and Standards of Living in Postwar Europe’. European Review of Economic History 18, no. 1 (2014): 57-81.

Thomas Rowlandson, The English Dance of Death (1815 ca)

This article was compiled by: 

 

If you wish to read further, other papers on this topic are available on the journal websites:

 

*  Special thanks to Leigh Shaw-Taylor, Cambridge University Press, Elsevier, Oxford University Press, and Wiley for their advice and support.

The Long View on Epidemics, Disease and Public Health: Research from Economic History, Part A

This piece is the result of a collaboration between the Economic History Review, the Journal of Economic History, Explorations in Economic History and the European Review of Economic History. More details and special thanks below. Part B can be found here

 

Exhibit depicting a miniature from a 14th century Belgium manuscript at the Diaspora Museum, Tel Aviv. Available at Wikimedia Commons.

As the world grapples with a pandemic, informed views based on facts and evidence have become all the more important. Economic history is a uniquely well-suited discipline to provide insights into the costs and consequences of rare events, such as pandemics, as it combines the tools of an economist with the long perspective and attention to context of historians. The editors of the main journals in economic history have thus gathered a selection of the recently-published articles on epidemics, disease and public health, generously made available by publishers to the public, free of access, so that we may continue to learn from the decisions of humans and policy makers confronting earlier episodes of widespread disease and pandemics.

Emergency hospital during influenza epidemic, Camp Funston, Kansas. Available at Wikimedia Commons.

Generations of economic historians have studied disease and its impact on societies across history. However, as the discipline has continued to evolve with improvements in both data and methods, researchers have uncovered new evidence about episodes from the distant past, such as the Black Death, as well as more recent global pandemics, such as the Spanish Influenza of 1918. We begin with a recent overview of scholarship on the history of premodern epidemics, and group the remaining articles thematically, into two short reading lists. The first consists of research exploring the impact of diseases in the most direct sense: the patterns of mortality they produce. The second group of articles explores the longer-term consequences of diseases for people’s health later in life.

Plague doctor. Available at Wellcome Collection.

 

Two men discovering a dead woman in the street during the Great Plague of London, 1665. Available at Wellcome Collection.

 

Patterns of Mortality

Emblems of mortality: death seizing all ranks and degrees of people, 1789. Available at Wikimedia Commons.

The rich and complex body of historical work on epidemics is carefully surveyed by Guido Alfani and Tommy Murphy, who provide an excellent guide to the economic, social, and demographic impact of plagues in human history: ‘Plague and Lethal Epidemics in the Pre-Industrial World’. The Journal of Economic History 77, no. 1 (2017): 314–43. https://doi.org/10.1017/S0022050717000092. The impact of epidemics varies over time, and few studies have shown this so clearly as the penetrating article by Neil Cummins, Morgan Kelly and Cormac Ó Gráda, who provide a finely-detailed map of how the plague evolved in 16th- and 17th-century London to reveal who was most heavily burdened by this contagion: ‘Living Standards and Plague in London, 1560–1665’. Economic History Review 69, no. 1 (2016): 3-34. https://dx.doi.org/10.1111/ehr.12098. Plagues shaped the history of nations and, indeed, global history, but their impact was not always as devastating as we might assume: in a classic piece of historical detective work, Ann Carlos and Frank Lewis show that mortality among Native Americans in the Hudson Bay area was much lower than historians had suggested: ‘Smallpox and Native American Mortality: The 1780s Epidemic in the Hudson Bay Region’. Explorations in Economic History 49, no. 3 (2012): 277-90. https://doi.org/10.1016/j.eeh.2012.04.003

The effects of disease reflect a complex interaction of individual and social factors. A paper by Karen Clay, Joshua Lewis and Edson Severnini explains how the combination of air pollution and influenza was particularly deadly in the 1918 epidemic: cities in the US that were heavy users of coal had all-age mortality rates approximately 10 per cent higher than those with lower rates of coal use: ‘Pollution, Infectious Disease, and Mortality: Evidence from the 1918 Spanish Influenza Pandemic’. The Journal of Economic History 78, no. 4 (2018): 1179–1209. https://doi.org/10.1017/S002205071800058X. A remarkable analysis of how one of the great killers, smallpox, evolved during the 18th century is provided by Romola Davenport, Leonard Schwarz and Jeremy Boulton, who conclude that it was a change in the transmissibility of the disease itself that mattered most for its impact: ‘The Decline of Adult Smallpox in Eighteenth-century London’. Economic History Review 64, no. 4 (2011): 1289-314. https://dx.doi.org/10.1111/j.1468-0289.2011.00599.x. The question of which sections of society bore the heaviest burden of sickness during disease outbreaks has long troubled historians and epidemiologists. Outsiders and immigrants have often been blamed for such outbreaks. Jonathan Pritchett and Insan Tunali show that poverty and immunity, not immigration, explain who was infected during the yellow fever epidemic in 1853 New Orleans: ‘Strangers’ Disease: Determinants of Yellow Fever Mortality during the New Orleans Epidemic of 1853’. Explorations in Economic History 32, no. 4 (1995): 517. https://doi.org/10.1006/exeh.1995.1022

 

The Long Run Consequences of Disease

Nuremberg_chronicles_-_Dance_of_Death_(CCLXIIIIv)
‘Dance of Death’. Illustrations from the Nuremberg Chronicle, by Hartmann Schedel (1440-1514). Available at Wikipedia.

The way epidemics affect families is complex. John Parman wrestles with one of the most difficult issues: how parents respond to the harms caused by exposure to an epidemic. Parman shows that parents chose to concentrate resources on the children who were not affected by exposure to influenza in 1918, which reinforced the differences between their children: ‘Childhood Health and Sibling Outcomes: Nurture Reinforcing Nature during the 1918 Influenza Pandemic’. Explorations in Economic History 58 (2015): 22-43. https://doi.org/10.1016/j.eeh.2015.07.002. Martin Saavedra addresses a related question: how did exposure to disease in early childhood affect life in the long run? Using late 19th-century census data from the US, Saavedra shows that children of immigrants who were exposed to yellow fever in the womb or in early infancy did less well in later life than their peers, because they were only able to secure lower-paid employment: ‘Early-life Disease Exposure and Occupational Status: The Impact of Yellow Fever during the 19th Century’. Explorations in Economic History 64, no. C (2017): 62-81. https://doi.org/10.1016/j.eeh.2017.01.003. One of the great advantages of historical research is its ability to reveal how the experience of disease over a lifetime generates cumulative harms. Javier Birchenall’s extraordinary paper shows how soldiers’ exposure to disease during the American Civil War increased the probability that they would contract tuberculosis later in life: ‘Airborne Diseases: Tuberculosis in the Union Army’. Explorations in Economic History 48, no. 2 (2011): 325-42. https://doi.org/10.1016/j.eeh.2011.01.004

 

“Bring Out Your Dead” A street during the Great Plague in London, 1665. Available at Wellcome Collection.

 

Patrick Wallis, Giovanni Federico & John Turner, for the Economic History Review;

Dan Bogart, Karen Clay, William Collins, for the Journal of Economic History;

Kris James Mitchener, Carola Frydman, and Marianne Wanamaker, for Explorations in Economic History;

Joan Roses, Kerstin Enflo, Christopher Meissner, for the European Review of Economic History.

 

If you wish to read further, other papers on this topic are available on the journal websites:

https://onlinelibrary.wiley.com/doi/toc/10.1111/(ISSN)1468-0289.epidemics-disease-mortality

https://www.cambridge.org/core/journals/journal-of-economic-history/free-articles-on-pandemics

https://www.journals.elsevier.com/explorations-in-economic-history/featured-articles/contagious-disease-on-economics

 

* Thanks to Leigh Shaw-Taylor, Cambridge University Press, Elsevier, Oxford University Press, and Wiley, for their advice and support.

Integration in European coal markets, 1833-1913

by John E. Murray (Rhodes College) and Javier Silvestre (University of Zaragoza, Instituto Agroalimentario de Aragón, and Grupo de Estudios ‘Población y Sociedad’)

The full article from this blog was published in The Economic History Review and is available here.

 

The availability of coal is central to debates about the causes of the Industrial Revolution and modern economic growth in Europe. It has been argued that regional limitations in supply could be overcome by transporting coal. However, despite references to the import option and to transport costs, the evolution of coal markets in nineteenth-century Europe has received limited attention. Interest in the extent of markets is motivated by their effects on economic growth and welfare (Federico 2019; Lampe and Sharp 2019).

The literature on market integration in nineteenth-century Europe mostly refers to grain prices, usually wheat. Our paper extends the research to coal, a key commodity. The historical literature on coal market integration is scant, in contrast to the literature for more recent times (Wårell 2006; Li et al. 2010; Papież and Śmiech 2015). Previous historical studies usually report some price differences between and within countries, while a few provide statistical analyses, often applied to a narrow geographical scope.

We examine intra- and international market integration in the principal coal-producing countries: Britain, Germany, France and Belgium. Our analysis also includes three largely non-producing Southern European countries, Italy, Spain and Portugal, for which the necessary data are available. (Other countries were considered but ultimately not included.) We have created a database of annual European coal prices at different spatial levels.

Based on our price data, we consider prices in the main consumer cities and producing regions and estimate specific price differentials between areas in which the coal trade was well established. As a robustness check, we estimate trends in the coefficient of variation for a large number of markets. For the international market, we estimate price differentials between proven trading markets. Given the available data, we focus on Europe's main exporter, Britain, and the main importing countries: France, Germany, and Southern Europe. To confirm our findings, we estimate the coefficient of variation of prices throughout coal-producing Europe.
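
The coefficient of variation used in this robustness check is simply the cross-market standard deviation of prices divided by their mean; a falling value over time indicates integration. A minimal sketch with invented prices (not the paper's data):

```python
import numpy as np

def coefficient_of_variation(prices):
    """Cross-sectional price dispersion: std / mean across markets.
    A falling CV over time is evidence of market integration."""
    prices = np.asarray(prices, dtype=float)
    return prices.std() / prices.mean()

# Hypothetical coal prices (shillings per ton) in four markets,
# at two benchmark years chosen purely for illustration
prices_1840 = [10.0, 14.0, 18.0, 22.0]
prices_1900 = [12.0, 13.0, 14.0, 15.0]

cv_1840 = coefficient_of_variation(prices_1840)  # ~0.28: dispersed prices
cv_1900 = coefficient_of_variation(prices_1900)  # ~0.08: much tighter
print(round(cv_1840, 3), round(cv_1900, 3))
```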

Figure 1. Coalmine in the Borinage, 1879, by Vincent van Gogh. Available at <https://www.theparisreview.org/blog/2015/12/31/idle-bird-2/>

To estimate market integration within coal-producing countries, we adopt Federico's (2012) proposal for testing both price convergence and efficiency, the latter referring to a quick return to equilibrium after a shock. For the international market, we again estimate convergence equations. For selected international routes, and subject to the available information, we complete the analysis with an econometric model of the determinants of integration, in line with the 'second wave' of research on market integration (Federico 2019). Finally, to verify our findings, we apply a variance analysis to prices across the producing countries.
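Efficiency in this sense, the speed of return to equilibrium after a shock, is commonly summarized by the half-life of a price differential. The sketch below is a minimal illustration under assumed values: the persistence parameter and the noiseless decay path are hypothetical, not estimates from the paper (real data would scatter around such a path).

```python
import numpy as np

# Deterministic illustration: the log price differential between two
# markets after a one-off shock, decaying with persistence rho = 0.6.
# rho is an assumption for illustration, not an estimate from the paper.
rho_true = 0.6
gap = rho_true ** np.arange(30)

# OLS through the origin of the differential on its own lag recovers
# the persistence parameter.
y, x = gap[1:], gap[:-1]
rho_hat = float(x @ y / (x @ x))

# Half-life of a shock: years until half of a deviation from
# equilibrium is eroded; a shorter half-life means a more
# 'efficient' market.
half_life = float(np.log(0.5) / np.log(rho_hat))
print(round(rho_hat, 3), round(half_life, 2))  # 0.6 1.36
```

A market whose estimated half-life falls over time is returning to equilibrium faster, which is the sense in which we describe markets as becoming more 'efficient'.
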

Our results, based on quantitative and qualitative evidence, may be summarized as follows. First, within coal-producing countries, we find evidence of price convergence. Second, markets became more 'efficient' over time, suggesting reductions in information costs. Nevertheless, coal prices were subject to strong fluctuations and shocks, notably during 'coal famines'. Compared to agricultural produce, the process of integration in coal appears to have taken longer. However, price convergence in coal tended to stabilize at the end of our period, suggesting little further reduction in transport costs and the existence of product heterogeneity. Finally, our evidence indicates that cartelization in Continental Europe from the late nineteenth century had limited impact on price convergence.

Turning to the international coal market, our econometric results confirm price convergence between Britain and the importing countries. As in domestic markets, the speed with which price differentials between Britain and Continental Europe were eroded declined from the 1900s. Furthermore, market integration between Britain and Continental Europe appears to have been largely influenced by changes in transportation costs, information costs and protectionism. Extending our analysis to other countries (with, admittedly, limited data) suggests that price convergence there started later in our period. Finally, our results indicate the limited ability of cartels to restrict competition beyond their most immediate areas of influence.

Overall, we observe integration in both the domestic and the international coal market. Future research might expand the focus to other cross-country, Continental markets to deepen our understanding of the causes and effects of market integration.

To contact the authors:

Javier Silvestre, javisil@unizar.es

References

Federico, G., ‘How much do we know about market integration in Europe?’, Economic History Review, 65 (2012), pp. 470-97.

Federico, G., ‘Market integration’, in C. Diebolt and M. Haupert, eds., Handbook of Cliometrics (Berlin, 2019).

Lampe, M. and Sharp, P., ‘Cliometric approaches to international trade’, in C. Diebolt and M. Haupert, eds., Handbook of Cliometrics (Berlin, 2019).

Li, R., Joyeux, R., and Ripple, R. D., ‘International steam coal market integration’, The Energy Journal, 31 (2010), pp. 181-202.

Papież, M. and Śmiech, S., ‘Dynamic steam coal market integration: Evidence from rolling cointegration analysis’, Energy Economics, 51 (2015), pp. 510-20.

Wårell, L., ‘Market integration in the international coal industry: A cointegration approach’, The Energy Journal, 27 (2006), pp. 99-118.