Scarlet Fever and nineteenth-century mortality trends. A Reply to Romola Davenport

The full article from this blog post will be published in The Economic History Review, and it is now available on Early View at this link

Children affected by scarlet fever, ca. 1910. Available at <https://qz.com/651644/a-19th-century-disease-is-on-a-dramatic-rise-in-the-uk-what-do-we-know-about-it-so-far/>

by Simon Szreter (University of Cambridge) and Graham Mooney (Johns Hopkins University)

In 1998 we published in the Economic History Review an analysis showing that all the available robust demographic evidence testified to a deterioration of mortality conditions in fast-growing industrial towns and cities in the second quarter of the nineteenth century. We also demonstrated that although there was some alleviation in the 1850s from the terrible death rates experienced in the 1830s and 1840s, sustained and continuous improvement in the life expectancies of the larger British urban populations did not begin to occur until the 1870s. In other publications, we have each shown how it is most likely that an increasing range and density of politically-initiated public health interventions in the urban environments, starting in earnest in the late 1860s and 1870s and gaining depth and sophistication through to the 1900s, was primarily responsible for the observed demographic and epidemiological patterns.

In a 2020 article in the Economic History Review, Romola Davenport argued that a single disease, scarlet fever, should be attributed primary significance as the cause of the major urban mortality trends of the period, not only in Britain but across Europe, Russia, and North America.

In this response we critically examine the evidence adduced by Davenport for this hypothesis and find it entirely unconvincing. While scarlet fever was undoubtedly an important killer of young children, the chronology of its incidence in Britain lags the major turning points in urban mortality trends by a clear decade or more. Scarlet fever made no significant recorded impact until the 1840s and did not exert its most deadly effects until the 1850s. Its severe depredations then continued unabated through the 1860s and 1870s, before declining sharply in the period 1880-85.

We therefore maintain that our original findings and interpretation of the main causes of Britain’s urban mortality patterns during the course of the nineteenth century remain entirely valid. 

Historical Social Stratification and Mobility in Costa Rica, 1840-2006

by Daniel Diaz Vidal (University of Tampa)

The full article from this post was published in The Economic History Review and is now available on Early View at this link

Banana Workers – available at <https://travelcostarica.nu/history>

The social mobility rate represents the degree to which the socioeconomic status of descendants varies relative to that of their progenitors. If the rate is very low then the social pyramid remains unchanged over many generations. Conversely, if the rate of social mobility is very high, then family, cultural, ethnic, and historical backgrounds are not useful in explaining the current social status of an individual. In essence, history determines present outcomes when there are lower rates of social mobility. Interest in social mobility research has grown since the Great Recession because of its relationship with socioeconomic inequality and political upheaval.

This renewed interest in the study of social mobility has generated new approaches to the subject. Recent social mobility studies using surnames show that underlying social mobility rates are both very low and very similar across all the countries and time periods studied.[1] This research uses an enhanced surname methodology and previously unused historical data to study social mobility in a new Spanish-speaking, Central American economy. Costa Rica is particularly interesting because it has exhibited a relatively egalitarian distribution of income since colonial times. This sets it apart from Chile, the Latin American economy previously examined in a comparable surname study of social mobility. To study historical social mobility in Costa Rica over the past century and a half, one cannot use traditional father-son linkages, since constructing such a dataset would be extremely difficult, if not impossible. Traditional methods require panel datasets, such as the United States National Longitudinal Survey of Youth (NLSY), or rich population registries like those found in Sweden and Iceland. This limits the historical and geographical contexts in which social mobility can be studied. Surnames facilitate research by permitting the clustering of people to identify groups of sons who collectively originated from a group of fathers, without needing to follow the branches of each specific family tree.

One of the methodologies used in this research measures the overrepresentation of surname groups within certain elite professions in the 2006 electoral census. The central idea is to observe how frequent a surname is within the census and use that frequency to predict how many of its holders we should find in a sample of elite professionals. If a certain surname group represents 1 per cent of the population but 5 per cent of the individuals in high-skilled professions, then it is overrepresented, and of higher status. In order to study how long the rich stay rich in Costa Rica, the author compiled a dataset of groups that were historically advantaged before the beginning of the elite-profession dataset, in order to avoid selection bias. The groups are: top coffee growers from 1911, coffee exporters in 1934, teachers and professors between 1923 and 1933, Jamaican banana growers from 1908, and ethnically-mixed plantation owners. Figure 1 shows that these elite groups were still overrepresented at the end of the twentieth century and that they would require an average of six to seven generations to regress to the mean. These results are comparable to those produced by Clark for a completely different set of socioeconomic and historical backgrounds.[2] Of particular interest is the comparison of the results with Chile, since the two countries had different colonial experiences and varying degrees of inequality throughout their histories.
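The overrepresentation measure, and the implied speed of regression to the mean, can be sketched in a few lines. The snippet below is an illustrative toy, not the article's code or data: the 1-per-cent/5-per-cent figures echo the hypothetical example above, and the persistence parameter is an assumed value, not an estimate from the paper.

```python
# Toy sketch of a Clark-style surname analysis (hypothetical numbers).

def relative_representation(elite_share: float, population_share: float) -> float:
    """Ratio > 1 means the surname group is overrepresented among elites."""
    return elite_share / population_share

def generations_to_mean(rel_rep: float, persistence: float, tol: float = 1.1) -> int:
    """Generations until relative representation decays to within `tol` of 1,
    assuming the excess over the mean shrinks geometrically each generation
    at an assumed intergenerational persistence rate."""
    n = 0
    while rel_rep > tol:
        rel_rep = 1 + (rel_rep - 1) * persistence
        n += 1
    return n

# A surname group that is 1% of the census but 5% of elite professionals:
rr = relative_representation(0.05, 0.01)      # 5.0, i.e. overrepresented
print(generations_to_mean(rr, persistence=0.6))
```

With a higher assumed persistence the count of generations grows quickly, which is the intuition behind the six-to-seven-generation finding for slowly regressing elites.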

Figure 1. Elite Group Representation in Costa Rica – Note: The vertical black line determines where the data end and the projections begin.
Sources: Tribunal Supremo de Elecciones, Padron Nacional Electoral; Costa Rica, Direccion General de Estadística y Censos, Lista de cultivadores de banano, anuario 1907; Costa Rica, Instituto de Defensa del Café de Costa Rica, Revista del Instituto de Defensa del Café de Costa Rica. See article for further details.

This research shows that regression to the socioeconomic mean in Costa Rica occurred at a slower pace than that predicted by the previous literature. This implies that the equality-minded policy maker should be more concerned with economic growth, which should raise the average income of every stratum, at least under a Kaldor compensation criterion[3], and with compressing the distance between social strata, than with social mobility itself. This study has also shown that the historical groups take fewer generations to regress to the mean than in the Chilean case studied by Clark.[4] This is attributed to the fact that the historical groups were not that far apart to begin with.

To contact the author: DDIAZVIDAL@ut.edu


[1]           G. Clark, The Son Also Rises: Surnames and the History of Social Mobility. Princeton: Princeton University Press, 2015; G. Clark and N. Cummins, ‘Surnames and Social Mobility in England, 1170-2012’, Human Nature, 25 (2014), pp. 517-537.

[2]           Clark, The Son also rises.

[3]           This posits that an activity moves the economy closer to Pareto optimality if the maximum amount the gainers are prepared to pay the losers to accept the change is greater than the minimum amount the losers are prepared to accept.

[4]            Ibid.


Independent Women: Investing in British Railways, 1870-1922

by Graeme Acheson (University of Strathclyde Business School), Aine Gallagher, Gareth Campbell, and John D. Turner (Queen’s University Centre for Economic History)

The full article from this blog post has been published in The Economic History Review, and it is currently available on Early View here

Women have a long tradition of investing in financial instruments, and scholars have recently documented the rise of female shareholders in nineteenth-century Britain, the United States, Australia, and Europe. However, we know very little about how this progressed into the twentieth century, and whether women shareholders over a century ago behaved differently from their male counterparts. To address this, we turn to the shareholder constituencies of railways, which were the largest public companies a century ago.

Figure 1. Illustration of a female investor reading the ticker tape in the early twentieth century. Source: the authors

Railway companies in the UK popularised equity investment among the middle classes; they had been a major investment asset since the first railway boom of the mid-1830s. At the start of the 1900s, British railways made up about half of the market value of all domestic equity listed in the UK, and they constituted 49 of the 100 largest companies on the British stock market in 1911. The railways, therefore, make an interesting case through which to examine women investors. Detailed railway shareholder records, comparable to those for other sectors, have generally not been preserved. However, we have found Railway Shareholder Address Books for six of the largest railway companies between 1915 and 1922. We have supplemented these with several address books for these companies back to 1870, and have analysed the Shareholder Register for the Great Western Railway (GWR) from 1843, to place the latter period in context.

An analysis of these shareholder address books reveals the growing importance of women shareholders from 1843, when they made up about 11 per cent of the GWR shareholder base, to 1920, when they constituted about 40 per cent of primary shareholders. By the early twentieth century, women represented 30 to 40 per cent of shareholders in each railway company in our sample, which is in line with estimates of the number of women investing in other companies at this time (Rutterford, Green, Maltby and Owens, 2011). This implies that women were playing an important role in financial markets in the early twentieth century.

Although women were becoming increasingly prevalent in shareholder constituencies, we know little about how they were responding to changing social perceptions, and the increasing availability of financial information, in order to make informed investment decisions, or if they were influenced by male relatives. To examine this, we focus on joint shareholdings, where people would invest together, rather than buying shares on their own. This practice was extremely common, and from our data we are able to analyse the differences between solo shareholders, lead joint shareholders (i.e., individuals who owned shares with others but held the voting rights), and secondary joint shareholders (i.e., individuals who owned shares with others but did not hold the voting rights).

We find that women were much more likely to be solo shareholders than men, with 70 to 80 per cent of women investing on their own, compared to just 30 to 40 per cent of men. When women participated in joint shareholdings, there was no discernible difference as to whether they were the lead shareholder or the secondary shareholder, whereas the majority of men took up a secondary position. When women participated as a secondary shareholder, the lead was usually not a male relative. These findings are strong evidence that women shareholders were acting independently by choosing to take on the sole risks and rewards of share ownership when making their investments. 

We then analyse how the interaction between gender and joint shareholdings affected investment decisions. We begin by examining differences in terms of local versus arms-length investment, using geospatial analysis to calculate the distance between each shareholder’s address and the nearest station of the railway they had invested in. We find that women were more likely than men, and solo investors more likely than joint shareholders, to invest locally. This suggests that men may have used joint investments as a way of reducing the risks of investing at a distance. In contrast, women preferred to maintain their independence even if this meant focusing more on local investments.
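The local-versus-arms-length measure rests on a standard great-circle calculation between a shareholder's address and each of the railway's stations, keeping the minimum. The sketch below illustrates the general technique only; it is not the authors' code, and the coordinates are hypothetical stand-ins.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def distance_to_nearest_station(shareholder, stations):
    """Minimum distance from a shareholder's address to any station."""
    return min(haversine_km(*shareholder, *station) for station in stations)

# Hypothetical example: a central London address against two stations.
stations = [(51.5306, -0.1236), (53.4778, -2.2309)]
print(round(distance_to_nearest_station((51.5074, -0.1278), stations), 1))
```

Repeating this for every shareholder address yields the distance distribution that can then be compared across gender and solo/joint categories.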

We then examine the extent to which women and men invested across different railways. In the modern era, it is common to adopt a value-weighted portfolio which is most heavily concentrated in larger companies. As three of our sample companies were amongst the six largest companies of their era and a further two were in the top twenty-five, we would, a priori, expect to see some overlap of shareholders investing in different railways if they adopted this approach to diversification. From our analysis, we find that male and joint shareholders were more likely than female and solo shareholders to hold multiple railway stocks. This could imply that men were using joint shareholdings as a means of increasing diversification. In contrast, women may have been prioritising independence, even if it meant being less diversified.

We also consider whether there were differences in terms of how long each type of shareholder held onto their shares because modern studies suggest that women are much less likely than men to trade their shares. We find that only a minority of shareholders maintained a long-run buy and hold strategy, with little suggestion that this differed on the basis of gender or joint versus solo shareholders. This implies that our findings are not being driven by a cohort effect, and that the increasing numbers of women shareholders consciously chose to invest independently. 

To contact the authors:

Graeme Acheson, graeme.acheson@strath.ac.uk

Aine Gallagher, Aine.Galagher@qub.ac.uk

Gareth Campbell, gareth.campbell@qub.ac.uk

John D. Turner, j.turner@qub.ac.uk

A century of wind power: why did it take so long to develop to utility scale?

by Mercedes Galíndez, University of Cambridge

This blog is based on research funded by a bursary from the Economic History Society. More information here

Marcellus Jacobs on a 2.5kW machine in the 1940s. Available at <http://www.jacobswind.net/history>

Seventeen years passed between Edison patenting his revolutionary incandescent light bulb in 1880 and Poul la Cour’s first test of a wind turbine for generating electricity. Yet it would be another hundred years before wind power became an established industry in the 2000s. How can we explain the delay in harnessing the cheapest source of electricity generation?

In the early twentieth century wind power emerged to fill the gaps of nascent electricity grids. This technology was first adopted in rural areas. The incentive was purely economic: the need for decentralised access to electricity. In this early stage there were no concerns about the environmental implications of wind power.

The Jacobs Wind Electricity Company delivered 30,000 three-blade wind turbines in the US between 1927 and 1957.[1] The basic mechanics of these units did not differ much from their modern counterparts. Once the standard electrical grid reached rural areas, however, the business case for wind power weakened. It soon became more economic to buy electricity from centralised utilities, which benefited from significant economies of scale.

It was not until the late 1970s that wind power became a potential substitute for electricity generated by fossil fuels or hydropower. Academic literature agrees on two main triggers for this change: the oil crises in the 1970s, and the politicisation of Climate Change. When the price of oil quadrupled in 1973, rising to nearly US $12 per barrel, industrialised countries’ dependency on foreign producers of oil was exposed. The reaction was to find new domestic sources of energy. Considerable effort was devoted to nuclear power, but technologies like wind power were also revived.

In the late 1980s Climate Change became more politicised, and interest in wind energy as a technology that could mitigate environmental damage was renewed. California’s governor, Jerry Brown, was aligned with these ideals and in 1978, in a move ahead of its time, he provided extra tax incentives to renewable energy producers in his state.[2] This soon created a ‘California Wind Rush’ which saw both local and European turbine manufacturers burst onto the market, with US $1 billion invested in the region of Altamont Pass between 1981 and 1986.[3]

The California Wind Rush ended suddenly when central government support was withdrawn. However, the European Union (EU) took up the challenge of sustaining the industry. In 2001, the EU introduced Directive 2001/77/EC for the promotion of renewable energy sources, which required Member States to set renewable energy targets.[4] Many directives followed, triggering renewable energy programmes throughout the EU. Following the first directive in 2001, the installed capacity of wind power in the EU increased thirteen-fold, from 13GW to 169GW in 2017.

Whilst there is no doubt that the EU regulatory framework played a key role in the development of wind power, other factors were also at play. Nicolas Rochon, a green investment manager, published a memoir in 2020 in which he argued that clean energy development was also enabled by a change in the investment community. As interest rates decreased during the first two decades of the twenty-first century, investment managers revised their expectations of future returns downwards, which fostered greater attention to clean energy assets offering lower profitability. Growing competition in the sector reduced the price of electricity obtained from renewable energy.[5]

My research aims to understand the macroeconomic conditions that enabled wind power to develop to national scale. In particular, it asks how wind power developers accessed capital, and how bankers and investors took a leap of faith to invest in the technology. My research will utilise oral history interviews with subjects like Nicolas Rochon, who made financial decisions on wind power projects.

To contact the author:

Mercedes Galíndez (mg570@cam.ac.uk)


[1] R. Righter, Wind Energy in America: A History (Norman: University of Oklahoma Press, 1996), p. 93.

[2] A. Madrigal, Powering the Dream: The History and Promise of Green Technology (Cambridge, MA: Da Capo Press, 2011), p. 239.

[3] G. Jones, Profits and Sustainability: A History of Green Entrepreneurship (Oxford: Oxford University Press, 2017), p. 330.

[4] EU Directive 2001/77/EC.

[5] N. Rochon, Ma transition énergétique 2005-2020 (Paris: Les Papiers Verts, 2020).

Industrial, regional, and gender divides in British unemployment between the wars

By Meredith M. Paker (Nuffield College, Oxford)

This blog is part of a series of New Researcher blogs.

A view from Victoria Tower, depicts the position of London on both sides of the Thames, 1930. Available at Wikimedia Commons.

‘Sometimes I feel that unemployment is too big a problem for people to deal with … It makes things no better, but worse, to know that your neighbours are as badly off as yourself, because it shows to what an extent the evil of unemployment has grown. And yet no one does anything about it’.

A skilled millwright, Memoirs of the Unemployed, 1934.

At the end of the First World War, an inflationary boom collapsed into a global recession, and the unemployment rate in Britain climbed to over 20 per cent. While the unemployment rate in other countries recovered during the 1920s, in Britain it remained near 10 per cent for the entire decade before the Great Depression. This persistently high unemployment was then intensified by the early 1930s slump, leading to an additional two million British workers becoming unemployed.

What caused this prolonged employment downturn in Britain during the 1920s and early 1930s? Using newly digitized data and econometrics, my project provides new evidence that a structural transformation of the economy away from export-oriented heavy manufacturing industries toward light manufacturing and service industries contributed to the employment downturn.

At a time when few countries collected any reliable national statistics at all, in every month of the interwar period the Ministry of Labour published unemployment statistics for men and women in 100 industries. These statistics derived from Britain’s unemployment benefit program established in 1911—the first such program in the world. While many researchers have used portions of this remarkable source, manually entering the data into a computer, I was able to improve on this technique by developing a process using an optical-character recognition iPhone app. The digitization of all the printed tables in the Ministry of Labour’s Gazette from 1923 through 1936 enables the econometric analysis of four times as many industries as in previous research and permits separate analyses for male and female workers (Figure 1).

Figure 1: Data digitization. Left-hand side is a sample printed table in the Ministry of Labour Gazette. Right-hand side is the cleaned digitized table in Excel.

This new data and analysis reveal four key findings about interwar unemployment. First, the data show that unemployment was different for men and women. The unemployment rate for men was generally higher than for women, averaging 16.1 per cent and 10.3 per cent, respectively. Unemployment increased faster for women at the onset of the Great Depression but also recovered more quickly (Figure 2). One reason for these distinct experiences is that men and women generally worked in different industries. Many unemployed men had previously worked in coal mining, building, iron and steel founding, and shipbuilding, while many unemployed women came from the cotton-textile industry, retail, hotel and club services, the woollen and worsted industry, and tailoring.

Figure 2: Male and female monthly unemployment rates. Source: Author’s digitization of Ministry of Labour Gazettes.

Second, regional differences in unemployment rates in the interwar period were not due only to the different industries located in each region. There were large regional differences in unemployment above and beyond the effects of the composition of industries in a region.

Third, structural change played an important role in interwar unemployment. A series of regression models indicate that, ceteris paribus, industries that expanded to meet production needs during World War I had higher unemployment rates in the 1920s. Additionally, industries that exported much of their production also faced more unemployment. An important component of the national unemployment problem was thus the adjustments that some industries had to make due to the global trade disturbances following World War I.

Finally, the Great Depression accelerated this structural change. In almost every sector, more adjustment occurred in the early 1930s than in the 1920s. Workers were drawn from declining industries into growing ones at a particularly fast rate during the Great Depression.

Taken together, these results suggest that there were significant industrial, regional, and gender divides in interwar unemployment that are obscured by national unemployment trends. The employment downturn between the wars was thus intricately linked with the larger structural transformation of the British economy.


Meredith M. Paker

meredith.paker@nuffield.ox.ac.uk

Twitter: @mmpaker

Britain’s inter-war super-rich: the 1928/9 ‘millionaire list’

by Peter Scott (Henley Business School at the University of Reading)

The roaring 1920s. Available at <https://www.lovemoney.com/gallerylist/87193/the-roaring-1920s-richest-people-and-how-they-made-their-money>

Most of our information on wealth distribution and top incomes is derived from data on wealth left at death, recorded in probates and estate duty statistics. This study utilises a unique list of all living millionaires for the 1928/9 tax year, compiled by the Inland Revenue to estimate how much a 40 per cent estate duty on them would raise in government revenue. Millionaires were identified by their incomes (over £50,000, or £3 million in 2018 prices), equivalent to a capitalised sum of over £1 million (£60 million in 2018 prices). Data for living millionaires are particularly valuable, given that even in the 1930s millionaires often had considerable longevity, and data on wealth at death typically reflected fortunes made, or inherited, several decades previously. Some millionaires’ names had been redacted – where their dates of birth or marriage were known – but cross-referencing with various data sources enabled the identification of 319 millionaires, equivalent to 72.8 per cent of the number appearing on the millionaire list.

The tax year 1928 to 1929 is a very useful benchmark for assessing the impact of the First World War and its aftermath on the composition of the super-rich. Prior to the twentieth century, the highest echelons of wealth were dominated by the great landowners, reflecting a concentration of land-ownership unparalleled in Europe. William Rubinstein found that the wealth of the greatest landowners exceeded that of the richest businessmen until 1914, if not later. However, war-time inflation, higher taxes, and the post-war agricultural depression negatively impacted their fortunes. Meanwhile, some industrialists benefitted enormously from the War.

By 1928 business fortunes had pushed even the wealthiest aristocrats, the Dukes of Bedford and Westminster, into seventh and eighth place on the list of top incomes. Their taxable incomes, £360,000 and £336,000 respectively, were dwarfed by those of the richest businessmen, such as the shipping magnate Sir John Ellerman (Britain’s richest man; the son of an immigrant corn broker who died in 1871, leaving £600) with a 1928 income of £1,553,000, or James Williamson, the first Baron Ashton, who pioneered the mass production of linoleum – second on the list, with £760,000. Indeed, some 90 per cent of named 1928/9 millionaires had fortunes based on (non-landed) business incomes. Moreover, the vast majority – 85.6 per cent of non-landed males on the list – were active businesspeople, rather than rentiers.

“Businesspeople millionaires” were highly clustered in certain sectors (relative to those sectors’ shares of all corporate profits): tobacco (5.40 times over-represented); shipbuilding (4.79); merchant and other banking (3.42); foods (3.20); ship-owning (3.02); other textiles (2.98); distilling (2.67); and brewing (2.59). These eight sectors collectively comprised 42.4 per cent of all 1928/9 millionaires, but only 15.5 per cent of aggregate profits. Meanwhile, important sectors such as chemicals, cotton and woollen textiles, construction, and, particularly, distribution were substantially under-represented.

The over-represented sectors were characterised by either rapid cartelisation and/or integration which, in most cases, had intensified during the War and its aftermath. Given that Britain had very limited tariffs, cartels and monopolies could only raise prices in sectors with other barriers to imports, principally “strategic assets”: assets that sustain competitive advantage through being valuable, rare, inimitable, and imperfectly substitutable. These included patents (rayon); control of distribution (brewing and tobacco); strong brands (whiskey; branded packaged foods); reputational assets (merchant banking); or membership of international cartels that granted territorial monopolies  (shipping; rayon). Conversely, there is very little evidence of “technical” barriers such as L-shaped cost curves that could have offset the welfare costs of industrial combination/concentration through scale economies. Instead, amalgamation or cartelisation were typically followed by rising real prices.

Another, less widespread but important tactic for gaining a personal and corporate competitive edge was the use of sophisticated tax avoidance/evasion techniques to reduce tax liability to a fraction of its headline rate. Tax avoidance was commonplace among Britain’s economic elite by the late 1920s, but a small proportion of business millionaires developed it to a level where most of their tax burden was removed, mainly via transmuting income into non-taxable capital gains and/or creating excessive depreciation tax allowances. Several leading British millionaires, including Ellerman, Lord Nuffield, Montague Burton, and the Vestey brothers (Union Cold Storage), were known to the Inland Revenue as skilled and successful tax avoiders.

These findings imply that the composition of economic elites should not simply be conflated with ‘wealth-creating’ prosperity (except for those elites), especially where their incomes include a substantial element of rent-seeking. Erecting or defending barriers to competition (through cartels, mergers, and strategic assets) may increase the number of very wealthy people, but is unlikely to have generated a positive influence on national economic growth and living standards, unless accompanied by rationalisation that substantially lowered costs. In this respect typical inter-war business millionaires had strong commonalities with earlier, landed, British elites, in that they sustained their wealth through creating, and then perpetuating, scarcity in the markets for the goods and services they controlled.

To contact the author:

p.m.scott@henley.ac.uk

Spain’s tourism boom and the social mobility of migrant workers

By José Antonio García Barrero (University of Barcelona)

This blog is part of a series of New Researcher blogs.

Spain Balearic Islands Mediterranean Menorca. Available at Wikimedia Commons.

My research, which is based on a new database of the labour force in Spain’s tourism industry, analyses the assimilation of internal migrants in the Balearic Islands during the tourism boom between 1959 and 1973.

I show that tourism represented a context for upward social mobility for natives and migrants alike, but the extent of upward mobility was uneven across groups. While natives, foreigners, and internal urban migrants achieved significant upward mobility, the majority of migrants, who came from rural agrarian areas, found it harder to improve their position. The transferability of human capital to the service economy and the characteristics of their migratory flows determined the extent of migrants’ labour-market attainment.

The tourism boom constituted one of the main scenarios of Spain’s path to modernisation in the twentieth century. Between 1959 and 1973, the country became one of the top tourist economies of the world, driving a rapid and intense demographic and landscape transformation across the coastal regions of the peninsula and the archipelagos.

The increasing demand for tourism services from West European societies triggered the massive arrival of tourists to the country. In 1959, four million tourists visited Spain; by 1973, the country hosted 31 million visitors. The epicentre of this phenomenon was the Balearic Islands.

In the Balearics, a profound transformation took place. In more than a decade, the capacity of the tourism industry skyrocketed from 215 to 1,534 hotels and pensions, and from 11,496 to 216,113 hotel beds. Between 1950 and 1981, the number of Spanish-born people from outside the Balearics increased from 33,000 inhabitants to 150,000, attracted by the high labour demand for tourism services. In 1950, they accounted for 9% of the total population; in 1981, that share had reached 34.4%.

In my research, I analyse whether the internal migrants who arrived in the archipelago – mostly seasonal migrants from stagnant agrarian areas in southern Spain who ended up becoming permanent residents – were able to take advantage of the rapid and profound transformation of the tourism industry. Rather than focusing on the movement from agrarian to service activities, my interest lies in the possibilities for upward mobility in the host society.

I use a new database of the workforce, both men and women, in the tourism industry, comprising 10,520 observations with a wide range of personal, professional and business data for each individual up to 1970. These data make it possible to analyse the careers of workers in the emerging service industry by cohort characteristics, including variables such as gender, place of birth, language skills and firm. Using these variables, I examine the likelihood of belonging to each of four income categories.
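To make the setup concrete, here is a minimal sketch of the kind of cross-tabulation such a database supports: the distribution of workers over the four income categories within each group. The records, field names and group labels below are illustrative inventions, not the actual database schema.

```python
# Hedged sketch: share of worker observations in each of four income
# categories, by group. All records and field names are hypothetical.

records = [
    {"origin": "native", "gender": "male", "income_cat": 4},
    {"origin": "native", "gender": "female", "income_cat": 2},
    {"origin": "southern_rural", "gender": "male", "income_cat": 1},
    {"origin": "urban_north", "gender": "male", "income_cat": 3},
    {"origin": "southern_rural", "gender": "female", "income_cat": 1},
]

def share_by_group(records, key):
    """Distribution over income categories 1-4 within each group."""
    counts = {}
    for r in records:
        group = r[key]
        counts.setdefault(group, [0, 0, 0, 0])[r["income_cat"] - 1] += 1
    return {g: [c / sum(v) for c in v] for g, v in counts.items()}

shares = share_by_group(records, "origin")
```

A regression framework (e.g. a multinomial model over the four categories) would build on exactly this kind of grouped structure.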

My results suggest that the tourism explosion opened significant opportunities for upward labour mobility: high-income jobs were attainable for workers in hospitality and tourism-related activities. But those who took most advantage were male natives, urban migrants from northern Spain (mainly Catalonia) and, above all, migrants from other European countries with clear advantages in language skills.

For natives, human and social capital made the difference. For migrants, self-selection and the transferability of skills from urban settings to the new leisure economy were decisive.

Likewise, despite lagging behind, migrants from rural areas in southern Spain achieved some degree of upward mobility, progressively though not completely narrowing the gap with natives. Acquiring human capital through learning-by-doing, and forming networks of support and information with migrants from the same areas, increased their chances of improvement. Years of experience, knowing where to find job opportunities and holding personal contacts inside firms were important assets.

In that sense, the way migrants arrived in the archipelago mattered. Those more exposed to seasonal flows faced poorer prospects for upward mobility, since they were recruited in their place of origin rather than through migrant networks, or returned home at the end of each season.

In comparison, those who relied on migratory networks and remained as residents in the archipelago had a greater chance of getting better jobs and reducing their socio-economic distance from the natives.

Baumol, Engel, and Beyond: Accounting for a century of structural transformation in Japan, 1885-1985

by Kyoji Fukao (Hitotsubashi University) and Saumik Paul (Newcastle University and IZA)

The full article from this blog post was published in The Economic History Review, and it is now available on Early View at this link

Bank of Japan, silver convertible yen. Available on Wiki Commons

Over the past two centuries, many industrialized countries have experienced dramatic changes in the sectoral composition of output and employment. The pattern of structural transformation, depicted for most of the developed countries, entails a steady fall in the primary sector, a steady increase in the tertiary sector, and a hump shape in the secondary sector. In the literature, the process of structural transformation is explained through two broad channels: the income effect, driven by the generalization of Engel’s law, and the substitution effect, following the differences in the rate of productivity across sectors, also known as “Baumol’s cost disease effect”.

At the same time, an input-output (I-O) model provides a comprehensive way to study the process of structural transformation. Input-output analysis accounts for intermediate input production, since many sectors predominantly produce intermediate inputs whose outputs rarely enter directly into consumer preferences. Moreover, it relies on observed data and a national income identity to handle imports and exports. These features are particularly valuable in the context of Japan's structural transformation, first from agriculture to manufactured final consumption goods and then to services, alongside radical changes over time in Japanese exports and imports.

We examine the drivers of long-run structural transformation in Japan over a period of 100 years, from 1885 to 1985. During this period, the value-added share of the primary sector dropped from 60 per cent to less than 1 per cent, whereas that of the tertiary sector rose from 27 to nearly 60 per cent (Figure 1). We apply the Chenery, Shishido, and Watanabe framework to examine changes in the composition of sectoral output shares. Chenery, Shishido, and Watanabe used an inter-industry model to explain deviations from proportional growth in output in each sector, decomposing the deviation in sectoral output into two factors: the demand-side effect, a combination of the Engel and Baumol effects discussed above, and the supply-side effect, a change in the technique of production. However, the input-output framework cannot uniquely separate the demand-side effect into the Engel and Baumol components.

Figure 1. Structural transformation in Japan, 1874-2008. Source: Fukao and Paul (2017). 
Note: Sectoral shares in GDP are calculated using real GDP in constant 1934-36 prices for 1874-1940 and constant 2000 prices for 1955-2008. In the current study, the pre-WWII era is from 1885 to 1935, and the post-WWII era is from 1955 to 1985.

To conduct the decomposition analysis, we use seven I-O tables (every 10 years) for the prewar era from 1885 to 1935 and six I-O tables (every 5 years) for the postwar era from 1955 to 1985. Each table distinguishes seven sectors: agriculture, forestry, and fishery; commerce and services; construction; food; mining and manufacturing (excluding food and textiles); textiles; and transport, communication, and utilities.
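The mechanics of such a decomposition can be illustrated with a toy example. The sketch below implements one standard additive input-output split of the change in sectoral output into a demand-side term and a supply-side (technique-of-production) term, in the spirit of the Chenery, Shishido, and Watanabe framework; the two-sector coefficients and final demands are made-up numbers, and the exact weighting used in the article may differ.

```python
# Hedged sketch of an input-output decomposition of output change into
# demand-side and supply-side effects. Toy two-sector data, not the
# article's actual coefficients.

def leontief_inverse_2x2(A):
    """(I - A)^-1 for a 2x2 technical-coefficient matrix A, closed form."""
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Technical coefficients A and final demand F in periods 0 and 1 (toy numbers)
A0 = [[0.20, 0.10], [0.30, 0.20]]
A1 = [[0.15, 0.10], [0.25, 0.30]]
F0 = [100.0, 50.0]
F1 = [110.0, 90.0]

B0, B1 = leontief_inverse_2x2(A0), leontief_inverse_2x2(A1)
X0, X1 = matvec(B0, F0), matvec(B1, F1)  # gross outputs X = (I - A)^-1 F

# One standard additive split (other weightings exist):
#   X1 - X0 = B1 (F1 - F0)   [demand-side]  +  (B1 - B0) F0   [supply-side]
dF = [f1 - f0 for f0, f1 in zip(F0, F1)]
demand_effect = matvec(B1, dF)
dB = [[B1[i][j] - B0[i][j] for j in range(2)] for i in range(2)]
supply_effect = matvec(dB, F0)

for i in range(2):
    total = X1[i] - X0[i]
    assert abs(total - (demand_effect[i] + supply_effect[i])) < 1e-9
```

The identity holds exactly because B1 F1 - B0 F0 = B1 (F1 - F0) + (B1 - B0) F0; the choice of which period's weights to use in each term is a well-known index-number ambiguity.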

The results show that the annual growth rate of GDP more than doubled in the post-WWII era compared with the pre-WWII era. Real output growth was highest in the commerce and services sector throughout the period under study, but output also grew rapidly in mining and manufacturing, especially in the second half of the twentieth century. Sectoral output growth in mining and manufacturing (textiles, food, and other manufacturing), commerce and services, and transport, communication, and utilities outpaced GDP growth in most periods. Detailed decomposition results show that in most sectors (agriculture, commerce and services, food, textiles, and transport, communication, and utilities), changes in private consumption were the dominant force behind the demand-side explanations. The demand-side effect was strongest in commerce and services.

Overall, demand-side factors (a combination of the Baumol and Engel effects) were the main explanatory factors in the pre-WWII period, whereas supply-side factors were the key driver of structural transformation in the post-WWII period.

To contact the authors:

Kyoji Fukao, k.fukao@r.hit-u.ac.jp

Saumik Paul, paulsaumik@gmail.com, @saumik78267353

Notes

Baumol, W. J., 'Macroeconomics of unbalanced growth: the anatomy of urban crisis', American Economic Review, 57 (1967), 415-426.

Chenery, H. B., Shishido, S. and Watanabe, T., 'The pattern of Japanese growth, 1914−1954', Econometrica, 30 (1962), 1, 98−139.

Fukao, K. and Paul, S., 'The role of structural transformation in regional convergence in Japan: 1874-2008', Institute of Economic Research Discussion Paper No. 665, Tokyo: Institute of Economic Research (2017).

Growth Before Birth: The Relationship between Placental Weights and Infant and Maternal Health in early-twentieth century Barcelona

By Gregori Galofré-Vilà (Universitat Pompeu Fabra and Barcelona Graduate School of Economics) and Bernard Harris (University of Strathclyde)

R. Alcaraz, Maternitat, Ayuda al desvalido. Available at Wikicommons.

It is now widely accepted that early-life conditions have a significant effect on lifelong health (see e.g. Wells 2016).  Many researchers have sought to examine intrauterine health by studying birth weights, but the evidence of historical changes is mixed.  Although some researchers have argued that birth weights have increased over time (e.g. O’Brien et al. 2020), others have found little evidence of any significant change over the course of the last century (Roberts and Wood 2014).  These findings have led Schneider (2017: 25) to conclude either that, ‘fetal health has remained stagnant’ or that ‘the indicators used to measure fetal health … are not as helpful as research might hope’.

The absence of unequivocal evidence of changes in birth weight has encouraged researchers to pay more attention to other intrauterine health indicators, including the size and shape of the placenta and the ratio of placental weight to birth weight (e.g. Burton et al. 2010).  The placenta transfers oxygen and nutrients from the mother to the foetus and provides the means of removing waste products.  Although the evidence regarding changes in placental weight is also mixed, the placenta has been described as a 'mirror' reflecting the foetus's intrauterine status (Kaur 2016: 185).

Historical studies of changes in placental weights are still very rare.  However, we have collected data on almost 4000 placentas which were weighed and measured at Barcelona’s Provincial House (La Casa Provincial de Maternitat i Expósits) between 1905 and 1920.  Our new paper (Galofré-Vilà and Harris, in press) examines the impact of short-term fluctuations in economic conditions on placental weights immediately before and during the First World War, together with the relationship between placental weights and other maternal and neonatal health indicators and long-term changes in placental weight over the course of the century.

Our first aim was to compare changes in birth weight with changes in placental weight.  As we can see from Figure 1, there was little change in average birth weights, but placental weights fluctuated more markedly.  In our paper, we show how these fluctuations may have been related to changes in real wage rates over the same period.

Figure 1. The development of birthweight and placental weight, 1905-1920. Source: as per article. Note: The dark blue line shows the monthly data and the red lines show the yearly averages with their associated 95 percent confidence intervals.

These findings support claims that the placenta is able to ‘adapt’ to changing economic circumstances, but our evidence also shows that such ‘adaptations’ may not be able to counteract the impact of maternal undernutrition entirely.  As Figure 2 demonstrates, although most neonatal markers show a reverse J-shaped curve (a higher risk of perinatal mortality with premature or small-for-gestational-age births), the relationship between placental weight and early-life mortality is U-shaped.

We also control for maternal characteristics using a Cox proportional hazards model.  Even if increases in placental weight can be regarded as a form of ‘adaptive response’, they are not cost-free, as both very low and very high placental weights are associated with increased risks of early-life mortality.  These findings are consistent with David Barker’s conclusion that elevated placental weight ratios lead to adverse outcomes in later life (Barker et al. 2010).
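For readers unfamiliar with it, the Cox proportional hazards model takes the standard textbook form

h(t | x) = h_0(t) exp(β_1 x_1 + … + β_k x_k),

where h_0(t) is an unspecified baseline hazard and the covariates x would here include placental weight alongside maternal characteristics. Note that the U-shaped risk described above implies placental weight cannot enter the model only linearly; a quadratic or categorical specification, for instance, would be needed. This is the generic form of the model, not the article's exact covariate list.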

Figure 2. Early-life Mortality, Birthweight, Birth Length, Placental weight and BW:PW ratio. Source: as per article.

We have also compared the average value of placental weights in the Provincial House with modern Spanish data.  These data suggest that average placental weights have declined over the course of the last century.  However, the data from other countries are more mixed.  Placental weight also seems to have declined in Finland and Switzerland, but this is less obvious in other countries such as the United Kingdom and the United States.

Overall, whilst placental weights may well provide a sensitive guide to the intrauterine environment, we still know relatively little about the ways in which they may, or may not, have changed over time.  However, this picture may change if more historical series come to light.

To contact the authors: 

Gregori Galofré-Vilà, gregori.galofre@upf.edu, Twitter: @gregorigalofre

Bernard Harris, bernard.harris@strath.ac.uk

References: 

Barker, D. J. P., Thornburg, K. L., Osmond, C., Kajantie, E., and Eriksson, J. G. (2010), ‘The Surface Area of the Placenta and Hypertension in the Offspring in Later Life’, International Journal of Developmental Biology, 54, 525-530.

Burton, G., Jauniaux, E. and Charnock-Jones, D.S. (2010), 'The influence of the intrauterine environment on human placental development', International Journal of Developmental Biology, 54, 303-11.

Galofré-Vilà, G. and Harris, B. (in press), ‘Growth Before birth: the relationship between placental weights and infant and maternal health in early-twentieth century Barcelona’, Economic History Review.

Kaur, D. (2016), ‘Assessment of placental weight, newborn birth weight in normal pregnant women and anaemic pregnant women: a correlation and comparative study’, International Journal of Health Sciences and Research, 6, 180-7.

O’Brien, O., Higgins, M. and Mooney, E. (2020), ‘Placental weights from normal deliveries in Ireland’, Irish Journal of Medical Science, 189, 581-3.

Roberts, E., and Wood, P. (2014), ‘Birth weight and adult health in historical perspective: Evidence from a New Zealand Cohort, 1907-1922’, Social Science and Medicine, 107, 154-161.

Schneider, E. (2017), ‘Fetal health stagnation: have health conditions in utero improved in the US and Western and Northern Europe over the past 150 years?’, Social Science and Medicine, 179, 18-26.

Wells, J.C.K. (2016), The metabolic ghetto: evolutionary perspectives on nutrition, power relations and chronic disease, Cambridge: Cambridge University Press.

COVID-19 and the food supply chain: Impacts on stock price returns and financial performance

This blog is part of the Economic History Society's blog series: 'The Long View on Epidemics, Disease and Public Health: Research from Economic History'.

By Julia Höhler (Wageningen University)

As evidence grows about COVID-19, its effects on the human body and its transmission mechanisms, economists are making progress in understanding the impact of the global pandemic on the food supply chain. While it is apparent that many companies were affected, the nature and magnitude of the effects require further investigation. A special issue of the Canadian Journal of Agricultural Economics on 'COVID-19 and the Canadian agriculture and food sectors' was among the first publications to examine the possible effects of COVID-19 on food supply. In our ongoing work we take the next step and ask: how can we quantify the effects of COVID-19 on companies in the food supply chain?

Figure 1. Stylized image of supermarket shopping Source: Oleg Magni, Pexels

Stock prices as a proxy for the impact of COVID-19

One way to quantify the initial effects of COVID-19 on companies in the food supply chain is to analyse stock prices and their reaction over time. The theory of efficient markets states that stock prices reflect investors' expectations regarding future dividends. If stock prices fluctuate strongly, this signals lower expected returns and higher risks. Volatile stock markets can increase businesses' financing costs and, in the worst case, threaten their liquidity. At the macroeconomic level, stock prices can also indicate the likelihood of a future recession. For our analysis of stock price reactions, we combined data from different countries and regions: in total, stock prices for 71 large stock-listed companies from the US, Japan and Europe. The activities of the companies in our sample cover the entire supply chain, from farm equipment and supplies, agriculture and trade to food processing, distribution and retailing.

Impact on stock price returns comparable to the 2008 financial crisis

First, we calculated the logarithmic daily returns for the companies' stocks and their average. Second, we compared these average returns with the performance of the S&P 500. Figure 2, below, shows the development of average daily returns from 2005 to 2020. Companies in the S&P 500 (top) achieved higher returns on average, but also exhibited larger fluctuations than the average of the companies we examined (bottom). Stock price returns fluctuated particularly strongly during the 2008 financial crisis. The fluctuations between the first notification of COVID-19 to the WHO in early January 2020 and the end of April 2020 (red area) are comparable in magnitude, and the negative fluctuations in this period are somewhat larger than in 2008. Comparing the two charts suggests that stock price returns of large companies in the food supply chain were, on average, less affected by the two crises. Nevertheless, the long-term consequences of the 2008 financial crisis suggest that a wave of bankruptcies, lower financial performance and a loss of food security may still follow.
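The first step can be sketched in a few lines. The prices below are hypothetical closing prices, not the study's data; the calculation is simply r_t = ln(P_t / P_{t-1}) per stock, averaged across companies each day.

```python
# Hedged sketch of the return calculation described above: logarithmic
# daily returns per stock, then an equal-weighted cross-sectional average
# per day. Company names and prices are invented for illustration.
import math

prices = {
    "FoodRetailerA": [100.0, 101.5, 99.8, 102.0],
    "DistributorB": [50.0, 48.5, 49.0, 47.2],
}

def log_returns(series):
    """r_t = ln(P_t / P_{t-1}) for consecutive closing prices."""
    return [math.log(p1 / p0) for p0, p1 in zip(series, series[1:])]

returns = {name: log_returns(p) for name, p in prices.items()}

# Equal-weighted average daily return across the sample
n_days = len(next(iter(returns.values())))
avg_daily = [sum(r[t] for r in returns.values()) / len(returns)
             for t in range(n_days)]
```

Log returns are used rather than simple returns because they are additive over time, which simplifies aggregation across days.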

Figure 2. Average daily returns, for the S & P 500 (top panel) and 71 food-supply companies (FSC), lower panel, 2005-2020. Source: Data derived from multiple sources. For further information, please contact the author.

Winners and losers in the sub-sectors

In order to obtain a more granular picture of the impact of COVID-19, the companies in our sample were divided into sub-sectors, and the volatility of their stock prices between January and April 2020 was calculated. Whereas food retailers and breweries experienced relatively low stock price volatility, food distributors and manufacturers of fertilizers and chemicals experienced relatively high volatility. To cross-validate these results, we collected information on realized profits and losses from the companies' financial reports. The trends observed in stock prices are also reflected in company results for the first quarter of 2020: food retailers increased their profits in times of crisis, while food distributors recorded heavy losses compared with the previous period. These results are likely related to the lockdowns and social-distancing measures that altered food distribution channels.
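The volatility comparison can be sketched similarly: a sample standard deviation of daily log returns per company over the window, averaged within each sub-sector. Company names, sub-sector labels and return series below are invented for illustration.

```python
# Hedged sketch of the sub-sector volatility comparison. All names,
# sub-sector assignments and return series are hypothetical.
import math

daily_log_returns = {
    "FoodRetailerA": [0.001, -0.003, 0.002, 0.004, -0.001],
    "BreweryB": [0.002, -0.001, 0.000, 0.001, -0.002],
    "DistributorC": [-0.020, 0.015, -0.030, 0.010, -0.025],
}
sector_of = {
    "FoodRetailerA": "retail",
    "BreweryB": "brewing",
    "DistributorC": "distribution",
}

def volatility(returns):
    """Sample standard deviation of daily returns."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return math.sqrt(var)

by_sector = {}
for name, r in daily_log_returns.items():
    by_sector.setdefault(sector_of[name], []).append(volatility(r))
sector_vol = {s: sum(v) / len(v) for s, v in by_sector.items()}
```

With these toy numbers the distributor shows a markedly higher standard deviation than the retailer or brewer, mirroring the qualitative pattern reported above.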

Longer-term effects

Just as a vaccine for COVID-19 is still in the pipeline, research into the effects of the pandemic needs time to show what makes companies resilient to unpredictable shocks of this magnitude. Possible research topics include whether local value chains are better suited to cushioning the effects of a pandemic and maintaining food security. Further work is also needed to understand fully the trade-offs between food security, profitability, and climate-change objectives. Another research question concerns the effects of government protective measures and company support programmes; cross-country studies can provide important insights here. Our project lays the groundwork for future research into the effects of shocks on companies in the food value chain. By combining different data sources, we were able to compare stock returns in times of COVID-19 with those of the 2008 crisis and to identify differences between sub-sectors. In the next step we will use company characteristics such as profitability to explain differences in returns.

To contact the author: julia.hoehler[at]wur.nl