The labour market causes and consequences of general purpose technological progress: evidence from steam engines

by Leonardo Ridolfi (University of Siena), Mara Squicciarini (Bocconi University), and Jacob Weisdorf (Sapienza University of Rome)

Steam locomotive running gear. Available at Wikimedia Commons.

Should workers fear technical innovations? Economists have not provided a clear answer to this perennial question. Some believe machines allow ‘one man to do the work of many’: that mechanisation will generate cheaper goods, more consumer spending, increased labour demand and thus more jobs. Others, instead, worry that automation will be labour-cheapening, making workers – especially unskilled ones – redundant, and so will result in higher unemployment and growing income inequality.

Our research seeks answers from the historical account. We focus on the first Industrial Revolution, when technical innovations became a key component of the production process.

The common understanding is that mechanisation during the early phases of industrialisation allowed firms to replace skilled with unskilled male workers (new technology was deskilling) and also male workers with less expensive female and child labourers. Much of this understanding is inspired by the Luddite movement – bands of nineteenth century workers who destroyed early industrial machinery that they believed was threatening their jobs.

To test these hypotheses, we investigate one of the major technological advancements in human history: the rise and spread of steam engines.

Nineteenth-century France provides an exemplary setting to explore the effects. French historical statistics are extraordinarily detailed, and the first two national industry-level censuses – one from the 1840s, when steam power was just beginning to spread, and one from the 1860s, when steam engines were more common – help us to observe the labour market conditions that led to the adoption of steam engines, as well as the effects of adopting the new technology on the demand for male, female and child labour, and on their wages.

Consistent with the argument that steam technology emerged for labour-cheapening purposes, our analysis shows that the adoption of steam technology was significantly higher in districts (arrondissements) where:

  1. industrial labour productivity was low, so that capital-deepening could serve to improve output per worker;
  2. the number of workers was high, so the potential for cutting labour costs by replacing them with machines was large;
  3. the share of male workers was high, so the potential for cutting labour costs by shifting towards women and children was large; and
  4. steam engines had already been installed in other industries, thus lowering the costs of adopting the new technology.

We also find, however, that steam technology, once adopted, was neither labour-saving nor skill-saving. Steam-powered industries did use higher shares of (cheaper) female and child workers than non-steam-powered industries. At the same time, though, since steam-operating industries employed considerably more workers in total, they ended up also using more male workers – and not just more women and children.

We also find that steam-powered industries paid significantly higher wages, both to men and women. In contrast with the traditional narrative of early industrial technologies being deskilling, this result provides novel empirical evidence that steam-use was instead skill-demanding.

Although workers seemed to have gained from the introduction of steam technology, both in terms of employment and payment opportunities, our findings show that labour’s share was lower in steam-run industries. This motivates Engels-Marx-Piketty-inspired concerns that advancing technology leaves workers with a shrinking share of output.

Our findings thus highlight the multi-sided effects of adopting general-purpose technological progress. On the positive side, the steam engine prompted higher wages and a growing demand for both male and female workers. On the negative side, steam-powered industries relied more heavily on child labour and also placed a larger share of output in the hands of capitalists.

North & South in the 1660s and 1670s: new understanding of the long-run origins of wealth inequality in England

By Andrew Wareham (University of Roehampton)

This blog is part of a series of New Researcher blogs.

Maps of England circa 1670, Darbie 10 of 40. Available at Wikimedia Commons.

New research shows that, before the industrial revolution, far more houses in south-east England had multiple fireplaces than houses in the Midlands and northern England. When Mrs Gaskell wrote North and South, she reflected on a theme which was nearly two centuries old and which continues to divide England.

Since the 1960s, historians have wanted to use the Restoration hearth tax to provide a national survey of distributions of population and wealth. But, for technical reasons, it has not until now been possible to move beyond city and county boundaries to make comparisons.

Hearth Tax Digital, arising from a partnership between the Centre for Hearth Tax Research (Roehampton University, UK) and the Centre for Information Modelling (Graz University, Austria), overcomes these technical barriers. This digital resource provides free access to the tax returns, with full transcription of the records and links to archival shelf marks and location by county and parish. Data on around 188,000 households in London and 15 cities/counties can be searched, with the capacity to download search queries into a databasket, and work on GIS mapping is in development.

In the 1660s and 1670s, after London, the West Riding of Yorkshire and Norfolk stand out as densely populated regions. The early stages of industrialization meant that Leeds, Sheffield, Doncaster and Halifax were overtaking the former leading towns of Hull, Malton and Beverley. But north and east Norfolk, whose empty landscapes are enjoyed by holidaymakers today, were also densely populated then.

The hearth tax, as a nation-wide levy on domestic fireplaces, was charged against every hearth in each property, and the tax was collected twice a year, at Lady Day (March) and Michaelmas (September). In 1689, after 27 years, it was abolished in perpetuity in England and Wales, but it continued to be levied in Ireland until the early nineteenth century and was levied as a one-off tax in Scotland in 1691. Any property with three hearths and over was liable to pay the tax, while many properties with one or two hearths, such as those occupied by the ordinary poor, were exempt. (The destitute and those in receipt of poor relief were not included in the tax registers.) A family living in a home with one hearth had to use it for all their cooking, heating and leisure purposes, but properties with more than three hearths had at least one hearth in the kitchen, one in the parlour and one in an upstairs chamber.

In a substantial majority of parishes in northern England (County Durham, Westmorland, and the East and North Ridings of Yorkshire), fewer than 20 per cent of households had three hearths and over, and only in the West Riding was there a significant number of parishes where 30 per cent or more of households did. But in southern England, across Middlesex, Surrey, southern Essex, western Kent and a patchwork of parishes across Norfolk, it was common for at least a third of properties to have three hearths and over.

There are many local contrasts to explore further. South-east Norfolk and north-east Essex were notably more prosperous than north-west Essex, independent of the influence of London, and the patchwork pattern of wealth distribution in Norfolk around its market towns and prosperous villages is repeated in the Midlands. Nonetheless, the general pattern is clear enough: the distribution of population in the late seventeenth century was quite different from patterns found today, but Samuel Pepys and Daniel Defoe would have recognized a world in which south-east England abounded with the signs of prosperity and comfort in contrast to the north.

How Indian cottons steered British industrialisation

By Alka Raman (LSE)

This blog is part of a series of New Researcher blogs.

“Methods of Conveying Cotton in India to the Ports of Shipment,” from the Illustrated London News, 1861. Available at Wikimedia Commons.

Technological advancements within the British cotton industry have widely been acknowledged as the beginning of industrialisation in eighteenth and nineteenth century Britain. My research reveals that these advances were driven by a desire to match the quality of handmade cotton textiles from India.

I highlight how the introduction of Indian printed cottons into British markets created a frenzy of demand for these exotic goods. This led to immediate imitations by British textile manufacturers, keen to gain footholds in the domestic and world markets where Indian cottons were much desired.

The process of imitation soon revealed that British spinners could not spin the fine cotton yarn required to hand-make cloth suitable for fine printing, and that British printers could not print cloth in the multitude of colourfast colours that Indian artisans had mastered over centuries.

These two key limitations in British textile manufacturing spurred demand-induced technological innovations to match the quality of Indian handmade printed cottons.

To test this, I chart the quality of English cotton textiles from 1740 to 1820 and compare them with Indian cottons of the same period. Thread count per inch is used as the measure of quality, and digital microscopy is deployed to establish yarn composition, that is, whether the textiles are all-cotton or mixed linen-cotton.

My findings show that the earliest British ‘cotton’ textiles were mixed linen-cottons, not all-cottons. Technological evolution in the British cotton industry was a pursuit first of coarse but all-cotton cloth, and then of fine all-cotton cloth such as muslin.

The evidence shows that British cotton cloth quality improved by 60% between 1747 and 1782 during the decades of the famous inventions of James Hargreaves’ spinning jenny, Richard Arkwright’s waterframe and Samuel Crompton’s mule. It further improved by 24% between 1782 and 1816. Overall, cloth quality improved by a staggering 99% between 1747 and 1816.
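As a quick arithmetic check on how these sub-period figures combine (my illustration, using only the numbers reported above): compounding a 60% gain with a further 24% gain gives

1.60 × 1.24 ≈ 1.98,

that is, an overall improvement of roughly 98-99% between 1747 and 1816, consistent with the headline figure.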

My research challenges our current understanding of industrialisation as a British and West European phenomenon, commonly explained using rationales such as high wages, availability of local energy sources or access to New World resources. Instead, it reveals that learning from material goods and knowledge brought into Britain and Europe from the East directly and substantially affected the foundations of the modern world as we know it.

The results also pose a more fundamental question: how does technological change take place? Based on my findings, learning from competitor products – especially imitation of novel goods using indigenous processes – may be identified as one crucial pathway for the creation of new ideas that shape technological change.

Industrial, regional, and gender divides in British unemployment between the wars

By Meredith M. Paker (Nuffield College, Oxford)

This blog is part of a series of New Researcher blogs.

A view from Victoria Tower, depicting London on both sides of the Thames, 1930. Available at Wikimedia Commons.

‘Sometimes I feel that unemployment is too big a problem for people to deal with … It makes things no better, but worse, to know that your neighbours are as badly off as yourself, because it shows to what an extent the evil of unemployment has grown. And yet no one does anything about it’.

A skilled millwright, Memoirs of the Unemployed, 1934.

At the end of the First World War, an inflationary boom collapsed into a global recession, and the unemployment rate in Britain climbed to over 20 per cent. While the unemployment rate in other countries recovered during the 1920s, in Britain it remained near 10 per cent for the entire decade before the Great Depression. This persistently high unemployment was then intensified by the early 1930s slump, leading to an additional two million British workers becoming unemployed.

What caused this prolonged employment downturn in Britain during the 1920s and early 1930s? Using newly digitized data and econometrics, my project provides new evidence that a structural transformation of the economy away from export-oriented heavy manufacturing industries toward light manufacturing and service industries contributed to the employment downturn.

At a time when few countries collected any reliable national statistics at all, the Ministry of Labour published unemployment statistics for men and women in 100 industries in every month of the interwar period. These statistics derived from Britain’s unemployment benefit program established in 1911—the first such program in the world. While many researchers have used portions of these remarkable data by entering them into a computer manually, I improved on this technique by developing a process based on an optical-character-recognition iPhone app. The digitization of all the printed tables in the Ministry of Labour’s Gazette from 1923 through 1936 enables econometric analysis of four times as many industries as in previous research and permits separate analyses for male and female workers (Figure 1).
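The digitisation itself relied on an iPhone OCR app rather than on code, but the general shape of such a pipeline can be sketched as follows. This is purely an illustrative sketch using the open-source Tesseract engine via pytesseract, not the author’s workflow; the file name and the assumed column layout (industry name followed by male and female unemployment figures) are hypothetical.

```python
# Illustrative sketch only: the author used an iPhone OCR app, not this code.
# Shows the general idea of turning a scanned table into tidy data with the
# open-source Tesseract engine (pytesseract) and pandas.
from PIL import Image
import pytesseract
import pandas as pd

def digitise_gazette_page(image_path: str) -> pd.DataFrame:
    """OCR one scanned Gazette table and return rows of (industry, men, women)."""
    text = pytesseract.image_to_string(Image.open(image_path))
    rows = []
    for line in text.splitlines():
        parts = line.split()
        # Hypothetical layout: industry name followed by two unemployment figures.
        if (len(parts) >= 3
                and parts[-1].replace('.', '', 1).isdigit()
                and parts[-2].replace('.', '', 1).isdigit()):
            rows.append({
                "industry": " ".join(parts[:-2]),
                "unemp_men": float(parts[-2]),
                "unemp_women": float(parts[-1]),
            })
    return pd.DataFrame(rows)

# Example use (path is hypothetical):
# df = digitise_gazette_page("gazette_1929_01.png")
# df.to_excel("gazette_1929_01.xlsx", index=False)
```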

Figure 1: Data digitization. Left-hand side is a sample printed table in the Ministry of Labour Gazette. Right-hand side is the cleaned digitized table in Excel.

These new data and analysis reveal four key findings about interwar unemployment. First, unemployment was different for men and women. The unemployment rate for men was generally higher than for women, averaging 16.1 per cent and 10.3 per cent, respectively. Unemployment increased faster for women at the onset of the Great Depression but also recovered more quickly (Figure 2). One reason for these distinct experiences is that men and women generally worked in different industries. Many unemployed men had previously worked in coal mining, building, iron and steel founding, and shipbuilding, while many unemployed women came from the cotton-textile industry, retail, hotel and club services, the woollen and worsted industry, and tailoring.

Figure 2: Male and female monthly unemployment rates. Source: Author’s digitization of Ministry of Labour Gazettes.

Second, regional differences in unemployment rates in the interwar period were not due only to the different industries located in each region. There were large regional differences in unemployment above and beyond the effects of the composition of industries in a region.

Third, structural change played an important role in interwar unemployment. A series of regression models indicate that, ceteris paribus, industries that expanded to meet production needs during World War I had higher unemployment rates in the 1920s. Additionally, industries that exported much of their production also faced more unemployment. An important component of the national unemployment problem was thus the adjustments that some industries had to make due to the global trade disturbances following World War I.
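The regressions themselves are not reproduced in this blog, but the kind of cross-industry specification described above can be illustrated schematically. The sketch below is not the author’s model; the simulated data and variable names (a wartime-expansion dummy and an export share) are hypothetical stand-ins.

```python
# Schematic illustration only, not the author's specification or data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 100  # hypothetical number of industries

# Hypothetical industry-level data: a wartime-expansion dummy, an export share,
# and a 1920s unemployment rate loosely related to both.
industries = pd.DataFrame({
    "wartime_expansion": rng.integers(0, 2, n),
    "export_share": rng.uniform(0, 0.8, n),
})
industries["unemp_1920s"] = (
    8 + 4 * industries["wartime_expansion"]
    + 10 * industries["export_share"]
    + rng.normal(0, 3, n)
)

# Cross-industry regression of the kind described in the text.
model = smf.ols("unemp_1920s ~ wartime_expansion + export_share",
                data=industries).fit()
print(model.summary())
```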

Finally, the Great Depression accelerated this structural change. In almost every sector, more adjustment occurred in the early 1930s than in the 1920s. Workers were drawn into growing industries from declining industries, at a particularly fast rate during the Great Depression.

Taken together, these results suggest that there were significant industrial, regional, and gender divides in interwar unemployment that are obscured by national unemployment trends. The employment downturn between the wars was thus intricately linked with the larger structural transformation of the British economy.


Meredith M. Paker

meredith.paker@nuffield.ox.ac.uk

Twitter: @mmpaker

Seeing like the Chinese imperial state: how many government employees did the empire need?

By Ziang Liu (LSE)

This blog is part of a series of New Researcher blogs.

The Qianlong Emperor’s Southern Inspection Tour, Scroll Six Entering Suzhou and the Grand Canal. Available at Wikimedia Commons

How many government employees do we need? This has always been a question for both politicians and the public, and we often see both sides debating whether the government should hire more employees or cut their numbers.

This was also a question for the Chinese imperial government centuries ago. Because the Chinese state governed a vast territory with great cultural and socio-economic diversity, the size of government concerned not only the empire’s fiscal challenges but also the effectiveness of its governance. My research finds that while a large-scale reduction in government expenditure may bring the short-term benefit of improved fiscal conditions, in the long term a lack of investment in administration may harm the state’s ability to govern.

Using the Chinese case, we are interested in what the imperial central government counted as a ‘sufficient’ number of employees. How did the Chinese central government make this calculation? After all, a government has to know the ‘numbers’ before it takes any further action.

Before the late sixteenth century, the Chinese central government had no clear account of how much was spent on its local governments. It was only then, when the marketisation of China’s economy enabled the state to calculate its spending in silver currency, that the imperial central government began to ‘see’ the previously unknown amount of local spending in a unified and legible form.

Consequently, my research finds that between the sixteenth and eighteenth centuries the Chinese imperial central state significantly improved its fiscal circumstances at the expense of local finance. During roughly a century of fiscal pressure between the late sixteenth and late seventeenth centuries (see Figure A), the central government continuously expanded its income and cut local spending on government employees.

Eventually, at the turn of the eighteenth century, the central treasury’s annual income was roughly four to five times larger than the late sixteenth century level (see Figure B), and the accumulated fiscal surplus was in general one to two times greater than its annual budgetary income (see Figure C).

But what the central government left to the localities, in both manpower and funding, seems to have been too little to govern the empire. My research finds that, whether measured by the total number of government employees (see Figure D) or by employees per thousand population (see Figure E), the size of China’s local states shrank dramatically from the late sixteenth century.

In the sample regions, we find that in the eighteenth century there were only one to two government employees for every thousand local inhabitants (Figure E). Meanwhile, records also show that salary payments for local government employees remained completely unchanged from the late seventeenth century.

My research therefore argues that when the Chinese central state intervened in local finance, its stronger intention was to constrain rather than to rationalise it. Even in the eighteenth century, when the empire’s fiscal circumstances were unprecedentedly good, the central state did not consider increasing investment in local administration.

Given China’s constant population growth, from 100 million in the early seventeenth century to more than 300 million in the early nineteenth century, it is hard to believe that local governments of this size could be effective in local governance. Moreover, owing to the reductions in local finance, from the late seventeenth century the Chinese local states kept more personnel for state logistics and information networks than for local public services such as education and security.

The Paradox of Redistribution in time: Social spending in 54 countries, 1967-2018

By Xabier García Fuente (Universitat de Barcelona)

This research is due to be presented in the sixth New Researcher Online Session: ‘Spending & Networks’.

Money of various currencies. Available at Wikimedia Commons.

Why are some countries more redistributive than others? This question is central to current welfare state politics, especially in view of rising levels of inequality and the ensuing social tensions. Since coming to power in 2019, Brazil’s far-right government has restricted access to Bolsa Familia—a conditional cash-transfer program—despite its success at reducing poverty at very low cost (less than 0.5% of national GDP). In richer countries, the social-democratic project is said to be obsolete, as left-wing parties forsake egalitarian policies to cater to economic winners (Piketty, 2020).

How can we make sense of this sort of distributive conflict? Are there common patterns in rich and middle-income countries? My research suggests that welfare state institutions show great inertia, so we need to observe the origins of social policies to explain current redistributive outcomes. Initial policy positions (how pro-poor or pro-rich social transfers were) determine which groups emerge as net winners or net losers when social expenditure increases, which crucially affects the viability and direction of policy change.

Korpi and Palme (1998) famously suggested the existence of a Paradox of Redistribution: ‘the more we target benefits at the poor … the less likely we are to reduce poverty and inequality’. In their framework, progressive programs may be more redistributive per euro spent, but they generate zero-sum conflicts between the poor and the middle-class and obstruct the formation of redistributive political coalitions. In contrast, universal programs align the preferences of the poor and the middle-class and lead to bigger, more egalitarian welfare states. In sum, redistribution increases as transfers become bigger and less pro-poor.

Using survey micro-data provided by the Luxembourg Income Study (LIS), my research updates Korpi and Palme’s (1998) study and addresses two gaps. First, I extend the sample to 54 rich and middle-income countries, including elitist welfare states in Latin America and other middle-income countries. As Figure 1 shows, extending the sample clearly refutes the Paradox: redistribution is higher in more pro-poor countries.

Second, in line with the dynamic political arguments suggested in the Paradox, I explore the evolution of social transfers and redistribution within countries over time. Overall, countries have increased redistribution by making their transfers less pro-poor, which matches the predictions of the Paradox (see Figure 2). The relationship is especially strong in Ireland, Canada, the United Kingdom and Norway. Starting from highly progressive (pro-poor) policy positions, these countries have improved redistribution by increasing expenditure and reducing its bias towards the poor.

Latin American countries are a notable exception to this pattern. They are markedly pro-rich and, in contrast to the cases above, they have improved redistribution only modestly, by becoming more pro-poor (see Figure 3).

What does it mean for redistribution to increase as transfers become more, or less, pro-poor? The United Kingdom and Mexico provide a good example (see Figure 4). In the United Kingdom, redistribution through social transfers increased from 7 Gini points in 1974 to 19 Gini points in 2016. In the same period, the share of total social transfers received by the poorest 20% of the population decreased from 35% to 18%. In Mexico, the share of total social transfers obtained by the poorest 20% went from 2% in 1984 to 10% in 2016, while the share obtained by the richest 20% decreased from 66% to 51%. Yet, despite these advances, redistribution through social transfers in Mexico remains very low (2.5 Gini points in 2016, up from 0.1 Gini points in 1984).
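Here ‘redistribution in Gini points’ refers to the standard measure of how much the Gini coefficient falls when social transfers are added to household income. As a rough sketch of that calculation on survey micro-data (my illustration with simulated incomes, not the paper’s code or the LIS data):

```python
# Rough illustration of "redistribution in Gini points" (a sketch, not the paper's code).
# Redistribution is taken here as the Gini of pre-transfer income minus the Gini of
# post-transfer income, expressed in percentage points.
import numpy as np

def gini(x: np.ndarray) -> float:
    """Gini coefficient of a non-negative income vector."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard formula based on cumulative income shares of the sorted vector.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

rng = np.random.default_rng(0)
market_income = rng.lognormal(mean=10, sigma=0.8, size=10_000)  # hypothetical survey data
transfers = np.full_like(market_income, 2_000.0)                # a flat (hence pro-poor) transfer
post_transfer = market_income + transfers

redistribution = 100 * (gini(market_income) - gini(post_transfer))
print(f"Redistribution: {redistribution:.1f} Gini points")
```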

Conclusions

In countries with pro-poor social transfers, extending coverage involves reaching up the income ladder to include richer constituencies, which narrows the gap between net winners and net losers. This reduces the salience of distributive conflicts and eases welfare state expansion, leading to higher redistribution. However, as transfers become more pro-rich the margin to leverage the progressivity-size trade-off narrows, which helps explain the inability of current welfare states to increase redistribution as inequality rises.

In countries with pro-rich social transfers, extending coverage involves reaching down the income ladder to include the poor. Launching programs for the poor requires rising taxes or cutting the benefits of privileged insiders, which creates a clearly delineated gap between net winners and net losers. This increases the salience of distributive conflicts, leading to smaller, less egalitarian welfare states.

In sum, social policy design is very persistent because it crucially shapes distributive conflicts. Advanced welfare states have increased redistribution by getting bigger and less progressive (less pro-poor). This fits with historical evidence that advanced welfare states grew from minimalist cores, but it also describes contemporary policy change. Following this same reasoning, elitist welfare states in developing regions will find it difficult to become more egalitarian. Figure 5 shows the persistence of distributive outcomes across welfare regimes.

References

Korpi, W. and Palme, J. (1998). The paradox of redistribution and strategies of equality: Welfare state institutions, inequality, and poverty in the western countries. American Sociological Review, 63(5):661–687.

Piketty, T. (2020). Capital and Ideology. Harvard University Press.


Xabier García Fuente

Twitter: @xabigarf

Coordinating Decline: Governmental Regulation of Disappearing Horse Markets in Britain, 1873-1957 (NR Online Session 5)

By Luise Elsaesser (European University Institute)

This research is due to be presented in the fifth New Researcher Online Session: ‘Government & Colonization’.

 

Milkman and horse-drawn cart – Alfred Denny, Victoria Dairy, Kew Gardens, Est 1900. Available at Wikimedia Commons.

The enormous horse-drawn society of 1900 was new. Unprecedented quantities of goods and people could be moved by trains and ships, but only between terminal points, so horses were required by everybody, and for everything, to reach a final destination. Yet at the very moment the need for horsepower peaked, new technologies had already started to make the working horse redundant in everyday economic life. The disappearance of the horse was a rapid process in urban areas, whereas the horse remained an economic necessity for much longer in other uses such as agriculture. The horse’s decline left deep traces, causing fundamental changes in the soundscapes, landscapes and smells of the human environment and economic life.


Against prevailing narratives of a laissez-faire approach, the British government actively monitored and shaped this major shift in energy source. Exploring the political economy of a disappearing commercial good reveals the regulatory practices and the ways in which the British government interacted with the producers and consumers in these markets. This demonstrates that government regulation is inseparable from the modern British economy, and that over the long run intervention followed careful assessment of costs and benefits as well as self-interest.

Public pressure groups such as the RSPCA, as well as social and business elites, were often strongly connected to government circles and embraced the opportunity to influence policy outcomes. The Royal Commission on Horse Breeding, formed in December 1887, is telling because it shows where the policy-making power that passed through Westminster originated. The commissioners were without exception holders of hereditary titles, members of the gentry, politicians or businessmen, and all were avid horsemen and breeders. To name but two, Henry Chaplin, the President of the Board of Agriculture, came from a family of Tory country gentlemen and was a dedicated rider, while Mr John Gilmour, whose merchant father grew rich in the Empire, owned a Clydesdale stud of national reputation. Their self-interest and devotion to horse breeding seem obvious, especially in the context of the agricultural depression, when livestock proved more profitable than the cultivation of grain.

Although the economic agents of the horse markets often moved within government circles, they still faced regulation. A legal framework was developed, for example, which fashioned the scope for manoeuvre in the import and export markets for horses. The most prominent case during the transition from horse to motor power was the emergence of an export market in horses for slaughter. From the 1930s, British charitable organisations such as the RSPCA, the Women’s Guild for Empire, and the National Federation of Women’s Institutes pressured the government to prevent the export of horses for slaughter on grounds of “national honour”. However, though the government never publicly admitted it, the meat market was endorsed as a way to manage the declining utility of horsepower. As new technologies became cheaper, horsemeat markets were welcomed by large businesses such as railway companies as a way to dispose of their working horses without making a financial loss. Hence, the markets for working horses were not merely shaped by the economic use of, and demand for, their muscle power, but were also linked to government regulation.

Ultimately, an analysis of governmental coordination can be linked to wider socio-cultural and economic systems of consumption: policy outcomes influenced the use of the horse, but coordination was likewise monitored by the agents of the working-horse markets.


Luise Elsaesser

luise.elsaesser@eui.eu

Twitter: @Luise_Elsaesser

Did the Ottomans Import the Low Wages of the British in the 19th Century? An Examination of Ottoman Textile Factories (NR Online Session 4)

By Tamer Güven (Istanbul University)

This research is due to be presented in the fourth New Researcher Online Session: ‘Equality & Wages’.

 

The Istanbul Grand Bazaar in the 1890s. Available at Wikimedia Commons.

Compared with the UK and Western Europe, there are only a limited number of studies on wages and standards of living in the Ottoman empire. The only source that can provide regular industrial wage data for the Ottoman empire is the state factories established in the 1840s to meet the needs of the state’s growing and centralized military and bureaucracy. The limitations of the data are explained by the relative absence of industrial wage series in the monographs on Ottoman industrial institutions, and by the fact that manufacturing mainly comprised small producers who did not keep records. This paucity of data may change as the Ottoman Archives become fully catalogued. The main aim of this study is to construct a wage series using the wage ledgers of workers in state factories. I therefore examined four prominent textile-related factories: the Hereke Imperial Factory, the Veliefendi Calico Factory, the Bursa Silk Factory, and the İzmit Cloth Factory. Only the Hereke Factory offers a 52-year wage series, covering 1848-1899. The data for the Veliefendi Factory start in 1848 but are disrupted in 1876, when the factory was transferred to military rule; the same applies to the İzmit Factory, which was established in 1844 but transferred to military rule in 1849.

I created separate daily and monthly wage series to determine how many days workers worked per month and how this changed during the nineteenth century. In this way, not only workers’ potential wages but also their observed monthly wages can be analysed. Some groups of workers were excluded from the dataset for a variety of reasons. Civilian officials and masters working in the factories were excluded because of their relatively high wages; conversely, carpet weavers, mostly young girls and children, were excluded because of their relatively low wages. I preferred to use median values for the monthly wage series in order to include as many workers as possible in the analysis. As with much historical data, the wage series created in this study are incomplete. To overcome this, I complement the Hereke Factory wage series with data from the Veliefendi and Bursa Factories.

My results indicate that daily real wages increased by only 0.03 per cent per annum between 1852 and 1899. The real monthly wages of Hereke Factory workers rose by 0.11 per cent per annum between 1848 and 1899, but by 0.24 per cent per annum using 1852 as a starting point. Monthly wages thus increased faster than daily wages, but at the cost of more workdays: average workdays increased by 0.44 per cent per annum over the period. Although the Veliefendi Factory provides a narrower wage series, from 1848 to 1876, it supports this pattern. Limited but prominent examinations of Ottoman wage history claim that construction, urban, and agricultural workers’ wages increased over the same period, albeit at different rates. How can we explain the increase in wages in other sectors when the wages of textile workers were stagnant?
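The per-annum figures above are annualised growth rates. As an illustration of the arithmetic (my own consistency check, not a statement about how the underlying series were estimated), a compound annual rate g between a start year t0 and an end year T satisfies

g = (w_T / w_t0)^(1/(T − t0)) − 1,

so a rate of 0.03 per cent per annum sustained over the 47 years from 1852 to 1899 corresponds to a cumulative real-wage gain of only about (1.0003)^47 − 1 ≈ 1.4 per cent.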

Many observations of Ottoman cities have shown that industrial production, particularly in the textile sector, shifted from urban to rural areas, or from craft workshops to houses, to compete with cheap British yarn and fabric in the 19th century. According to my calculations, Ottoman imports of cotton yarn increased by a factor of 25 to 50 over the 19th century. This trend was most pronounced after the 1838 Anglo-Turkish Convention, when cheap English products flowed into the Ottoman Empire and Ottoman producers sought cheaper labour. Labour-saving machines both facilitated the export of British yarns and fabrics to, and lowered wages in, the Ottoman empire. Although the wage series for the Hereke factory, and to a more limited extent the Veliefendi factory, provide evidence in support of this hypothesis, numerous studies on Ottoman industry in the 19th century support the same argument, though without a wage series.

Women in the German Economy: A Long Way to Gender Equality (NR Session 4)

By Theresa Neef (Freie Universität Berlin)

This research is due to be presented in the fourth New Researcher Online Session: ‘Equality & Wages’.

 

Scanned image of a mid-1930s postcard depicting Unter den Linden in Berlin. Available at Wikimedia Commons.

Female employees in the European Union (EU-27) earn, on average, about 85 per cent of the wages received by male employees. While some countries such as France and Sweden exhibit closer pay equality, women in Germany face a larger gap and receive just 79 per cent of the average male wage, according to 2018 figures published by Eurostat (2020). How did this state of affairs emerge?

To understand contemporary pay inequality, it is vital to take a long-run perspective and look at the development of the gender pay ratio in Germany since 1913. An in-depth analysis of historical inquiry reports and publications by the statistical offices reveals that in 1913 women in Germany earned around 44 per cent of male wages. Although World War I led to a temporary increase in women’s pay in blue-collar occupations, this trend was soon reversed and the gender-segregated labour market was re-established following demobilization.

The interwar period brought about the most dynamic leap in gender relations during the 20th century. While in 1920 German women earned on average 45% of a man’s average pay, by 1937 this share had increased to 61%, a consequence of women’s occupational transition and the more progressive institutional framework adopted during the Weimar Republic.

With the growing number of white-collar jobs, young females had job opportunities that were better paid and more socially accepted than the work in low-paid domestic services or agriculture. That was an opportunity they took: from 1910 to 1960, women increased their share in those fast-growing occupations from 18% to 45%, while their share decreased in agricultural work. This trend most likely contributed to women’s wage gains relative to men.

During the Weimar Republic, a new constitution and a more progressive institutional framework fostered further equalization of earnings, especially in the white-collar occupations. In 1919, the Weimar constitution introduced compulsory schooling for all youths under 18 years irrespective of gender. For the first time, this law provided girls with the same chances to receive vocational education and an apprenticeship as their male peers. All youths who worked in commercial and industrial firms were obliged to attend vocational commercial school at least once a week for two to three years. Before the introduction of this law, employers hardly invested in girls’ apprenticeships because women were seen as transient employees who would leave the labour force upon marriage. This non-gendered schooling obligation led to a dynamic convergence of vocational training between boys and girls.

In the post-1945 period, the gender pay ratio in Germany rose from 65 per cent in 1960 to 74 per cent twenty years later. Sweden, in contrast, took the lead among European countries: by 1980 its gender pay gap was just 14 percentage points. Since the 1980s, however, the gender pay gap has stagnated in many European countries.

 

Figure 1: Gender pay ratio, Germany, Sweden, and the USA. Swedish and German series are based on mean earnings; the US time series is based on median earnings unless indicated otherwise. The German time series covers the German Reich, the Federal Republic of Germany and reunified Germany (hollow items).

 

All in all, the long-run perspective shows that since the beginning of the 20th century Germany has persistently exhibited lower gender pay equality than other European economies, such as Sweden, despite the important improvement observed in the interwar period. In the postwar period, the gap between Germany and Sweden widened further due to slower progress in the young Federal Republic. These results suggest that differences in gender pay inequality across countries can be traced back to historical roots that go well beyond the developments of the past forty years.

The Growth Pattern of British Children, 1850-1975

By Pei Gao (NYU Shanghai) & Eric B. Schneider (LSE)

The full article from this blog is forthcoming in the Economic History Review and is currently available on Early View.

 

HMS Indefatigable with HMS Diadem (1898) in the Gulf of St. Lawrence 1901. Available at Wikimedia Commons.

Since the mid-nineteenth century, the average height of adult British men has increased by 11 centimetres. This increase in final height reflects improvements in living standards and health, and provides insights into the growth pattern of children, which has been comparatively neglected. Child growth is very sensitive to economic and social conditions: children with limited nutrition, or who suffer from chronic disease, grow more slowly than healthy children. Thus, to achieve such a large increase in adult height, health conditions for children must have improved dramatically since the mid-nineteenth century.

Our paper seeks to understand how child growth changed over time as adult height was increasing. Child growth follows the typical pattern shown in Figure 1. The graph on the left shows the height-by-age curve for modern healthy children, and the graph on the right shows the change in height at each age (height velocity). We look at three dimensions of children’s growth pattern: the final adult height that children achieve, which is what historians have predominantly focused on to date; the timing (age) at which growth velocity peaks during puberty; and, finally, the overall speed of maturation, which affects the velocity of growth across all ages and the length of the growing years.

 

Figure 1. Weights and heights for boys who trained on HMS Indefatigable, 1860s-1990s.

Source: as per article

 

To understand how growth changed over time, we collected information about 11,548 boys who were admitted to the training ship Indefatigable from the 1860s to the 1990s (Figure 2). This ship was located on the River Mersey near Liverpool for much of its history and it trained boys for careers in the merchant marine and navy. Crucially, the administrators recorded the boys’ heights and weights at admission and discharge, allowing us to calculate growth velocities for each individual.
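With a height recorded at admission and again at discharge, each boy’s growth velocity is simply the change in height divided by the time elapsed between the two measurements. A minimal sketch of that calculation is below; the column names and the two example records are hypothetical, not drawn from the Indefatigable registers.

```python
# Minimal sketch of computing individual height velocities (cm per year) from
# admission and discharge records. Column names and values are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "boy_id": [1, 2],
    "age_admission": [14.2, 13.8],        # decimal years
    "age_discharge": [15.9, 15.1],
    "height_admission_cm": [152.0, 149.5],
    "height_discharge_cm": [163.5, 158.0],
})

records["height_velocity_cm_per_yr"] = (
    (records["height_discharge_cm"] - records["height_admission_cm"])
    / (records["age_discharge"] - records["age_admission"])
)
print(records[["boy_id", "height_velocity_cm_per_yr"]])
```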

 

Figure 2. HMS Indefatigable

Source: By permission, the Indefatigable Old Boys Society

 

We trace the boys’ heights over time (grouping them by birth decade) and find that they grew most rapidly during the interwar period. The most novel finding, moreover, is that boys born in the nineteenth century show little evidence of the strong pubertal growth spurt experienced by healthy boys today: their growth velocity was relatively flat across puberty. Starting with the 1910 birth decade, however, boys began experiencing more rapid pubertal growth, similar to the right-hand graph in Figure 1. The appearance of rapid pubertal growth is a product of two factors: first, an increase in the speed of maturation, which meant that boys grew more rapidly during puberty than before; and second, a decrease in the variation in the timing of the pubertal growth spurt, which meant that boys experienced their pubertal growth at more similar ages.

 

Figure 3. Adjusted height velocity for boys who trained on HMS Indefatigable.

Source: as per article

 

This sudden change in the growth pattern of children is a new finding that is not predicted by the historical or medical literature. In the paper, we show that this change cannot be explained by improvements in living standards on the ship and that it is robust to a number of potential alternative explanations. We argue that reductions in disease exposure and illness were likely the biggest contributing factor. Infant mortality rates, an indicator of chronic illness in childhood, declined only after 1900 in England and Wales, so a decline in illness in childhood could have mattered. In addition, although general levels of nutrition were more than adequate by the turn of the twentieth century, the introduction of free school meals and the milk-in-schools programme in the early twentieth century likely also helped ensure that children had access to the key proteins and nutrients necessary for growth.

Our findings matter for two reasons. First, they help complete the fragmented picture in the existing historical literature on how children’s growth changed over time. Second, they highlight the importance of the 1910s and the interwar period as a turning point in child growth. Existing research on adult heights has already shown that the interwar period was a period of rapid growth for children, but our results further explain how and why child growth accelerated in that period.

 


Pei Gao

p.gao@nyu.edu

 

Eric B. Schneider

e.b.schneider@lse.ac.uk

Twitter: @ericbschneider