How Indian cottons steered British industrialisation

By Alka Raman (LSE)

This blog is part of a series of New Researcher blogs.

“Methods of Conveying Cotton in India to the Ports of Shipment,” from the Illustrated London News, 1861. Available at Wikimedia Commons.

Technological advancements within the British cotton industry are widely acknowledged as marking the beginning of industrialisation in eighteenth- and nineteenth-century Britain. My research reveals that these advances were driven by a desire to match the quality of handmade cotton textiles from India.

I highlight how the introduction of Indian printed cottons into British markets created a frenzy of demand for these exotic goods. This led to immediate imitations by British textile manufacturers, keen to gain footholds in the domestic and world markets where Indian cottons were much desired.

The process of imitation soon revealed that British spinners could not spin yarn fine enough for the fine cotton cloth that quality printing required. Nor could British printers print cloth in the multitude of colourfast colours that Indian artisans had mastered over centuries.

These two key limitations in British textile manufacturing spurred demand-induced technological innovations to match the quality of Indian handmade printed cottons.

To test this, I chart the quality of English cotton textiles from 1740 to 1820 and compare them with Indian cottons of the same period. Threads per inch is used as the measure of quality, and digital microscopy is deployed to establish yarn composition, determining whether the textiles are all-cotton or mixed linen-cotton.

My findings show that the earliest British ‘cotton’ textiles were in fact mixed linen-cottons, not all-cottons. Technological evolution in the British cotton industry was thus a pursuit first of coarse all-cotton cloth, and then of fine all-cotton cloth such as muslin.

The evidence shows that British cotton cloth quality improved by 60% between 1747 and 1782 during the decades of the famous inventions of James Hargreaves’ spinning jenny, Richard Arkwright’s waterframe and Samuel Crompton’s mule. It further improved by 24% between 1782 and 1816. Overall, cloth quality improved by a staggering 99% between 1747 and 1816.
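These gains compound multiplicatively rather than additively; a quick arithmetic check on the rounded figures above:

```latex
\underbrace{1.60}_{1747\text{--}1782} \times \underbrace{1.24}_{1782\text{--}1816} \approx 1.98
```

That is, threads per inch roughly doubled, consistent with the reported 99% overall improvement.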

My research challenges our current understanding of industrialisation as a British and West European phenomenon, commonly explained using rationales such as high wages, availability of local energy sources or access to New World resources. Instead, it reveals that learning from material goods and knowledge brought into Britain and Europe from the East directly and substantially affected the foundations of the modern world as we know it.

The results also pose a more fundamental question: how does technological change take place? Based on my findings, learning from competitor products – especially imitation of novel goods using indigenous processes – may be identified as one crucial pathway for the creation of new ideas that shape technological change.

Industrial, regional, and gender divides in British unemployment between the wars

By Meredith M. Paker (Nuffield College, Oxford)

This blog is part of a series of New Researcher blogs.

A view from Victoria Tower, depicting London on both sides of the Thames, 1930. Available at Wikimedia Commons.

‘Sometimes I feel that unemployment is too big a problem for people to deal with … It makes things no better, but worse, to know that your neighbours are as badly off as yourself, because it shows to what an extent the evil of unemployment has grown. And yet no one does anything about it’.

A skilled millwright, Memoirs of the Unemployed, 1934.

At the end of the First World War, an inflationary boom collapsed into a global recession, and the unemployment rate in Britain climbed to over 20 per cent. While the unemployment rate in other countries recovered during the 1920s, in Britain it remained near 10 per cent for the entire decade before the Great Depression. This persistently high unemployment was then intensified by the early 1930s slump, leading to an additional two million British workers becoming unemployed.

What caused this prolonged employment downturn in Britain during the 1920s and early 1930s? Using newly digitized data and econometrics, my project provides new evidence that a structural transformation of the economy away from export-oriented heavy manufacturing industries toward light manufacturing and service industries contributed to the employment downturn.

At a time when few countries collected any reliable national statistics at all, the Ministry of Labour published unemployment statistics for men and women in 100 industries in every month of the interwar period. These statistics derived from Britain’s unemployment benefit program established in 1911, the first such program in the world. While many researchers have used portions of this remarkable source by manually entering the data into a computer, I improved on this technique by developing a process built around an optical-character-recognition iPhone app. Digitizing all the printed tables in the Ministry of Labour’s Gazette from 1923 through 1936 enables econometric analysis of four times as many industries as in previous research and permits separate analyses for male and female workers (Figure 1).

Figure 1: Data digitization. Left-hand side is a sample printed table in the Ministry of Labour Gazette. Right-hand side is the cleaned digitized table in Excel.
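For readers curious about the mechanics, here is a minimal sketch of this kind of table digitization. It is not the author’s iPhone app: it assumes a scanned page image, the open-source Tesseract OCR engine (via pytesseract), and a hypothetical file name and column layout.

```python
import re
import pandas as pd
import pytesseract
from PIL import Image

def digitize_gazette_page(image_path: str) -> pd.DataFrame:
    """OCR one printed Gazette table and parse rows into a tidy DataFrame."""
    text = pytesseract.image_to_string(Image.open(image_path))
    rows = []
    for line in text.splitlines():
        # Expect rows like "Coal Mining   1,234,567   9.8"
        # (industry name, insured workers, unemployment rate).
        m = re.match(r"^([A-Za-z][A-Za-z ,&'()-]*?)\s{2,}([\d,]+)\s+([\d.]+)$",
                     line.strip())
        if m:
            industry, insured, rate = m.groups()
            rows.append({"industry": industry.strip(),
                         "insured_workers": int(insured.replace(",", "")),
                         "unemployment_rate": float(rate)})
    return pd.DataFrame(rows)

# Hypothetical usage: one page per month, appended into a 1923-1936 panel.
# panel = pd.concat(digitize_gazette_page(f) for f in sorted(page_files))
```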

These new data and analyses reveal four key findings about interwar unemployment. First, unemployment was different for men and women. The unemployment rate for men was generally higher than for women, averaging 16.1 per cent and 10.3 per cent, respectively. Unemployment increased faster for women at the onset of the Great Depression but also recovered more quickly (Figure 2). One reason for these distinct experiences is that men and women generally worked in different industries. Many unemployed men had previously worked in coal mining, building, iron and steel founding, and shipbuilding, while many unemployed women came from the cotton-textile industry, retail, hotel and club services, the woolen and worsted industry, and tailoring.

Figure 2: Male and female monthly unemployment rates. Source: Author’s digitization of Ministry of Labour Gazettes.

Second, regional differences in unemployment rates in the interwar period were not due only to the different industries located in each region. There were large regional differences in unemployment above and beyond the effects of the composition of industries in a region.

Third, structural change played an important role in interwar unemployment. A series of regression models indicate that, ceteris paribus, industries that expanded to meet production needs during World War I had higher unemployment rates in the 1920s. Additionally, industries that exported much of their production also faced more unemployment. An important component of the national unemployment problem was thus the adjustments that some industries had to make due to the global trade disturbances following World War I.
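The flavour of such a regression can be sketched as follows; the variable names and toy numbers are hypothetical, not the project’s actual specification or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy industry-level data standing in for the digitized Gazette panel.
df = pd.DataFrame({
    "unemp_1920s":   [18.2, 9.5, 22.1, 7.3, 14.0, 11.8],   # avg unemployment rate
    "wwi_expansion": [0.40, 0.05, 0.55, 0.02, 0.30, 0.12],  # WWI workforce growth
    "export_share":  [0.60, 0.10, 0.70, 0.05, 0.45, 0.20],  # share of output exported
})

# Industries that expanded during WWI, or exported heavily, are predicted
# to show higher post-war unemployment, holding the other factor constant.
model = smf.ols("unemp_1920s ~ wwi_expansion + export_share", data=df).fit()
print(model.params)
```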

Finally, the Great Depression accelerated this structural change. In almost every sector, more adjustment occurred in the early 1930s than in the 1920s. Workers were drawn from declining industries into growing ones at a particularly fast rate during the Great Depression.

Taken together, these results suggest that there were significant industrial, regional, and gender divides in interwar unemployment that are obscured by national unemployment trends. The employment downturn between the wars was thus intricately linked with the larger structural transformation of the British economy.


Meredith M. Paker

meredith.paker@nuffield.ox.ac.uk

Twitter: @mmpaker

Seeing like the Chinese imperial state: how many government employees did the empire need?

By Ziang Liu (LSE)

This blog is part of a series of New Researcher blogs.

The Qianlong Emperor’s Southern Inspection Tour, Scroll Six Entering Suzhou and the Grand Canal. Available at Wikimedia Commons

How many government employees do we need? This has always been a question for both politicians and the public, and we often see debates over whether governments should expand or cut their workforces, for many different reasons.

This was also a question for the Chinese imperial government centuries ago. Because the Chinese state governed a vast territory of great cultural and socio-economic diversity, the size of government bore not only on the empire’s fiscal challenges but also on the effectiveness of its governance. My research finds that while a large-scale reduction in government expenditure may have the short-term benefit of improving fiscal conditions, in the long term a lack of investment in administration may harm the state’s ability to govern.

Using the Chinese case, we are interested in what the imperial central government counted as a ‘sufficient’ number of employees. How did it make the calculation? After all, a government has to know the numbers before it can take any further action.

Before the late sixteenth century, the Chinese central government had no clear account of how much was spent on its local governments. It was only then, when the marketisation of China’s economy enabled the state to calculate the costs of its spending in silver currency, that the imperial central government began to ‘see’ the previously unknown amount of local spending in a unified and legible form.

Consequently, my research finds that between the sixteenth and eighteenth centuries the Chinese imperial central state significantly improved its fiscal circumstances at the expense of local finance. During roughly a century of fiscal pressure between the late sixteenth and late seventeenth centuries (see Figure A), the central government continuously expanded its income and cut local spending on government employees.

Eventually, at the turn of the eighteenth century, the central treasury’s annual income was roughly four to five times the late sixteenth-century level (see Figure B), and its accumulated fiscal surplus was generally one to two times its annual budgetary income (see Figure C).

But what the central government left to local government, in both manpower and funding, seems to have been too little to govern the empire. My research finds that whether measured by the total number of government employees (see Figure D) or by employees per thousand population (see Figure E), the size of China’s local states shrank dramatically from the late sixteenth century.

In the sample regions, by the eighteenth century every one to two government employees served a thousand local residents (Figure E). Meanwhile, records also show that salary payments for local government employees remained completely unchanged from the late seventeenth century.

My research therefore suggests that when the Chinese central state intervened in local finance, its intention was more to constrain than to rationalise it. Even in the eighteenth century, when the empire’s fiscal circumstances were unprecedentedly good, the central state did not consider increasing investment in local administration.

Given China’s constant population growth, from 100 million in the early seventeenth century to more than 300 million in the early nineteenth century, it is hard to believe that local governments of this size could govern effectively. Worse, because of the reductions in local finance, from the late seventeenth century China’s local states devoted more of their personnel to state logistics and information networks than to local public services such as education and security.

The Paradox of Redistribution in time: Social spending in 54 countries, 1967-2018

By Xabier García Fuente (Universitat de Barcelona)

This research is due to be presented in the sixth New Researcher Online Session: ‘Spending & Networks’.

Money of various currencies. Available at Wikimedia Commons.

Why are some countries more redistributive than others? This question is central to current welfare state politics, especially in view of rising levels of inequality and the ensuing social tensions. Since coming to power in 2019, Brazil’s far-right government has restricted access to Bolsa Familia—a conditional cash-transfer program—despite its success at reducing poverty at a very low cost (less than 0.5% of national GDP). In richer countries, the social-democratic project is said to be obsolete, as left-wing parties forsake egalitarian policies to cater to economic winners (Piketty, 2020).

How can we make sense of this sort of distributive conflict? Are there common patterns in rich and middle-income countries? My research suggests that welfare state institutions show great inertia, so we need to observe the origins of social policies to explain current redistributive outcomes. Initial policy positions (how pro-poor or pro-rich social transfers were) determine which groups emerge as net winners or net losers when social expenditure increases, which crucially affects the viability and direction of policy change.

Korpi and Palme (1998) famously suggested the existence of a Paradox of Redistribution: ‘the more we target benefits at the poor … the less likely we are to reduce poverty and inequality’. In their framework, progressive programs may be more redistributive per euro spent, but they generate zero-sum conflicts between the poor and the middle-class and obstruct the formation of redistributive political coalitions. In contrast, universal programs align the preferences of the poor and the middle-class and lead to bigger, more egalitarian welfare states. In sum, redistribution increases as transfers become bigger and less pro-poor.

Using survey micro-data provided by the Luxembourg Income Study (LIS), my research updates Korpi and Palme’s (1998) study and addresses two gaps. First, I extend the sample to 54 rich and middle-income countries, including elitist welfare states in Latin America and other middle-income countries. As Figure 1 shows, extending the sample clearly refutes the Paradox: redistribution is higher in more pro-poor countries.

Second, in line with the dynamic political arguments underlying the Paradox, I explore the evolution of social transfers and redistribution within countries over time. Overall, countries have increased redistribution by making their transfers less pro-poor, which matches the predictions of the Paradox (see Figure 2). The relationship is especially strong in Ireland, Canada, the United Kingdom and Norway. Starting from highly progressive (pro-poor) policy positions, these countries improved redistribution by increasing expenditure and reducing their bias towards the poor.

Latin American countries are a notable exception to this pattern. They are markedly pro-rich and, contrary to the cases above, have improved redistribution only modestly, by becoming more pro-poor (see Figure 3).

What does it mean that redistribution increases as transfers become more, or less, pro-poor? The United Kingdom and Mexico provide good examples (see Figure 4). In the United Kingdom, redistribution through social transfers increased from 7 Gini points in 1974 to 19 Gini points in 2016; over the same period, the share of total social transfers received by the poorest 20% of the population decreased from 35% to 18%. In Mexico, the share of total social transfers obtained by the poorest 20% went from 2% in 1984 to 10% in 2016, while the share obtained by the richest 20% decreased from 66% to 51%. Yet, despite these advances, redistribution through social transfers in Mexico remains very low (2.5 Gini points in 2016, up from 0.1 Gini points in 1984).
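Here, ‘Gini points’ are percentage points of the Gini index, and redistribution is the Gini of incomes before transfers minus the Gini after transfers. A minimal sketch of that calculation, on toy incomes rather than LIS microdata:

```python
import numpy as np

def gini(x) -> float:
    """Gini index of a 1-D array of incomes (0 = equality, 1 = one person has all)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard formula: G = (n + 1 - 2 * sum(cum) / cum[-1]) / n
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Toy quintile incomes before transfers, plus a flat per-capita transfer.
pre = np.array([5.0, 10.0, 20.0, 40.0, 100.0])
post = pre + 15.0

# Redistribution in Gini points (here roughly 15).
print(round((gini(pre) - gini(post)) * 100, 1))
```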

Conclusions

In countries with pro-poor social transfers, extending coverage involves reaching up the income ladder to include richer constituencies, which narrows the gap between net winners and net losers. This reduces the salience of distributive conflicts and eases welfare state expansion, leading to higher redistribution. However, as transfers become more pro-rich the margin to leverage the progressivity-size trade-off narrows, which helps explain the inability of current welfare states to increase redistribution as inequality rises.

In countries with pro-rich social transfers, extending coverage involves reaching down the income ladder to include the poor. Launching programs for the poor requires rising taxes or cutting the benefits of privileged insiders, which creates a clearly delineated gap between net winners and net losers. This increases the salience of distributive conflicts, leading to smaller, less egalitarian welfare states.

In sum, social policy design is very persistent because it crucially shapes distributive conflicts. Advanced welfare states have increased redistribution by getting bigger and less progressive (less pro-poor). This fits with historical evidence that advanced welfare states grew from minimalist cores, but it also describes contemporary policy change. By the same reasoning, elitist welfare states in developing regions will find it difficult to become more egalitarian. Figure 5 shows the persistence of distributive outcomes across welfare regimes.

References

Korpi, W. and Palme, J. (1998). The paradox of redistribution and strategies of equality: Welfare state institutions, inequality, and poverty in the western countries. American Sociological Review, 63(5):661–687.

Piketty, T. (2020). Capital and Ideology. Harvard University Press.


Xabier García Fuente

Twitter: @xabigarf

Coordinating Decline: Governmental Regulation of Disappearing Horse Markets in Britain, 1873-1957 (NR Online Session 5)

By Luise Elsaesser (European University Institute)

This research is due to be presented in the fifth New Researcher Online Session: ‘Government & Colonization’.

 

Milkman and horse-drawn cart – Alfred Denny, Victoria Dairy, Kew Gardens, Est 1900. Available at Wikimedia Commons.

The enormous horse-drawn society of 1900 was new. Trains and ships could move unprecedented quantities of goods and people, but only between terminal points; horses were needed by everybody, and for everything, to reach a final destination. Yet at the very moment the need for horsepower peaked, new technologies had already started to make the working horse redundant in everyday economic life. The disappearance of the horse was rapid in urban areas, whereas the horse remained an economic necessity much longer in other uses, such as agriculture. The horse’s decline left deep traces, causing fundamental changes in the soundscapes, landscapes, and smells of the human environment and economic life.


Against prevailing narratives of a laissez-faire approach, the British government actively monitored and shaped this major shift in energy use. Exploring the political economy of a disappearing commercial good reveals the regulatory practices through which the British government interacted with the producers and consumers of these markets. It demonstrates that governmental regulation is inseparable from the modern British economy and that, over the long run, government intervention followed careful assessment of costs and benefits as well as self-interest.

Public pressure groups such as the RSPCA, as well as social and business elites, were often strongly connected to government circles and embraced the opportunity to influence policy outcomes. The Royal Commission on Horse Breeding, formed in December 1887, is telling because it shows where policy-making power that passed through Westminster originated. The commissioners were without exception holders of hereditary titles, members of the gentry, politicians, or businessmen, and all were avid horsemen and breeders. To name but two: Henry Chaplin, the President of the Board of Agriculture, came from a family of Tory country gentlemen and was a dedicated rider, and John Gilmour, whose merchant father grew rich in the Empire, owned a Clydesdale stud of national reputation. Their self-interest and devotion to horse breeding seem obvious, especially in the context of the agricultural depression, when livestock proved more profitable than the cultivation of grain.

Although economic agents of the horse markets often moved within government circles, they still faced regulation. For example, a legal framework was developed which shaped the room for manoeuvre of import and export markets for horses. The most prominent case during the transition from horse to motor power was the emergence of an export market in horses for slaughter. From the 1930s, British charitable organisations such as the RSPCA, the Women’s Guild of Empire, and the National Federation of Women’s Institutes pressured the government to prevent the export of horses for slaughter on grounds of “national honour”. However, though the government never publicly admitted it, the meat market was endorsed as a means of managing the declining utility of horsepower. As motor technologies became cheaper, horsemeat markets were welcomed by large businesses such as railway companies as a way to dispose of their working horses without making a financial loss. Hence, the markets for working horses were linked not merely to the economic use of, and demand for, their muscle power but also to government regulation.

Ultimately, an analysis of governmental coordination can be linked to wider socio-cultural and economic systems of consumption: policy outcomes influenced the use of the horse, but coordination was in turn monitored by the agents of the working-horse markets.


Luise Elsaesser

luise.elsaesser@eui.eu

Twitter: @Luise_Elsaesser

Did the Ottomans Import the Low Wages of the British in the 19th Century? An Examination of Ottoman Textile Factories (NR Online Session 4)

By Tamer Güven (Istanbul University)

This research is due to be presented in the fourth New Researcher Online Session: ‘Equality & Wages’.

 

The Istanbul Grand Bazaar in the 1890s. Available at Wikimedia Commons.

Compared to the UK and Western Europe, there are few studies of wages and standards of living in the Ottoman empire. For the Ottoman empire, the only sources that can provide regular industrial wage data are the Ottoman state factories established in the 1840s to meet the needs of the state’s growing and centralized military and bureaucracy. This paucity of data reflects both the relative absence of industrial wage series in the monographs on Ottoman industrial institutions and the fact that manufacturing mainly comprised small producers who did not keep records; it may change as the Ottoman Archives become fully catalogued. The main aim of this study is to construct a wage series from the wage ledgers of those working in state factories. I examined four prominent textile-related factories: the Hereke Imperial Factory, the Veliefendi Calico Factory, the Bursa Silk Factory, and the İzmit Cloth Factory. Only the Hereke Factory offers an unbroken 52-year wage series, for 1848-1899. The data for the Veliefendi Factory start in 1848 but are disrupted in 1876, when the factory was transferred to military rule; the same applies to the İzmit Factory, which was established in 1844 but transferred to military rule in 1849.

I created separate daily and monthly wage series to determine how many days workers worked per month and how this changed over the nineteenth century. Thus, not only workers’ potential wages but also their observed monthly wages can be analysed. Some groups of workers were excluded from the dataset for a variety of reasons. Civilian officials and masters working in the factories were excluded because of their relatively high wages; conversely, carpet weavers, mostly young girls and children, were excluded because of their relatively low wages. I used median values for the monthly wage series to include as many workers as possible in the analysis. As with much historical data, the wage series created in this study are incomplete. To overcome this, I complement the Hereke Factory wage series with data from the Veliefendi and Bursa Factories.
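The per annum figures reported below can be read as compound annual growth rates. Assuming a simple endpoint-to-endpoint calculation (a simplification of whatever trend estimate underlies the series), a real wage moving from w_0 to w_T over T years grows at:

```latex
g = \left( \frac{w_T}{w_0} \right)^{1/T} - 1
```

So, for example, a monthly real wage roughly 6 per cent higher in 1899 than in 1848 corresponds to about 0.11 per cent per annum over those 51 years.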

My results indicate that daily real wages increased by only 0.03 per cent per annum between 1852 and 1899. The real monthly wages of Hereke Factory workers, however, rose by 0.11 per cent per annum between 1848 and 1899, and by 0.24 per cent per annum using 1852 as the starting point. Monthly wages increased faster than daily wages, but at the cost of more workdays: average workdays increased by 0.44 per cent per annum over the period. Although the Veliefendi Factory provides a shorter wage series, from 1848 to 1876, it supports this pattern. Limited but prominent examinations of Ottoman wage history claim that construction, urban, and agricultural workers’ wages increased over the same period, albeit at different rates. How can we explain the increase in wages in other sectors when the wages of textile workers were stagnant?

Many observations on Ottoman cities have shown that industrial production, particularly in textiles, shifted from urban to rural areas, or from craft workshops to houses, to compete with cheap British yarn and fabric in the 19th century. According to my calculations, Ottoman imports of cotton yarn increased by a factor of 25 to 50 over the 19th century. This trend was most pronounced after the 1838 Anglo-Turkish Convention, when cheap English products flowed into the Ottoman Empire and Ottoman producers sought cheaper labour. Labour-saving machines both facilitated the export of British yarns and fabrics to, and lowered wages in, the Ottoman empire. The wage series for the Hereke factory, and, to a more limited extent, the Veliefendi factory, provide evidence in support of this hypothesis, and numerous studies of 19th-century Ottoman industry support the same argument, though without a wage series.

Women in the German Economy: A Long Way to Gender Equality (NR Session 4)

By Theresa Neef (Freie Universität Berlin)

This research is due to be presented in the fourth New Researcher Online Session: ‘Equality & Wages’.

 

Scanned image of a mid-1930s postcard depicting Unter den Linden in Berlin. Available at Wikimedia Commons.

Female employees in the European Union (EU-27) earn, on average, about 85 per cent of the wages received by male employees. While some countries, such as France and Sweden, exhibit closer pay equality, women in Germany face a larger gap, receiving just 79 per cent of the average male wage according to 2018 figures published by Eurostat in 2020. How did this state of affairs emerge?

To understand contemporary pay inequality, it is vital to take a long-run perspective and look at the development of the gender pay ratio in Germany since 1913.  An in-depth analysis of historical inquiry reports and publications by the statistical offices reveals that in 1913 women in Germany earned around 44 per cent of male wages. Although  World War I led to a temporary increase in women’s pay in blue-collar occupations, this trend was soon reversed and the gender-segregated labour market was re-established following demobilization.
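Throughout, the gender pay ratio is average female earnings as a share of average male earnings, with the pay gap as its complement; a 44 per cent ratio, as in 1913, is thus a gap of 56 percentage points:

```latex
R_t = \frac{\bar{w}^{F}_{t}}{\bar{w}^{M}_{t}}, \qquad \mathrm{gap}_t = 1 - R_t
```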

The interwar period brought about the most dynamic leap in gender relations during the 20th century. While in 1920 German women earned on average 45% of a man’s average pay, by 1937 this share had increased to 61%, a consequence of women’s occupational transition and the more progressive institutional framework adopted during the Weimar Republic.

With the growing number of white-collar jobs, young women had job opportunities that were better paid and more socially accepted than work in low-paid domestic service or agriculture. They took that opportunity: from 1910 to 1960, women increased their share in those fast-growing occupations from 18% to 45%, while their share in agricultural work decreased. This trend most likely contributed to women’s wage gains relative to men.

During the Weimar Republic, a new constitution and a more progressive institutional framework fostered further equalization of earnings, especially in white-collar occupations. In 1919, the Weimar constitution introduced compulsory schooling for all youths under 18, irrespective of gender. For the first time, this law gave girls the same chances as their male peers to receive vocational education and an apprenticeship. All youths who worked in commercial and industrial firms were obliged to attend a vocational commercial school at least once a week for two to three years. Before this law, employers hardly invested in girls’ apprenticeships because women were seen as transient employees who would leave the labour force upon marriage. This non-gendered schooling obligation led to a dynamic convergence in vocational training between boys and girls.

In the post-1945 period, the gender pay ratio in Germany rose from 65 per cent in 1960 to 74 per cent twenty years later. Sweden, in contrast, took the lead among European countries: by 1980 its gender pay gap was just 14 percentage points. Since the 1980s, however, the gender pay gap has stagnated in many European countries.

 

Figure 1: Gender pay ratio in Germany, Sweden, and the USA. Swedish and German series are based on mean earnings; the US time series is based on median earnings unless indicated otherwise. The German series covers the German Reich, the Federal Republic of Germany, and reunified Germany (hollow items).

 

All in all, the long-run perspective shows that since the beginning of the 20th century Germany has persistently exhibited lower gender pay equality than other European economies such as Sweden, despite the important improvement observed in the interwar period. In the postwar period, the gap between Germany and Sweden widened further owing to slower progress in the young Federal Republic. These results suggest that differences in gender pay inequality across countries can be traced back to historical roots that go beyond developments of the past forty years.

The Growth Pattern of British Children, 1850-1975

By Pei Gao (NYU Shanghai) & Eric B. Schneider (LSE)

The full article from this blog is forthcoming in the Economic History Review and is currently available on Early View.

 

HMS Indefatigable with HMS Diadem (1898) in the Gulf of St. Lawrence 1901. Available at Wikimedia Commons.

Since the mid-nineteenth century, the average height of adult British men has increased by 11 centimetres. This increase in final height reflects improvements in living standards and health, and it provides insights into the growth pattern of children, which has been comparatively neglected. Child growth is very sensitive to economic and social conditions: children with limited nutrition, or who suffer from chronic disease, grow more slowly than healthy children. Thus, to achieve such a large increase in adult height, health conditions must have improved dramatically for children since the mid-nineteenth century.

Our paper seeks to understand how child growth changed over time as adult height was increasing. Child growth follows the typical pattern shown in Figure 1: the graph on the left shows the height-by-age curve for modern healthy children, and the graph on the right shows the change in height at each age (height velocity). We look at three dimensions of the growth pattern of children: the final adult height that children achieve, which is what historians have predominantly focused on to date; the timing (age) at which growth velocity peaks during puberty; and the overall speed of maturation, which affects the velocity of growth at all ages and the length of the growing years.

 

Figure 1.         Weights and Heights for boys who trained on HMS Indefatigable, 1860s-1990s.

Source: as per article

 

To understand how growth changed over time, we collected information on 11,548 boys admitted to the training ship Indefatigable from the 1860s to the 1990s (Figure 2). The ship was moored on the River Mersey near Liverpool for much of its history and trained boys for careers in the merchant marine and navy. Crucially, the administrators recorded the boys’ heights and weights at admission and discharge, allowing us to calculate growth velocities for each individual.
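The velocity calculation itself is straightforward. The sketch below (with hypothetical field names, not the paper’s code) computes an average growth velocity in centimetres per year from paired admission and discharge measurements.

```python
from dataclasses import dataclass

@dataclass
class BoyRecord:
    """One trainee's admission and discharge measurements (hypothetical fields)."""
    age_in: float      # age at admission, in years
    height_in: float   # height at admission, in cm
    age_out: float     # age at discharge, in years
    height_out: float  # height at discharge, in cm

def height_velocity(rec: BoyRecord) -> float:
    """Average growth velocity (cm per year) between admission and discharge."""
    return (rec.height_out - rec.height_in) / (rec.age_out - rec.age_in)

# A boy admitted at 14.0 years (152 cm) and discharged at 15.5 years (161 cm)
# grew at 6.0 cm per year on average.
print(height_velocity(BoyRecord(14.0, 152.0, 15.5, 161.0)))
```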

 

Figure 2.         HMS Indefatigable

Source: By permission, the Indefatigable Old Boys Society

 

We trace the boys’ heights over time, grouping them by birth decade, and find that they grew most rapidly during the interwar period. The most novel finding is that boys born in the nineteenth century show little evidence of a strong pubertal growth spurt, unlike healthy boys today: their growth velocity was relatively flat across puberty. Starting with the 1910 birth decade, however, boys began experiencing more rapid pubertal growth, similar to the right-hand graph in Figure 1. The appearance of rapid pubertal growth is a product of two factors: an increase in the speed of maturation, which meant that boys grew more rapidly during puberty than before; and a decrease in the variation in the timing of the pubertal growth spurt, which meant that boys experienced their pubertal growth at more similar ages.

 

Figure 3.         Adjusted height-velocity for boys who trained on HMS Indefatigable.

Source: as per article

 

This sudden change in the growth pattern of children is a new finding, not predicted by the historical or medical literature. In the paper, we show that it cannot be explained by improvements in living standards on the ship and that it is robust to a number of potential alternative explanations. We argue that reductions in disease exposure and illness were likely the biggest contributing factor. Infant mortality rates, an indicator of chronic illness in childhood, declined only after 1900 in England and Wales, so a decline in childhood illness could have mattered. In addition, although general levels of nutrition were more than adequate by the turn of the twentieth century, the introduction of free school meals and the milk-in-schools programme in the early twentieth century likely also helped ensure that children had access to the protein and nutrients necessary for growth.

Our findings matter for two reasons. First, they help complete the fragmented picture in the existing historical literature on how children’s growth changed over time. Second, they highlight the importance of the 1910s and the interwar period as a turning point in child growth. Existing research on adult heights has already shown that the interwar period was a period of rapid growth for children, but our results further explain how and why child growth accelerated in that period.

 


Pei Gao

p.gao@nyu.edu

 

Eric B. Schneider

e.b.schneider@lse.ac.uk

Twitter: @ericbschneider

 

 

Overcoming the Egyptian cotton crisis in the interwar period: the role of irrigation, drainage, new seeds and access to credit

By Ulas Karakoc (TOBB ETU, Ankara & Humboldt University Berlin) & Laura Panza (University of Melbourne)

The full article from this blog is forthcoming in the Economic History Review.

 

A study of diversity in Egyptian cotton, 1909. Available at Wikimedia Commons.

By 1914, Egypt’s large agricultural sector was negatively hit by declining yields in cotton production. Egypt at the time was a textbook case of export-led development.  The decline in cotton yields — the ‘cotton crisis’ — was coupled with two other constraints: land scarcity and high population density. Nonethless, Egyptian agriculture was able to overcome this crisis in the interwar period, despite unfavourable price shocks. The output stagnation between 1900 and the 1920s clearly contrasts with the following recovery (Figure 1). In this paper, we empirically examine how this happened, by focusing on the role of government investment in irrigation infrastructure, farmers crop choices (intra-cotton shifts), and access to credit.

 

Figure 1: Cotton output, acreage and yields, 1895-1940

Source: Annuaire Statistique (various issues)

 

The decline in yields was caused by expanded irrigation without sufficient drainage, leading to a higher water table, increased salination, and increased pest attacks on cotton (Radwan, 1974; Owen, 1968; Richards, 1982). The government introduced an extensive public works programme to reverse soil degradation and restore production. Simultaneously, Egypt’s farmers changed the type of cotton they cultivated, shifting from the long-staple, low-yielding Sakellaridis to the medium-short-staple, high-yielding Achmouni, a shift which reflected income-maximizing preferences (Goldberg, 2004 and 2006). Another important feature of the Egyptian economy between the 1920s and 1940s was the expansion of credit facilities and the connected increase in farmers’ access to agricultural loans. The interwar years witnessed the establishment of cooperatives to facilitate small landowners’ access to inputs (Issawi, 1954), and the foundation of the Crédit Agricole in 1931, offering small loans (Eshag and Kamal, 1967). These credit institutions coexisted with a number of mortgage banks, of which the Crédit Foncier was the largest, serving predominantly large owners. Figure 2 illustrates the average annual real value of Crédit Foncier land mortgages, in thousands of Egyptian pounds (1926-1939).

 

Figure 2: Average annual real value of Crédit Foncier land mortgages, in thousands of Egyptian pounds (1926-1939)

Source: Annuaire Statistique (various issues)

 

Our work investigates the extent to which these factors contributed to the recovery of the raw cotton industry. Specifically: to what extent can intra-cotton shifts explain changes in total output? How did the increase in public works, mainly investment in the canal and drainage network, help boost production? And what role did differential access to credit play? To answer these questions, we construct a new dataset from official statistics (Annuaire Statistique de l’Egypte) covering 11 provinces over 17 years, 1923-1939. These data allow us to provide the first empirical estimates of Egyptian cotton output at the province level.
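A stylized version of the kind of province-level estimation such a dataset permits is sketched below, using two-way fixed effects; the province names, variables, and numbers are invented for illustration and are not the paper’s specification or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy province-year panel standing in for the Annuaire Statistique data.
rng = np.random.default_rng(0)
provinces = ["Gharbia", "Minya", "Asyut"]
years = [1925, 1930, 1935, 1939]
df = pd.DataFrame([(p, y) for p in provinces for y in years],
                  columns=["province", "year"])
df["achmouni_share"] = rng.uniform(0.2, 0.8, len(df))  # intra-cotton seed shift
df["credit"] = rng.uniform(50, 300, len(df))           # real mortgage credit
df["drained_area"] = rng.uniform(10, 80, len(df))      # public drainage coverage
df["cotton_output"] = (40 + 30 * df["achmouni_share"]
                       + 0.2 * df["credit"] + rng.normal(0, 5, len(df)))

# Province and year dummies absorb fixed provincial characteristics
# (soil, landholding structure) and common shocks (prices, weather).
fe = smf.ols("cotton_output ~ achmouni_share + credit + drained_area"
             " + C(province) + C(year)", data=df).fit()
print(fe.params)
```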

Access to finance and improved seeds significantly increased cotton output. The declining price premium of Sakellaridis led to a large-scale switch to Achmouni, indicating that farmers responded to market incentives in their cultivation choices. Our study shows that cultivators’ response to market changes was fundamental to the recovery of the cotton sector. Access to credit was also a strong determinant of cotton output, especially to the benefit of large landowners. That access to credit plays a vital role in enabling the adoption of productivity-enhancing innovations is consonant with the literature on the Green Revolution (Glaeser, 2010).

Our results show that the expansion of irrigation and drainage did not have a direct effect on output. However, we cannot completely rule out a role for improved irrigation infrastructure, because we do not observe investment in private drains and so cannot assess complementarities between private and public drainage. Further, we find some evidence of a cumulative effect of drainage pipes two to three years after installation.

The structure of land ownership, specifically the presence of large landowners, contributed to output recovery. Thus, despite institutional innovations designed to give small farmers better access to credit, large landowners benefitted disproportionately from credit availability. This is not a surprising finding: extreme inequality of land holdings had been a central feature of the country’s agricultural system for centuries.

 

References

Eshag, Eprime, and M. A. Kamal. “A Note on the Reform of the Rural Credit System in U.A.R (Egypt).” Bulletin of the Oxford University Institute of Economics & Statistics 29, no. 2 (1967): 95–107. https://doi.org/10.1111/j.1468-0084.1967.mp29002001.x.

Glaeser, Bernhard. The Green Revolution Revisited: Critique and Alternatives. Taylor & Francis, 2010.

Goldberg, Ellis. “Historiography of Crisis in the Egyptian Political Economy.” In Middle Eastern Historiographies: Narrating the Twentieth Century, edited by I. Gershoni, Amy Singer, and Hakan Erdem, 183–207. University of Washington Press, 2006.

———. Trade, Reputation and Child Labor in Twentieth-Century Egypt. Palgrave Macmillan, 2004.

Issawi, Charles. Egypt at Mid-Century. Oxford University Press, 1954.

Owen, Roger. “Agricultural Production in Historical Perspective: A Case Study of the Period 1890-1939.” In Egypt Since the Revolution, edited by P. Vatikiotis, 40–65, 1968.

Radwan, Samir. Capital Formation in Egyptian Industry and Agriculture, 1882-1967. Ithaca Press, 1974.

Richards, Alan. Egypt’s Agricultural Development, 1800-1980: Technical and Social Change. Westview Press, 1982.

 


Ulas Karakoc

ulaslar@gmail.com

 

Laura Panza

lpanza@unimelb.edu.au

 

 

 

 

 

Patents and Invention in Jamaica and the British Atlantic before 1857

By Aaron Graham (Oxford University)

This article will be published in the Economic History Review and is currently available on Early View.

 

Cardiff Hall, St. Ann's.
A Picturesque Tour of the Island of Jamaica, by James Hakewill (1875). Available at Wikimedia Commons.

For a long time the plantation colonies of the Americas were seen as backward and undeveloped, dependent for their wealth on the grinding enslavement of hundreds of thousands of people.  This was only part of the story, albeit a major one. Sugar, coffee, cotton, tobacco and indigo plantations were also some of the largest and most complex economic enterprises of the early industrial revolution, exceeding many textile factories in size and relying upon sophisticated technologies for processing raw materials.  My article examines the patent system of Jamaica and the British Atlantic that supported these enterprises, arguing that it facilitated a process of transatlantic invention, innovation and technological diffusion.

The first key finding concerns the nature of the patent system in Jamaica.  As in British America, patents were granted by colonial legislatures rather than by the Crown, and besides merely registering the proprietary right to an invention they often included further powers, to facilitate the process of licensing and diffusion.  They were therefore more akin to industrial subsidies than modern patents.  The corollary was that inventors had to demonstrate not just novelty but practicality and utility; in 1786, when two inventors competed to patent the same invention, the prize went to the one who provided a successful demonstration (Figure 1).   As a result, the bar was higher, and only about sixty patents were passed in Jamaica between 1664 and 1857, compared to the many thousands in Britain and the United States.

 

Figure 1. ‘Elevation & Plan of an Improved SUGAR MILL by Edward Woollery Esq of Jamaica’

Source: Bryan Edwards, The History, Civil and Commercial, of the British Colonies of the West Indies (London, 1794).

 

However, the second key finding is that this ‘bar’ was enough to make Jamaica one of the centres of colonial technological innovation before 1770, along with Barbados and South Carolina; together these three colonies accounted for about two-thirds of the patents passed in that period.  All three were successful plantation colonies, where planters earned large amounts of money and had both the incentive and the means to invest heavily in technological innovations intended to improve efficiency and profits.  Patenting peaked in Jamaica between the 1760s and 1780s, as the island adapted to sudden economic change through a package of measures that included opening up new lands, experimenting with new cane varieties, engaging in closer accounting, importing more slaves and developing new ways of working them harder.

A further finding of the article is that the English and Jamaican patent systems were complementary until 1852.  Inventors in Britain could purchase an English patent with a ‘colonial clause’ extending it to colonial territories, but a Jamaican patent offered them additional powers and flexibility as they brought their inventions to Jamaica and adapted them to local conditions.  Inventors in Jamaica could obtain a local patent to protect an invention while they perfected it and prepared to market it in Britain.  The article shows how inventors used various strategies within the two systems to help turn their inventions into viable technologies.

Finally, colonial patents ran alongside a system of grants, premiums and prizes administered by the Jamaican Assembly, which helped to support innovation by plugging the gaps left by the patent system.  Inventors who felt that their designs were too easily pirated, or that they themselves lacked the capacity to develop them properly, could ask instead for a grant that recompensed them for the costs of invention and made the new technology widely available.  Like the imperial and colonial patents, the grants were part of the strategies used to promote invention.

Indeed, sometimes the Assembly stepped in directly.  In 1799, Jean Baptiste Brouet asked the House for a patent for a machine for curing coffee.  The committee agreed that the invention was novel, useful and practical, ‘but as the petitioner has not been naturalised and is totally unable to pay the fees for a private bill’, they suggested granting him £350 instead, ‘as a full reward for his invention; [and] the machines constructed according to the model whereof may then be used by any person desirous of the same, without any license from or fee paid to the petitioner’.

The article therefore argues that Jamaican patents were part of a wider transatlantic system that facilitated invention, innovation and technological diffusion in support of the plantation economy and slave society.

 


 

Aaron Graham

aaron.graham@history.ox.ac.uk