How Indian cottons steered British industrialisation

By Alka Raman (LSE)

This blog is part of a series of New Researcher blogs.

“Methods of Conveying Cotton in India to the Ports of Shipment,” from the Illustrated London News, 1861. Available at Wikimedia Commons.

Technological advancements within the British cotton industry have widely been acknowledged as the beginning of industrialisation in eighteenth- and nineteenth-century Britain. My research reveals that these advances were driven by the desire to match the quality of handmade cotton textiles from India.

I highlight how the introduction of Indian printed cottons into British markets created a frenzy of demand for these exotic goods. This led to immediate imitations by British textile manufacturers, keen to gain footholds in the domestic and world markets where Indian cottons were much desired.

The process of imitation soon revealed two shortfalls: British spinners could not spin yarn fine enough for the fine cloth that fine printing required, and British printers could not print cloth in the multitude of colourfast colours that Indian artisans had mastered over centuries.

These two key limitations in British textile manufacturing spurred demand-induced technological innovations to match the quality of Indian handmade printed cottons.

To test this, I chart the quality of English cotton textiles from 1740 to 1820 and compare them with Indian cottons of the same period. Thread count per inch is used as the measure of quality, and digital microscopy is deployed to establish yarn composition, determining whether the textiles are all-cotton or mixed linen-cotton.

My findings show that the earliest British ‘cotton’ textiles were mixed linen-cottons and not all-cottons. Technological evolution in the British cotton industry was a pursuit first of coarse all-cotton cloth, and then of fine all-cotton cloth such as muslin.

The evidence shows that British cotton cloth quality improved by 60% between 1747 and 1782 during the decades of the famous inventions of James Hargreaves’ spinning jenny, Richard Arkwright’s waterframe and Samuel Crompton’s mule. It further improved by 24% between 1782 and 1816. Overall, cloth quality improved by a staggering 99% between 1747 and 1816.
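The sub-period figures compound multiplicatively rather than adding up; a quick back-of-the-envelope check (a sketch using the rounded percentages quoted above):

```python
# Reported quality improvements in British cotton cloth (thread-per-inch measure).
g_1747_1782 = 0.60  # 60% improvement between 1747 and 1782
g_1782_1816 = 0.24  # 24% improvement between 1782 and 1816

# Compounding the two sub-periods gives the overall 1747-1816 change.
overall = (1 + g_1747_1782) * (1 + g_1782_1816) - 1
print(f"Overall improvement: {overall:.1%}")
```

This yields roughly 98 per cent, consistent with the stated 99 per cent once rounding of the sub-period figures is allowed for.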

My research challenges our current understanding of industrialisation as a British and West European phenomenon, commonly explained using rationales such as high wages, availability of local energy sources or access to New World resources. Instead, it reveals that learning from material goods and knowledge brought into Britain and Europe from the East directly and substantially affected the foundations of the modern world as we know it.

The results also pose a more fundamental question: how does technological change take place? Based on my findings, learning from competitor products – especially imitation of novel goods using indigenous processes – may be identified as one crucial pathway for the creation of new ideas that shape technological change.

Baumol, Engel, and Beyond: Accounting for a century of structural transformation in Japan, 1885-1985

by Kyoji Fukao (Hitotsubashi University) and Saumik Paul (Newcastle University and IZA)

The full article from this blog post was published in The Economic History Review and is now available on Early View.

Bank of Japan, silver convertible yen. Available at Wikimedia Commons.

Over the past two centuries, many industrialized countries have experienced dramatic changes in the sectoral composition of output and employment. The pattern of structural transformation observed in most developed countries entails a steady fall in the primary sector’s share, a steady rise in the tertiary sector’s share, and a hump shape in the secondary sector’s share. In the literature, the process of structural transformation is explained through two broad channels: the income effect, driven by the generalization of Engel’s law, and the substitution effect, following differences in the rate of productivity growth across sectors, also known as “Baumol’s cost disease effect”.

At the same time, an input-output (I-O) model provides a comprehensive way to study the process of structural transformation. I-O analysis accounts for intermediate input production, since many sectors predominantly produce intermediate inputs and their outputs rarely enter directly into consumer preferences. Moreover, it relies on observed data and a national income identity to handle imports and exports. These advantages are considerable in the Japanese context, where structural transformation ran first from agriculture to manufactured final consumption goods and then to services, alongside exports and imports that changed radically over time.

We examine the drivers of long-run structural transformation in Japan over a period of 100 years, from 1885 to 1985. During this period, the value-added share of the primary sector dropped from 60 per cent to less than 1 per cent, whereas that of the tertiary sector rose from 27 to nearly 60 per cent (Figure 1). We apply the Chenery, Shishido, and Watanabe framework to examine changes in the composition of sectoral output shares. Chenery, Shishido, and Watanabe used an inter-industry model to explain deviations from proportional growth in output in each sector, decomposing the deviation in sectoral output into two factors: the demand-side effect, a combination of the Engel and Baumol effects (discussed above), and the supply-side effect, a change in the technique of production. However, the current input-output framework cannot uniquely separate the demand-side effect into forces labelled under the Engel and Baumol effects.
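The logic of this decomposition can be sketched with a stylised two-sector Leontief model (the coefficients and demand figures below are invented for illustration and are not from the paper): gross output satisfies x = (I - A)^-1 f, and the change in output splits exactly into a demand-side term (final demand changes, technique held fixed) and a supply-side term (the input-coefficient matrix A changes).

```python
import numpy as np

# Hypothetical input coefficients (A) and final demand (f) for two sectors.
A0 = np.array([[0.20, 0.10],
               [0.30, 0.20]])   # base-year technique
A1 = np.array([[0.15, 0.10],
               [0.25, 0.30]])   # end-year technique
f0 = np.array([100.0, 50.0])    # base-year final demand
f1 = np.array([80.0, 120.0])    # end-year final demand

# Leontief inverses: gross output x = L @ f with L = (I - A)^-1.
L0 = np.linalg.inv(np.eye(2) - A0)
L1 = np.linalg.inv(np.eye(2) - A1)
x0, x1 = L0 @ f0, L1 @ f1

demand_effect = L0 @ (f1 - f0)      # Engel + Baumol forces, technique fixed
technique_effect = (L1 - L0) @ f1   # change in the technique of production

# The two effects account exactly for the total change in sectoral output.
assert np.allclose(x1 - x0, demand_effect + technique_effect)
```

As the text notes, the demand-side term bundles the Engel and Baumol forces together; nothing in this accounting identity separates them.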

Figure 1. Structural transformation in Japan, 1874-2008. Source: Fukao and Paul (2017). 
Note: Sectoral shares in GDP are calculated using real GDP in constant 1934-36 prices for 1874-1940 and constant 2000 prices for 1955-2008. In the current study, the pre-WWII era is from 1885 to 1935, and the post-WWII era is from 1955 to 1985. 

To conduct the decomposition analysis, we use seven I-O tables (every 10 years) for the prewar era, 1885-1935, and six I-O tables (every 5 years) for the postwar era, 1955-1985. The seven sectors are: agriculture, forestry, and fishery; commerce and services; construction; food; mining and manufacturing (excluding food and textiles); textiles; and transport, communication, and utilities.

The results show that the annual growth rate of GDP more than doubled in the post-WWII era compared to the pre-WWII era. Real output growth was highest in the commerce and services sector throughout the period under study, but there was also rapid growth of output in mining and manufacturing, especially in the second half of the 20th century. Sectoral output growth in mining and manufacturing (textiles, food, and other manufacturing), commerce and services, and transport, communication, and utilities outpaced GDP growth in most periods. Detailed decomposition results show that in most sectors (agriculture, commerce and services, food, textiles, and transport, communication, and utilities), changes in private consumption were the dominant force behind the demand-side explanations. The demand-side effect was strongest in the commerce and services sector.

Overall, demand-side factors (a combination of the Baumol and Engel effects) were the main explanatory factors in the pre-WWII period, whereas supply-side factors were the key driver of structural transformation in the post-WWII period.

To contact the authors:

Kyoji Fukao, k.fukao@r.hit-u.ac.jp

Saumik Paul, paulsaumik@gmail.com, @saumik78267353

Notes

Baumol, William J., “Macroeconomics of unbalanced growth: the anatomy of urban crisis”. American Economic Review 57, (1967) 415–426.

Chenery, Hollis B., Shuntaro Shishido and Tsunehiko Watanabe. “The pattern of Japanese growth, 1914−1954”, Econometrica 30 (1962), 1, 98−139.

Fukao, Kyoji and Saumik Paul “The Role of Structural Transformation in Regional Convergence in Japan: 1874-2008.” Institute of Economic Research Discussion Paper No. 665. Tokyo: Institute of Economic Research (2017).

Settler capitalism: company colonisation and the rage for speculation (NR Online Session 5)

By Matthew Birchall (Cambridge University)

This research is due to be presented in the fifth New Researcher Online Session: ‘Government & Colonization’.

 

Scan from “Historical Atlas” by William R. Shepherd, New York, Henry Holt and Company, 1923. Available at Wikimedia Commons.

My research explores the little-known story of how company colonisation propelled the settler revolution. Characterised by mass emigration to Britain’s settler colonies during the long nineteenth century, the settler revolution transformed Chicago and Melbourne, London and New York, drawing all into a vast cultural and political network that straddled the globe. But while the settler revolution is now well integrated into recent histories of the British Empire, it remains curiously disconnected from the history of global capitalism.

Prising open what I call the inner lives of colonial corporations, I tell the story of how and why companies remade the settler world. My research takes a fresh look at the colonial history of Australia and New Zealand in an attempt to map a new history of chartered colonial enterprise, one as sensitive to rhetoric as to ledgers documenting profit and loss. We tend to understand companies in terms of their institutional make-up, that is to say their legal and economic structure, but we sometimes forget that they are also cultural constructions with very human histories.

The story that I narrate takes us from the boardrooms of the City of London back out to the pastures of the colonial frontier: it is a snapshot of settler capitalism from the inside out. From the alleys and byways immortalised in Walter Bagehot’s Lombard Street (1873) to the sheep-runs of New South Wales and the South Canterbury plains, company colonisation has a global history – a history that links the Atlantic and the antipodes, Māori and metropolitan capital, country and the City of London. My study marks a first attempt at bringing this history to light.

In digging deep into the social and cultural history of company colonisation, I focus in particular on the legitimating narratives that underwrote visions of colonial reform. How did these company men make sense of their own ventures? What traditions of thought did they draw on to justify the appropriation of indigenous lands? How did the customs and norms of the City shape the boundaries of what was deemed possible, let alone appropriate in the extra-European world? I aim to show that company colonisation was as much an act of the imagination as it was the product of prudent capital investment.

My research engages with large questions of contemporary relevance: the role of corporations in the making of the modern world; the relationship between empire and global capitalism; and the salience of social and cultural factors in the development of corporate enterprise. I hope to enrich these debates by injecting the discussion with greater historical context.

How to Keep Society Equal: The Case of Pre-industrial East Asia (NR Online Session 4)

By Yuzuru Kumon (Bocconi University)

This research is due to be presented in the fourth New Researcher Online Session: ‘Equality & Wages’.


Theatrum orbis terrarum: Map Tartaria, by Abraham Ortelius. Available at State Library Victoria.

Is high inequality destiny? The established view is that societies naturally converge towards high inequality in the absence of catastrophes (world wars or revolutions) or progressive taxation of the rich. Yet I show that rural Japan, 1700-1870, is an unexpected historical case in which stable equality was sustained without such aids. Most peasants owned land, the most valuable asset in an agricultural economy, and Japan remained a society of land-owning peasants. This contrasts with the landless laborer societies of Western Europe in the same period, which were highly unequal. Why were the outcomes so different?

My research shows that the relative equality of pre-industrial Japan can partly be explained by the widespread use of adoption, which served as a means of securing a male heir. The reasoning becomes clear if we first consider the case of the Earls Cowper in 18th-century England, where adoption was not practiced. The first Earl Cowper was a modest landowner who married Mary Clavering in 1706. When Mary’s brother subsequently died, she became the heiress and the couple inherited the Clavering estate. Similar (mis)fortunes for their heirs led the Cowpers to become one of the greatest landed families of England. The Cowpers were not particularly lucky: one quarter of families were heirless in this era of high child mortality. The outcome of this death lottery was inequality.

Had the Cowpers lived in Japan in the same period, they would have remained modest landowners. An heirless household in Japan would adopt a son, so the Claverings would have adopted an heir and the family estate would have remained in the family. To keep the blood in the family, the adopted son might marry a daughter if one was available; if not, the next generation might be formed by total strangers, but they would continue the family line. Amassing a fortune in Japan was thus unrelated to demographic luck.

Widespread adoption was not a peculiarity of Japan, and this mechanism can also explain why East Asian societies were landowning peasant societies. China also had high rates of adoption, in addition to equal distributions of land, according to surveys from the 1930s. Perhaps more surprisingly, adoption was common in ancient Europe, where the Greeks and Romans practiced it to secure heirs. For example, Augustus, the first emperor of the Roman Empire, was adopted. Adoption was a natural means of keeping wealth under the control of the family.

Europe changed when the church began discouraging adoption from the early Middle Ages; by the 11th century adoption had become a rarity. The church was motivated partly by theology, but also by the possibility that heirless wealth would be willed to the church. Church authorities almost certainly did not foresee that their policies would lead to greater wealth inequality in subsequent eras.

 

Figure 1. Land Distribution under Differing Adoption Regimes and Impartible Inheritance


 

My study shows by simulation that a large portion of the difference in wealth-inequality outcomes between East and West can be explained by adoption (see Figure 1). Societies without adoption have wealth distributions that are heavily skewed, with many landless households, unlike societies with adoption. Family institutions therefore played a key role in determining inequality, with huge implications for how society was organized in the two regions.
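The mechanism can be caricatured in a deliberately minimal toy simulation. This is my own sketch, not the paper’s model: the household count, the one-quarter heirless rate (borrowed from the Cowper example), and the inheritance rule are all simplifying assumptions. Every household starts with one plot of land; when a landed household is heirless, its estate either passes to another family (as through an heiress’s marriage, the no-adoption case) or stays in the family via an adopted son.

```python
import random

def landless_share(adoption, households=1000, generations=10,
                   p_heirless=0.25, seed=42):
    """Share of households left landless after simulating inheritance."""
    rng = random.Random(seed)
    land = [1.0] * households  # everyone starts with one plot
    for _ in range(generations):
        for i in range(households):
            if land[i] > 0 and rng.random() < p_heirless:
                if adoption:
                    continue  # an adopted heir keeps the estate intact
                j = rng.randrange(households)  # estate passes to another family
                if j != i:
                    land[j] += land[i]
                    land[i] = 0.0
    return sum(1 for plot in land if plot == 0) / households

print(landless_share(adoption=False))  # most households end up landless
print(landless_share(adoption=True))   # 0.0: every estate stays in the family
```

Without adoption the death lottery steadily concentrates land into fewer hands; with adoption the initial equal distribution persists indefinitely.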

Interestingly, East Asian societies still have greater equality in wealth distributions today. Moreover, adoptions still account for 10% of marriages in Japan, a remarkably large share. Adoption may have continued to foster a relatively equal society in Japan up to the present day.

Did the Ottomans Import the Low Wages of the British in the 19th Century? An Examination of Ottoman Textile Factories (NR Online Session 4)

By Tamer Güven (Istanbul University)

This research is due to be presented in the fourth New Researcher Online Session: ‘Equality & Wages’.

 

The Istanbul Grand Bazaar in the 1890s. Available at Wikimedia Commons.

Compared to the UK and Western Europe, there are few studies of wages and living standards in the Ottoman Empire. The only sources that can provide regular industrial wage data are the Ottoman state factories established in the 1840s to meet the needs of the state’s growing, centralized military and bureaucracy. The limitations of the data are explained by the relative absence of industrial wage series in monographs on Ottoman industrial institutions, and by the fact that manufacturing mainly comprised small producers who did not keep records. This paucity of data may ease as the Ottoman Archives become fully catalogued. The main aim of this study is to construct a wage series from the wage ledgers of workers in state factories. I examined four prominent textile-related factories: the Hereke Imperial Factory, the Veliefendi Calico Factory, the Bursa Silk Factory, and the İzmit Cloth Factory. Only the Hereke Factory offers a continuous 52-year wage series, for 1848-1899. The data for the Veliefendi Factory start in 1848 but break off in 1876, when the factory was transferred to military rule; the same applies to the İzmit Factory, which was established in 1844 but transferred to military rule in 1849.

I created two separate wage series, one daily and one monthly, to determine how many days workers worked per month and how this changed during the nineteenth century. Thus, not only workers’ potential wages but also their observed monthly wages can be analysed. Some groups of workers were excluded from the dataset for a variety of reasons. Civilian officials and masters working in the factories were excluded because of their relatively high wages; conversely, carpet weavers, mostly young girls and children, were excluded because of their relatively low wages. I used median values for the monthly wage series to include as many workers as possible in the analysis. As with much historical data, the wage series created in this study are incomplete. To overcome this, I complement the Hereke Factory wage series with data from the Veliefendi and Bursa Factories.

My results indicate that daily real wages increased by only 0.03 per cent per annum between 1852 and 1899. However, the real monthly wages of Hereke Factory workers rose by 0.11 per cent per annum between 1848 and 1899, and by 0.24 per cent per annum using 1852 as the starting point. Monthly wages increased faster than daily wages, but at the cost of more workdays: average workdays increased by 0.44 per cent per annum over the period. Although the Veliefendi Factory provides a shorter wage series, from 1848 to 1876, it supports this pattern. Limited but prominent examinations of Ottoman wage history claim that construction, urban, and agricultural workers’ wages increased, albeit at different rates, in the same period. How can we explain rising wages in other sectors when the wages of textile workers were stagnant?

Many observations of Ottoman cities have shown that industrial production, particularly in textiles, shifted from urban to rural areas, or from craft workshops to houses, to compete with cheap British yarn and fabric in the 19th century. According to my calculations, Ottoman imports of cotton yarn increased by a factor of 25 to 50 over the 19th century. This trend was most pronounced after the 1838 Anglo-Turkish Convention, when cheap English products flowed into the Ottoman Empire and Ottoman producers sought cheaper labour. Labour-saving machines both facilitated the export of British yarns and fabrics to the Ottoman Empire and lowered wages there. The wage series for the Hereke factory, and to a more limited extent the Veliefendi factory, provide evidence in support of this hypothesis; numerous studies of 19th-century Ottoman industry support the same argument, though without a wage series.

The Growth Pattern of British Children, 1850-1975

By Pei Gao (NYU Shanghai) & Eric B. Schneider (LSE)

The full article from this blog is forthcoming in the Economic History Review and is currently available on Early View.

 

HMS Indefatigable with HMS Diadem (1898) in the Gulf of St. Lawrence 1901. Available at Wikimedia Commons.

Since the mid-nineteenth century, the average height of adult British men has increased by 11 centimetres. This increase in final height reflects improvements in living standards and health, and provides insights into the growth pattern of children, which has been comparatively neglected. Child growth is very sensitive to economic and social conditions: children with limited nutrition, or who suffer from chronic disease, grow more slowly than healthy children. Thus, to achieve such a large increase in adult height, health conditions must have improved dramatically for children since the mid-nineteenth century.

Our paper seeks to understand how child growth changed over time as adult height was increasing. Child growth follows the typical pattern shown in Figure 1: the graph on the left shows the height-by-age curve for modern healthy children, and the graph on the right shows the change in height at each age (height velocity). We look at three dimensions of the growth pattern of children: the final adult height that children achieve, which is what historians have predominantly focused on to date; the timing (age) at which growth velocity peaks during puberty; and the overall speed of maturation, which affects the velocity of growth at all ages and the length of the growing years.

 

Figure 1. Weights and heights for boys who trained on HMS Indefatigable, 1860s-1990s.

Source: as per article

 

To understand how growth changed over time, we collected information about 11,548 boys who were admitted to the training ship Indefatigable from the 1860s to 1990s (Figure 2).  This ship was located on the River Mersey near Liverpool for much of its history and it trained boys for careers in the merchant marine and navy. Crucially, the administrators recorded the boys’ heights and weights at admission and discharge, allowing us to calculate growth velocities for each individual.

 

Figure 2. HMS Indefatigable

Source: By permission, the Indefatigable Old Boys Society

 

We trace the boys’ heights over time (grouping them by birth decade) and find that they grew most rapidly during the interwar period. The most novel finding is that, unlike healthy boys today, boys born in the nineteenth century show little evidence of a strong pubertal growth spurt: their growth velocity was relatively flat across puberty. However, starting with the 1910 birth decade, boys began experiencing more rapid pubertal growth similar to the right-hand graph in Figure 1. The appearance of rapid pubertal growth is the product of two factors: an increase in the speed of maturation, which meant that boys grew more rapidly during puberty than before; and a decrease in the variation in the timing of the pubertal growth spurt, which meant that boys experienced their pubertal growth at more similar ages.

 

Figure 3. Adjusted height-velocity for boys who trained on HMS Indefatigable.

Source: as per article

 

This sudden change in the growth pattern of children is a new finding that is not predicted by the historical or medical literature. In the paper, we show that this change cannot be explained by improvements in living standards on the ship and that it is robust to a number of potential alternative explanations. We argue that reductions in disease exposure and illness were likely the biggest contributing factor. Infant mortality rates, an indicator of chronic illness in childhood, declined only after 1900 in England and Wales, so a decline in childhood illness could have mattered. In addition, although general levels of nutrition were more than adequate by the turn of the twentieth century, the introduction of free school meals and the milk-in-schools programme in the early twentieth century likely also helped ensure that children had access to the protein and nutrients necessary for growth.

Our findings matter for two reasons. First, they help complete the fragmented picture in the existing historical literature on how children’s growth changed over time. Second, they highlight the importance of the 1910s and the interwar period as a turning point in child growth. Existing research on adult heights has already shown that the interwar period was a period of rapid growth for children, but our results further explain how and why child growth accelerated in that period.

 


Pei Gao

p.gao@nyu.edu

 

Eric B. Schneider

e.b.schneider@lse.ac.uk

Twitter: @ericbschneider

Overcoming the Egyptian cotton crisis in the interwar period: the role of irrigation, drainage, new seeds and access to credit

By Ulas Karakoc (TOBB ETU, Ankara & Humboldt University Berlin) & Laura Panza (University of Melbourne)

The full article from this blog is forthcoming in the Economic History Review.

 

A study of diversity in Egyptian cotton, 1909. Available at Wikimedia Commons.

By 1914, Egypt’s large agricultural sector was hard hit by declining yields in cotton production. Egypt at the time was a textbook case of export-led development. The decline in cotton yields, the ‘cotton crisis’, was compounded by two other constraints: land scarcity and high population density. Nonetheless, Egyptian agriculture overcame this crisis in the interwar period, despite unfavourable price shocks. The output stagnation between 1900 and the 1920s clearly contrasts with the subsequent recovery (Figure 1). In this paper, we empirically examine how this happened, focusing on the role of government investment in irrigation infrastructure, farmers’ crop choices (intra-cotton shifts), and access to credit.

 

Figure 1: Cotton output, acreage and yields, 1895-1940

Source: Annuaire Statistique (various issues)

 

The decline in yields was caused by expanded irrigation without sufficient drainage, leading to a higher water table, increased salination, and increased pest attacks on cotton (Radwan, 1974; Owen, 1968; Richards, 1982). The government introduced an extensive public works programme to reverse soil degradation and restore production. Simultaneously, Egypt’s farmers changed the type of cotton they cultivated, shifting from the long-staple, low-yielding Sakellaridis to the medium-short-staple, high-yielding Achmouni, a shift which reflected income-maximizing preferences (Goldberg 2004, 2006). Another important feature of the Egyptian economy between the 1920s and 1940s was the expansion of credit facilities and the connected increase in farmers’ access to agricultural loans. The interwar years witnessed the establishment of cooperatives to facilitate small landowners’ access to inputs (Issawi, 1954), and the foundation of the Crédit Agricole in 1931, offering small loans (Eshag and Kamal, 1967). These credit institutions coexisted with a number of mortgage banks, among which the Crédit Foncier was the largest, serving predominantly large owners. Figure 2 illustrates the average annual real value of Crédit Foncier land mortgages in 1,000 Egyptian pounds (1926-1939).

 

Figure 2: Average annual real value of Crédit Foncier land mortgages in 1,000 Egyptian pounds (1926-1939)

Source: Annuaire Statistique (various issues)

 

Our work investigates the extent to which these factors contributed to the recovery of the raw cotton industry. Specifically: to what extent can intra-cotton shifts explain changes in total output? How did the increase in public works, mainly investment in the canal and drainage network, help boost production? And what role did differential access to credit play? To answer these questions, we construct a new dataset by exploiting official statistics (Annuaire Statistique de l’Egypte) covering 11 provinces and 17 years during 1923-1939. These data allow us to provide the first empirical estimates of Egyptian cotton output at the province level.

Access to finance and improved seeds significantly increased cotton output. The declining price premium of Sakellaridis led to a large-scale switch to Achmouni, which indicates that farmers responded to market incentives in their cultivation choices. Our study shows that cultivators’ response to market changes was fundamental to the recovery of the cotton sector. Access to credit was also a strong determinant of cotton output, especially to the benefit of large landowners. That access to credit plays a vital role in enabling the adoption of productivity-enhancing innovations is consonant with the literature on the Green Revolution (Glaeser, 2010).

Our results show that the expansion of irrigation and drainage did not have a direct effect on output. However, we cannot rule out completely the role played by improved irrigation infrastructure because we do not observe investment in private drains, so we cannot assess complementarities between private and public drainage. Further, we find some evidence of a cumulative effect of drainage pipes, two to three years after installation.

The structure of land ownership, specifically the presence of large landowners, contributed to output recovery. Thus, despite institutional innovations designed to give small farmers better access to credit, large landowners benefitted disproportionally from credit availability. This is not a surprising finding: extreme inequality of land holdings had been a central feature of the country’s agricultural system for centuries.

 

References

Eshag, Eprime, and M. A. Kamal. “A Note on the Reform of the Rural Credit System in U.A.R (Egypt).” Bulletin of the Oxford University Institute of Economics & Statistics 29, no. 2 (1967): 95–107. https://doi.org/10.1111/j.1468-0084.1967.mp29002001.x.

Glaeser, Bernhard. The Green Revolution Revisited: Critique and Alternatives. Taylor & Francis, 2010.

Goldberg, Ellis. “Historiography of Crisis in the Egyptian Political Economy.” In Middle Eastern Historiographies: Narrating the Twentieth Century, edited by I. Gershoni, Amy Singer, and Hakan Erdem, 183–207. University of Washington Press, 2006.

———. Trade, Reputation and Child Labour in Twentieth-Century Egypt. Palgrave Macmillan, 2004.

Issawi, Charles. Egypt at Mid-Century. Oxford University Press, 1954.

Owen, Roger. “Agricultural Production in Historical Perspective: A Case Study of the Period 1890-1939.” In Egypt Since the Revolution, edited by P. Vatikiotis, 40–65, 1968.

Radwan, Samir. Capital Formation in Egyptian Industry and Agriculture, 1882-1967. Ithaca Press, 1974.

Richards, Alan. Egypt’s Agricultural Development, 1800-1980: Technical and Social Change. Westview Press, 1982.

 


Ulas Karakoc

ulaslar@gmail.com

 

Laura Panza

lpanza@unimelb.edu.au

Patents and Invention in Jamaica and the British Atlantic before 1857

By Aaron Graham (Oxford University)

This article will be published in the Economic History Review and is currently available on Early View.

 

Cardiff Hall, St. Ann's.
A Picturesque Tour of the Island of Jamaica, by James Hakewill (1825). Available at Wikimedia Commons.

For a long time the plantation colonies of the Americas were seen as backward and undeveloped, dependent for their wealth on the grinding enslavement of hundreds of thousands of people.  This was only part of the story, albeit a major one. Sugar, coffee, cotton, tobacco and indigo plantations were also some of the largest and most complex economic enterprises of the early industrial revolution, exceeding many textile factories in size and relying upon sophisticated technologies for the processing of raw materials.  My article looks at the patent system of Jamaica and the British Atlantic which supported this system, arguing that it facilitated a process of transatlantic invention, innovation and technological diffusion.

The first key finding concerns the nature of the patent system in Jamaica. As in British America, patents were granted by colonial legislatures rather than by the Crown, and besides registering the proprietary right to an invention, they often included further powers to facilitate licensing and diffusion. They were therefore more akin to industrial subsidies than to modern patents. The corollary was that inventors had to demonstrate not just novelty but practicality and utility: in 1786, when two inventors competed to patent the same invention, the prize went to the one who provided a successful demonstration (Figure 1). As a result, the bar was higher, and only about sixty patents were passed in Jamaica between 1664 and 1857, compared to many thousands in Britain and the United States.

 

Figure 1. ‘Elevation & Plan of an Improved SUGAR MILL by Edward Woollery Esq of Jamaica’

Source: Bryan Edwards, The History, Civil and Commercial, of the British Colonies of the West Indies (London, 1794).

 

However, the second key finding is that this ‘bar’ was enough to make Jamaica one of the centres of colonial technological innovation before 1770, along with Barbados and South Carolina; together the three accounted for about two-thirds of the patents passed in that period. All three were successful plantation colonies, where planters earned large amounts of money and had both the incentive and the means to invest heavily in technological innovations intended to improve efficiency and profits. Patenting peaked in Jamaica between the 1760s and 1780s, as the island adapted to sudden economic change through a package of measures that included opening up new lands, experimenting with new cane varieties, engaging in closer accounting, importing more slaves and developing new ways of working them harder.

A further finding of the article is that the English and Jamaican patent systems were complementary until 1852. Inventors in Britain could purchase an English patent with a ‘colonial clause’ extending it to colonial territories, but a Jamaican patent offered them additional powers and flexibility as they brought their inventions to Jamaica and adapted them to local conditions. Inventors in Jamaica could obtain a local patent to protect their invention while they perfected it and prepared to market it in Britain. The article shows how inventors used various strategies within the two systems to help turn their inventions into viable technologies.

Finally, the colonial patents operated alongside a system of grants, premiums and prizes operated by the Jamaican Assembly, which helped to support innovation by plugging the gaps left by the patent system.  Inventors who felt that their designs were too easily pirated, or that they themselves lacked the capacity to develop them properly, could ask for a grant instead that recompensed them for the costs of invention and made the new technology widely available.  Like the imperial and colonial patents, the grants were part of the strategies used to promote invention.

Indeed, sometimes the Assembly stepped in directly.  In 1799, Jean Baptiste Brouet asked the House for a patent for a machine for curing coffee.  The committee agreed that the invention was novel, useful and practical, ‘but as the petitioner has not been naturalised and is totally unable to pay the fees for a private bill’, they suggested granting him £350 instead, ‘as a full reward for his invention; [and] the machines constructed according to the model whereof may then be used by any person desirous of the same, without any license from or fee paid to the petitioner’.

The article therefore argues that Jamaican patents were part of a wider transatlantic system that facilitated invention, innovation and technological diffusion in support of the plantation economy and slave society.

Aaron Graham

aaron.graham@history.ox.ac.uk

Famine, institutions, and indentured migration in colonial India

By Ashish Aggarwal (University of Warwick)

This blog is part of a series of New Researcher blogs.

 

Women fetching water in India in the late 19th century. Available at Wikimedia Commons.

A large share of the working population in developing countries is still engaged in agriculture. In India, for instance, over 40% of the employed population works in the agricultural sector and nearly three-quarters of households depend on rural incomes (World Bank[1]). In addition, the agricultural sector in developing countries suffers from low investment, forcing workers to rely on natural sources of irrigation rather than on perennial man-made ones. Gadgil and Gadgil (2006) study the agricultural sector in India during 1951-2003 and find that, despite the declining share of agriculture in India's GDP, severe droughts still reduce GDP by 2-5%. In such a context, any unanticipated shortfall in rainfall is bound to depress productivity and, consequently, the incomes of these workers. In this paper, I study whether workers adopt migration as a coping strategy in response to the income risks arising from negative shocks to agriculture, and whether local institutions facilitate or hinder the use of this strategy. In a nutshell, the answers are yes and yes.

I study these questions in the context of indentured migration from colonial India to several British colonies. The abolition of slavery in the 1830s created demand for new sources of labour to work on plantations in the colonies. Starting with the “great experiment” in Mauritius (Carter, 1993), over a million Indians became indentured migrants, with Mauritius, British Guyana, Natal, and Trinidad the major destinations. Indentured migration from India was a system of voluntary migration: passages were paid for, and migrants earned fixed wages and rations. The exact terms varied across colonies, but contracts generally ran for five years, and after ten years of residency in the colony a paid-for return passage was also available.

Using a unique dataset on annual district-level outflows of indentured migrants from colonial India to several British colonies in the period 1860-1912, I find that famines increased indentures. However, this effect varied with the land-revenue collection system established by the British. Using the year the district was annexed by Britain to construct an instrument for the land revenue system (Banerjee and Iyer, 2005), I find that emigration responded less to famines in British districts where landlords collected revenue (as opposed to places where the individual cultivator was responsible for revenue payments). I also find this to be the case in Princely States. However, the reasons for these results are markedly different. Qualitative evidence suggests that landlords were unlikely to grant remissions to their tenants; this increased tenant debt, preventing them from migrating. Interlinked transactions and a general fear of the landlords prevented tenants from defaulting on their debts. Such coercion was not witnessed in areas where landlords were not the revenue collectors, making it easier for people to migrate in times of distress. In Princely States, by contrast, local rulers adopted liberal measures during famine years in order to help the population. These findings are robust to various placebo and robustness checks. The results are in line with Persaud (2019), who shows that people engaged in indentured migration to escape local price volatility.

 

[1] https://www.worldbank.org/en/news/feature/2012/05/17/india-agriculture-issues-priorities

 

References

Banerjee, Abhijit, and Lakshmi Iyer (2005): “History, Institutions, and Economic Performance: The Legacy of Colonial Land Tenure Systems in India”, American Economic Review, Vol. 95, No. 4, pp. 1190-1213.

Carter, Marina (1993): “The Transition from Slave to Indentured Labour in Mauritius”, Slavery and Abolition, 14:1, pp. 114-130.

Gadgil, Sulochana, and Siddhartha Gadgil (2006): “The Indian Monsoon, GDP and Agriculture”, Economic and Political Weekly, Vol. 41, No. 47, pp. 4887-4895.

Persaud, Alexander (2019): “Escaping Local Risk by Entering Indentureship: Evidence from Nineteenth-Century Indian Migration”, Journal of Economic History, Vol. 79, No. 2, pp. 447-476.

 

 

Before the fall: quantity versus quality in pre–demographic transition Quebec (NR Online Session 3)

By Matthew Curtis (University of California, Davis)

This research is due to be presented in the third New Researcher Online Session: ‘Human Capital & Development’.


 

Map of East Canada or Quebec and New Brunswick, by John Tallis c.1850. Available at Wikimedia Commons.

While it plays a key role in theories of the transition to modern economic growth, there are few estimates of the quantity-quality trade-off from before the demographic transition. Using a uniquely suitable new dataset of vital records, I use two instrumental variable (IV) strategies to estimate the trade-off in Quebec between 1620 and 1850. I find that one additional child who survived past age one decreased the literacy rate (proxied by signatures) of their older siblings by 5 percentage points.

The first strategy exploits the fact that twin births, conditional on mother’s age and parity, are a random increase in family size. While twins are often used to identify the trade-off in contemporary studies, sufficiently large and reliable historical datasets containing twins are rare. I compare two families, one whose mother gave birth to twins and one whose mother gave birth to a singleton, both at the same parity and age. I then look at the probability that each older non-twin sibling signed their marriage record.
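To see why a twin birth works as an instrument, here is a minimal simulation, not the paper's data or code: the sample size, coefficients, and data-generating process are invented for exposition. With one instrument and one endogenous regressor, two-stage least squares reduces to the Wald/IV ratio, and the simulation shows it recovering a true effect of -0.05 (5 percentage points per additional child, echoing the paper's estimate) where naive OLS is biased by an unobserved family-level confounder:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 50_000  # simulated families

# Instrument: a twin birth at a given parity (random conditional on mother's
# age and parity, which the simulation omits for simplicity).
twin = rng.binomial(1, 0.03, m)

# Unobserved family factor that raises family size AND lowers literacy:
# this is the endogeneity that biases OLS.
u = rng.normal(0.0, 1.0, m)

# Family size: a twin birth mechanically adds one surviving child.
size = 4.0 + twin + 0.5 * u + rng.normal(0.0, 1.0, m)

# Older sibling's literacy (signing the marriage record); true causal
# effect of family size is -0.05, confounded by u.
lit = 0.6 - 0.05 * size - 0.06 * u + rng.normal(0.0, 0.1, m)

# With a single instrument, 2SLS collapses to the Wald/IV ratio.
beta_iv = np.cov(twin, lit)[0, 1] / np.cov(twin, size)[0, 1]
beta_ols = np.cov(size, lit)[0, 1] / np.var(size, ddof=1)

print(f"OLS: {beta_ols:.4f}, IV: {beta_iv:.4f}, true effect: -0.05")
```

The OLS slope is pushed away from the truth by the confounder, while the IV estimate lands near -0.05; the same logic carries over to the second instrument (aggregate infant mortality in the younger child's birth year).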

For the second strategy, I posit that the aggregate, province-wide infant mortality rate during the year a younger child was born is exogenous to individual family characteristics. I compare two families, one whose mother gave birth during a year with a relatively high infant mortality rate and one whose mother gave birth during a year with a lower rate, both at the same parity and age. I then look at older siblings from both families who were born in the same year, controlling for potential time trends in literacy. As the two IV techniques result in very similar estimates, I argue there is strong evidence of a modest trade-off.

By using two instruments, I am able to rule out one major source of potential bias. In many settings, IV estimates of the trade-off may be biased if parents reallocate resources towards (reinforcement) or away from (compensation) children with higher birth endowments. I show that both twins and children born in high mortality years have, on average, lower literacy rates than their older siblings. As one shock increases and one shock decreases family size, but both result in older siblings having relatively higher human capital, reinforcement or compensation would bias the estimates in different directions. As the estimates are very similar, I conclude there is no evidence that my estimates suffer from this bias.

Is the estimated trade-off economically significant? I compare Quebec to a society with similar culture and institutions: pre-Revolutionary rural France. Between 1628 and 1788, a woman surviving to age 40 in Quebec could expect to have 1.7 additional children surviving past age one compared to her rural French peers. The average literacy rate (again proxied by signatures) in France was about 9.5 percentage points higher than in Quebec. Assuming my estimate of the trade-off is a linear and constant effect (instead of just a local average), reducing family sizes to French levels would have increased literacy by 8.6 percentage points in the next generation, thereby eliminating most of the gap.
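The counterfactual above is a simple back-of-the-envelope product. Using the rounded 5-point coefficient gives 8.5 points; the 8.6 figure in the text presumably reflects the unrounded estimate:

```python
extra_children = 1.7    # additional surviving children, Quebec vs. rural France
effect_per_child = 5.0  # estimated literacy loss (pp) per extra child, rounded
literacy_gap = 9.5      # French literacy advantage, percentage points

# Counterfactual gain from reducing family sizes to French levels,
# assuming the trade-off is linear and constant.
implied_gain = extra_children * effect_per_child
print(f"{implied_gain:.1f} pp of the {literacy_gap} pp gap")
# -> 8.5 pp of the 9.5 pp gap
```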

However, pre-Revolutionary France was hardly a human capital-rich society. Proxying for the presence of the primary educators of the period (clergy and members of religious orders) with unmarried adults, I find suggestive evidence that the trade-off was steeper in boroughs and decades with greater access to education. Altogether, I interpret my results as evidence that a trade-off existed and that it explains some of the differences across societies.

 

Data Sources

Henry, Louis, 1978. “Fécondité des mariages dans le quart Sud-Est de la France de 1670 a 1829,” Population (French Edition), 33 (4/5), 855–883.

IMPQ. 2019. Infrastructure intégrée des microdonnées historiques de la population du Québec (XVIIe – XXe siècle) (IMPQ). [Dataset]. Centre interuniversitaire d’études québécoises (CIEQ).

Programme de recherche en démographie historique (PRDH). 2019. Registre de la population du Québec ancien (RPQA). [Dataset]. Département de Démographie, Université de Montréal.

Projet BALSAC. 2019. Le fichier BALSAC. [Dataset]. L’Université du Québec à Chicoutimi.