Overcoming the Egyptian cotton crisis in the interwar period: the role of irrigation, drainage, new seeds and access to credit

By Ulas Karakoc (TOBB ETU, Ankara & Humboldt University Berlin) & Laura Panza (University of Melbourne)

The full article from this blog is forthcoming in the Economic History Review.

 

A study of diversity in Egyptian cotton, 1909. Available at Wikimedia Commons.

By 1914, Egypt’s large agricultural sector had been hit by declining cotton yields. Egypt at the time was a textbook case of export-led development. The decline in cotton yields — the ‘cotton crisis’ — was coupled with two other constraints: land scarcity and high population density. Nonetheless, Egyptian agriculture was able to overcome this crisis in the interwar period, despite unfavourable price shocks. The output stagnation between 1900 and the 1920s clearly contrasts with the subsequent recovery (Figure 1). In this paper, we empirically examine how this happened, focusing on the role of government investment in irrigation infrastructure, farmers’ crop choices (intra-cotton shifts), and access to credit.

 

Figure 1: Cotton output, acreage and yields, 1895-1940

Source: Annuaire Statistique (various issues)

 

The decline in yields was caused by expanded irrigation without sufficient drainage, leading to a higher water table, increased salination, and increased pest attacks on cotton (Radwan, 1974; Owen, 1968; Richards, 1982). The government introduced an extensive public works programme to reverse soil degradation and restore production. Simultaneously, Egypt’s farmers changed the type of cotton they were cultivating, shifting from the long-staple, low-yielding Sakellaridis to the medium-short-staple, high-yielding Achmouni, a shift which reflected income-maximizing preferences (Goldberg, 2004, 2006). Another important feature of the Egyptian economy between the 1920s and 1940s was the expansion of credit facilities and the connected increase in farmers’ access to agricultural loans. The interwar years witnessed the establishment of cooperatives to facilitate small landowners’ access to inputs (Issawi, 1954), and the foundation of the Crédit Agricole in 1931, offering small loans (Eshag and Kamal, 1967). These credit institutions coexisted with a number of mortgage banks, among which the Crédit Foncier was the largest, servicing predominantly large owners. Figure 2 illustrates the average annual real value of Crédit Foncier land mortgages in 1,000 Egyptian pounds (1926-1939).

 

Figure 2: Average annual real value of Crédit Foncier land mortgages in 1,000 Egyptian pounds (1926-1939)

Source: Annuaire Statistique (various issues)

 

Our work investigates the extent to which these factors contributed to the recovery of the raw cotton industry. Specifically: to what extent can intra-cotton shifts explain changes in total output? How did the increase in public works, mainly investment in the canal and drainage network, help boost production? And what role did differential access to credit play? To answer these questions, we construct a new dataset from official statistics (Annuaire Statistique de l’Egypte) covering 11 provinces over 17 years between 1923 and 1939. These data allow us to provide the first empirical estimates of Egyptian cotton output at the province level.
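
For readers who want to see the mechanics, a province-by-year panel of this kind is typically analysed with a two-way fixed-effects regression. The sketch below is a minimal illustration of that setup, not the paper’s actual specification: the file name, variable names and covariates are all assumptions.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical province-year panel (11 provinces x 17 years, 1923-1939).
df = pd.read_csv("egypt_cotton_panel.csv").set_index(["province", "year"])

# Cotton output on credit, drainage and seed-mix variables; province and
# year fixed effects absorb time-invariant province traits and common
# shocks such as world cotton prices.
mod = PanelOLS.from_formula(
    "log_cotton_output ~ log_mortgage_credit + drainage_km"
    " + achmouni_share + EntityEffects + TimeEffects",
    data=df,
)
print(mod.fit(cov_type="clustered", cluster_entity=True))
```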

Access to finance and improved seeds significantly increased cotton output. The declining price premium of Sakellaridis led to a large-scale switch to Achmouni, which indicates that farmers responded to market incentives in their cultivation choices. Our study shows that cultivators’ response to market changes was fundamental to the recovery of the cotton sector. Access to credit was also a strong determinant of cotton output, especially to the benefit of large landowners. That access to credit plays a vital role in enabling the adoption of productivity-enhancing innovations is consonant with the literature on the Green Revolution (Glaeser, 2010).

Our results show that the expansion of irrigation and drainage did not have a direct effect on output. However, we cannot completely rule out a role for improved irrigation infrastructure: because we do not observe investment in private drains, we cannot assess complementarities between private and public drainage. Further, we find some evidence of a cumulative effect of drainage pipes two to three years after installation.

The structure of land ownership, specifically the presence of large landowners, contributed to output recovery. Thus, despite institutional innovations designed to give small farmers better access to credit, large landowners benefitted disproportionately from credit availability. This is not a surprising finding: extreme inequality of land holdings had been a central feature of the country’s agricultural system for centuries.

 

References

Eshag, Eprime, and M. A. Kamal. “A Note on the Reform of the Rural Credit System in U.A.R (Egypt).” Bulletin of the Oxford University Institute of Economics & Statistics 29, no. 2 (1967): 95–107. https://doi.org/10.1111/j.1468-0084.1967.mp29002001.x.

Glaeser, Bernhard. The Green Revolution Revisited: Critique and Alternatives. Taylor & Francis, 2010.

Goldberg, Ellis. “Historiography of Crisis in the Egyptian Political Economy.” In Middle Eastern Historiographies: Narrating the Twentieth Century, edited by I. Gershoni, Amy Singer, and Hakan Erdem, 183–207. University of Washington Press, 2006.

———. Trade, Reputation and Child Labour in Twentieth-Century Egypt. Palgrave Macmillan, 2004.

Issawi, Charles. Egypt at Mid-Century. Oxford University Press, 1954.

Owen, Roger. “Agricultural Production in Historical Perspective: A Case Study of the Period 1890-1939.” In Egypt Since the Revolution, edited by P. Vatikiotis, 40–65, 1968.

Radwan, Samir. Capital Formation in Egyptian Industry and Agriculture, 1882-1967. Ithaca Press, 1974.

Richards, Alan. Egypt’s Agricultural Development, 1800-1980: Technical and Social Change. Westview Press, 1982.

 


Ulas Karakoc

ulaslar@gmail.com

 

Laura Panza

lpanza@unimelb.edu.au


Patents and Invention in Jamaica and the British Atlantic before 1857

By Aaron Graham (Oxford University)

This article will be published in the Economic History Review and is currently available on Early View.

 

Cardiff Hall, St. Ann's.
A Picturesque Tour of the Island of Jamaica, by James Hakewill (1825). Available at Wikimedia Commons.

For a long time the plantation colonies of the Americas were seen as backward and undeveloped, dependent for their wealth on the grinding enslavement of hundreds of thousands of people.  This was only part of the story, albeit a major one. Sugar, coffee, cotton, tobacco and indigo plantations were also some of the largest and most complex economic enterprises of the early industrial revolution, exceeding many textile factories in size and relying upon sophisticated technologies for the processing of raw materials.  My article looks at the patent system of Jamaica and the British Atlantic which supported this system, arguing that it facilitated a process of transatlantic invention, innovation and technological diffusion.

The first key finding concerns the nature of the patent system in Jamaica. As in British America, patents were granted by colonial legislatures rather than by the Crown, and besides merely registering the proprietary right to an invention they often included further powers to facilitate the process of licensing and diffusion. They were therefore more akin to industrial subsidies than modern patents. The corollary was that inventors had to demonstrate not just novelty but practicality and utility; in 1786, when two inventors competed to patent the same invention, the prize went to the one who provided a successful demonstration (Figure 1). As a result, the bar was higher, and only about sixty patents were passed in Jamaica between 1664 and 1857, compared to the many thousands in Britain and the United States.

 

Figure 1. ‘Elevation & Plan of an Improved SUGAR MILL by Edward Woollery Esq of Jamaica’

Source: Bryan Edwards, The History, Civil and Commercial, of the British Colonies of the West Indies (London, 1794).

 

However, the second key finding is that this ‘bar’ was enough to make Jamaica one of the centres of colonial technological innovation before 1770, along with Barbados and South Carolina; together these three colonies accounted for about two-thirds of the patents passed in that period. All three were successful plantation colonies, where planters earned large amounts of money and had both the incentive and the means to invest heavily in technological innovations intended to improve efficiency and profits. Patenting peaked in Jamaica between the 1760s and 1780s, as the island adapted to sudden economic change through a package of measures that included opening up new lands, experimenting with new cane varieties, engaging in closer accounting, importing more slaves and developing new ways of working them harder.

A further finding of the article is that the English and Jamaican patent systems were complementary until 1852. Inventors in Britain could purchase an English patent with a ‘colonial clause’ extending it to colonial territories, but a Jamaican patent offered them additional powers and flexibility as they brought their inventions to Jamaica and adapted them to local conditions. Inventors in Jamaica could obtain a local patent to protect their invention while they perfected it and prepared to market it in Britain. The article shows how inventors used various strategies within the two systems to help support the process of turning their inventions into viable technologies.

Finally, the colonial patents operated alongside a system of grants, premiums and prizes administered by the Jamaican Assembly, which helped to support innovation by plugging the gaps left by the patent system. Inventors who felt that their designs were too easily pirated, or that they themselves lacked the capacity to develop them properly, could instead ask for a grant that recompensed them for the costs of invention and made the new technology widely available. Like the imperial and colonial patents, the grants were part of the strategies used to promote invention.

Indeed, sometimes the Assembly stepped in directly.  In 1799, Jean Baptiste Brouet asked the House for a patent for a machine for curing coffee.  The committee agreed that the invention was novel, useful and practical, ‘but as the petitioner has not been naturalised and is totally unable to pay the fees for a private bill’, they suggested granting him £350 instead, ‘as a full reward for his invention; [and] the machines constructed according to the model whereof may then be used by any person desirous of the same, without any license from or fee paid to the petitioner’.

The article therefore argues that Jamaican patents were part of a wider transatlantic system that acted to facilitate invention, innovation and technological diffusion in support of the plantation economy and slave society.

 


 

Aaron Graham

aaron.graham@history.ox.ac.uk

Famine, institutions, and indentured migration in colonial India

By Ashish Aggarwal (University of Warwick)

This blog is part of a series of New Researcher blogs.

 

Women fetching water in India in the late 19th century. Available at Wikimedia Commons.

A large share of the working population in developing countries is still engaged in agricultural activities. In India, for instance, over 40% of the employed population works in the agricultural sector and nearly three-quarters of households depend on rural incomes (World Bank[1]). In addition, the agricultural sector in developing countries is plagued by low investment, forcing workers to rely on natural sources of irrigation as opposed to perennial man-made sources. Gadgil and Gadgil (2006) study the agricultural sector in India during 1951-2003 and find that, despite the declining share of agriculture in India’s GDP, severe droughts still reduce GDP by 2-5%. In such a context, any unanticipated deviation of rainfall from normal is bound to have adverse effects on productivity and, consequently, on the incomes of these workers. In this paper, I study whether workers adopt migration as a coping strategy in response to income risks arising from negative shocks to agriculture, and whether local institutions facilitate or hinder the use of this strategy. In a nutshell, the answers are yes and yes.

I study these questions in the context of indentured migration from colonial India to several British colonies. The abolition of slavery in the 1830s led to a demand for new sources of labour to work on plantations in the colonies. Starting with the “great experiment” in Mauritius (Carter, 1993), over a million Indians became indentured migrants, with Mauritius, British Guyana, Natal, and Trinidad the major destinations. Indentured migration from India was a system of voluntary migration, wherein passages were paid for and migrants earned fixed wages and rations. The exact terms varied across colonies, but generally the contracts were specified for a period of five years, and after ten years of residency in the colony a paid-for return passage was also available.

Using a unique dataset on annual district-level outflows of indentured migrants from colonial India to several British colonies in the period 1860-1912, I find that famines increased indentures. However, this effect varied according to the land-revenue collection system established by the British. Using the year the district was annexed by Britain to construct an instrument for the land revenue system (Banerjee and Iyer, 2005), I find that emigration responded less to famines in British districts where landlords collected revenue (as opposed to places where the individual cultivator was responsible for revenue payments). I also find this to be the case in Princely States. However, the reasons for these results are markedly different. Qualitative evidence suggests that landlords were unlikely to grant remissions to their tenants; this increased tenant debt, preventing them from migrating. Interlinked transactions and a general fear of the landlords prevented tenants from defaulting on their debts. Such coercion was not witnessed in areas where landlords were not the revenue collectors, making it easier for people to migrate in times of distress. In Princely States, by contrast, local rulers adopted liberal measures during famine years in order to help the population. These findings are robust to various placebo and robustness checks. The results are in line with Persaud (2019), who shows that people engaged in indentured migration to escape local price volatility.
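
As a purely illustrative sketch of how famine shocks can be interacted with an instrumented land-revenue system in this spirit — the Banerjee and Iyer (2005) instrument is whether a district was annexed between 1820 and 1856, when landlord-based systems had fallen out of favour — with hypothetical file and variable names, not the paper’s actual specification:

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical district-year panel of indentured emigration, 1860-1912.
df = pd.read_csv("india_district_panel.csv")

# Endogenous regressor: famine interacted with landlord revenue collection.
# Instrument: famine interacted with annexation in 1820-1856 (Banerjee and
# Iyer, 2005), a period in which landlord systems were out of favour.
df["landlord_x_famine"] = df["landlord_system"] * df["famine"]
df["annex_x_famine"] = df["annexed_1820_56"] * df["famine"]

res = IV2SLS.from_formula(
    "emigrants ~ 1 + famine + [landlord_x_famine ~ annex_x_famine]",
    data=df,
).fit(cov_type="clustered", clusters=df["district"])
print(res)
```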

 

[1] https://www.worldbank.org/en/news/feature/2012/05/17/india-agriculture-issues-priorities

 

References

Banerjee, Abhijit, and Lakshmi Iyer (2005): “History, Institutions, and Economic Performance: The Legacy of Colonial Land Tenure Systems in India”, American Economic Review, Vol. 95, No. 4, pp. 1190-1213.

Carter, Marina (1993): “The Transition from Slave to Indentured Labour in Mauritius”, Slavery and Abolition, 14:1, pp. 114-130.

Gadgil, Sulochana, and Siddhartha Gadgil (2006): “The Indian Monsoon, GDP and Agriculture”, Economic and Political Weekly, Vol. 41, No. 47, 4887-4895.

Persaud, Alexander (2019): “Escaping Local Risk by Entering Indentureship: Evidence from Nineteenth-Century Indian Migration”, Journal of Economic History, Vol. 79, No. 2, pp. 447-476.

 

 

Before the fall: quantity versus quality in pre–demographic transition Quebec (NR Online Session 3)

By Matthew Curtis (University of California, Davis)

This research is due to be presented in the third New Researcher Online Session: ‘Human Capital & Development’.


 

Map of East Canada or Quebec and New Brunswick, by John Tallis c.1850. Available at Wikimedia Commons.

While it plays a key role in theories of the transition to modern economic growth, there are few estimates of the quantity-quality trade-off from before the demographic transition. Drawing on a uniquely suitable new dataset of vital records, I use two instrumental variable (IV) strategies to estimate the trade-off in Quebec between 1620 and 1850. I find that one additional child who survived past age one decreased the literacy rate (proxied by signatures) of their older siblings by 5 percentage points.

The first strategy exploits the fact that twin births, conditional on mother’s age and parity, are a random increase in family size. While twins are often used to identify the trade-off in contemporary studies, sufficiently large and reliable historical datasets containing twins are rare. I compare two families, one whose mother gave birth to twins and one whose mother gave birth to a singleton, both at the same parity and age. I then look at the probability that each older non-twin sibling signed their marriage record.

For the second strategy, I posit that the aggregate, province-wide infant mortality rate during the year a younger child was born is exogenous to individual family characteristics. I compare two families whose mothers gave birth at the same parity and age, one during a year with a relatively high infant mortality rate and one during a year with a relatively low rate. I then look at older siblings from both families who were born in the same year, controlling for potential time trends in literacy. As the two different IV techniques result in very similar estimates, I argue there is strong evidence of a modest trade-off.

By using two instruments, I am able to rule out one major source of potential bias. In many settings, IV estimates of the trade-off may be biased if parents reallocate resources towards (reinforcement) or away from (compensation) children with higher birth endowments. I show that both twins and children born in high mortality years have, on average, lower literacy rates than their older siblings. As one shock increases and one shock decreases family size, but both result in older siblings having relatively higher human capital, reinforcement or compensation would bias the estimates in different directions. As the estimates are very similar, I conclude there is no evidence that my estimates suffer from this bias.
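
To make the twin strategy concrete, here is a minimal two-stage least squares sketch; the file, variable names and controls are hypothetical stand-ins, not the paper’s actual specification.

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical child-level file from the vital records: one row per older
# sibling, with family size instrumented by a later twin birth.
df = pd.read_csv("quebec_siblings.csv")

# Literacy proxy (signed marriage record) on surviving family size,
# conditional on mother's age and parity, as described above.
res = IV2SLS.from_formula(
    "signed_marriage ~ 1 + mother_age + parity"
    " + [n_surviving_children ~ twin_birth]",
    data=df,
).fit(cov_type="clustered", clusters=df["family_id"])
print(res)
```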

Is the estimated trade-off economically significant? I compare Quebec to a society with similar culture and institutions: pre-Revolutionary rural France. Between 1628 and 1788, a woman surviving to age 40 in Quebec could expect to have 1.7 additional children surviving past age one compared to her rural French peers. The average literacy rate (again proxied by signatures) in France was about 9.5 percentage points higher than in Quebec. Assuming my estimate of the trade-off is a linear and constant effect (instead of just a local average), reducing family sizes to French levels would have increased literacy by 8.6 percentage points in the next generation, thereby eliminating most of the gap.
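
The back-of-the-envelope logic, using the rounded figures quoted above (the reported 8.6 presumably reflects the unrounded coefficient):

\[
\Delta\,\text{literacy} \approx 1.7\ \text{children} \times 5\ \text{pp per child} \approx 8.5\ \text{pp},
\]

which is most of the 9.5-percentage-point literacy gap between rural France and Quebec.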

However, pre-Revolutionary France was hardly a human capital-rich society. Proxying for the presence of the primary educators of the period (clergy and members of religious orders) with unmarried adults, I find plausible evidence that the trade-off was steeper in boroughs and decades with greater access to education. Altogether, I interpret my results as evidence that a trade-off existed which explains some of the differences across societies.

 

Data Sources

Henry, Louis, 1978. “Fécondité des mariages dans le quart Sud-Est de la France de 1670 à 1829,” Population (French Edition), 33 (4/5), 855–883.

IMPQ. 2019. Infrastructure intégrée des microdonnées historiques de la population du Québec (XVIIe – XXe siècle) (IMPQ). [Dataset]. Centre interuniversitaire d’études québécoises (CIEQ).

Programme de recherche en démographie historique (PRDH). 2019. Registre de la population du Québec ancien (RPQA). [Dataset]. Département de Démographie, Université de Montréal.

Projet BALSAC. 2019. Le fichier BALSAC. [Dataset]. L’Université du Québec à Chicoutimi.

Honest, sober and willing: Oxford college servants 1850-1939 (NR Online Session 3)

By Kathryne Crossley (University of Oxford)

This research is due to be presented in the third New Researcher Online Session: ‘Human Capital & Development’.


 

The library of Christ Church, Oxford from Rudolph Ackermann’s History of Oxford (1813). Available at Wikimedia Commons.

 

 

Oxford colleges were among the earliest employers in England to offer organised pension schemes for their workers. These schemes were remarkable for several reasons: they were early (the first was established in 1852); they covered domestic servants rather than white-collar workers; and colleges were unlike typical early adopters of pension schemes, which tended to be large bureaucratic organisations such as railways or the civil service.

The schemes developed from various motives: from preventing poverty in workers’ old age to promoting middle-class values, like thrift and sobriety, through compulsory savings.

Until the Second World War, college servants were often described as a ‘labour aristocracy’, and while there were many successful senior servants, equally there were many casual, part-time and seasonal workers. The experience of these workers provides an unusually detailed look at the precarity of working-class life in the nineteenth and early twentieth centuries, and the strategies that workers developed to manage uncertainty, especially in old age.

My research uses a wide variety of archival sources, many previously unknown, from 19 Oxford colleges to consider why these colleges decided to overhaul servants’ pension provisions during this period, how retirement savings schemes were designed and implemented, and to try and understand what workers thought of these fundamental changes to the labour contract.

During this period, Oxford was a highly seasonal, low-waged economy. It was hard for many people to find enough work during the year to earn an adequate living, much less save for an old age they usually did not expect to see. Most men and women worked as long as they were capable, often past what we think of as a typical retirement age today.

It’s no surprise then that the protections against illness, disability, old age and death offered by these paternalistic employers encouraged a highly competitive labour market for college work, and the promise of an ex gratia, or traditional non-contributory pension, was one of the most attractive features of college employment.

For centuries, colleges awarded these traditional pensions to workers. Rights to these pensions, which usually replaced about a quarter to a third of a worker’s total earnings, were insecure and awards were made entirely at the discretion of the college.

In 1852, the first retirement savings scheme for Oxford college servants was created at Balliol College. By the 1920s, traditional non-contributory pensions had been replaced by contributory schemes at most Oxford colleges, shifting the risk of old age from employers to employees. Even though making contributions often meant a decrease in take-home pay, servants always preferred a guaranteed pension entitlement over traditional non-contributory pensions.

The earliest savings schemes mandated the purchase of life insurance policies. These were intended not only to protect a servant’s dependent family members, but also to limit the college’s financial liability in the event of a servant’s death. Servants were similarly risk-averse and often purchased multiple policies when they could afford to; many joined friendly societies and purchased insurance privately, in addition to employer-directed schemes.

The popularity of these schemes among Oxford colleges mirrors the growth of the insurance industry and the development of actuarial science during this period. By the 1870s, nearly all schemes included annuities or endowment assurance policies, which provided a guaranteed income for servants, usually at age 60-65, and facilitated the introduction of mandatory retirement ages for these workers.

Traditional paternalism remained influential throughout the period. Colleges insisted on controlling insurance policies, naming themselves as beneficiaries and directing the proceeds. Women, who were more likely to be in low-waged seasonal work, were nearly always excluded from these schemes and had to depend on ex gratia pension awards much longer than their male colleagues.

These early pension schemes offered no protection against inflation and colleges were usually slow to increase pension awards in response to rising prices. By the end of the Great War, dissatisfaction with inadequate pensions was one of several factors that pushed college servants to form a trade union in 1919.

 

Strangling Speculation: The Effects of the 1903 Viennese Futures Trading Ban

By Laura Wurm (Queen’s University Belfast)

This blog is part of a series of New Researcher blogs.

 

Farmland in Dalat, Vietnam. Available at Wikimedia Commons.

 

Ever since the emergence of futures markets and speculation, the effects of futures trading on spot price volatility have been subject to intense debate. While populist discourse asserts that futures trading disturbs prices, scholars stress the risk-allocation and information-transmission function of futures towards spot markets, essential for pricing cash transactions. My research tests whether these volatility-lowering effects of futures trading on the cash market hold true by assuming the opposite: what happens if futures trading no longer exists?

To do so, I go back to the early twentieth century, when futures trading in the Viennese grain market was banned permanently, unlike at other trading locations at the time, such as Germany, England, or Texas. The 1903 parliament-enforced prohibition of futures trading was the consequence of an aversion to speculators, who were blamed for “never having held actual grain in their hands”. Putting an end to the vibrant futures market of the Agricultural Products Exchange, the city’s gathering place for farmers, millers, large-scale customers, and speculators, was thought to be the last resort to curb undue speculation. Up to the present day, futures trading has not resumed. The uniqueness of this ban makes it an ideally suited natural experiment for testing the effects of futures trading and its abolition on spot price volatility. Prices from the Budapest Stock and Commodity Exchange, which was not affected by the ban, are used as a synthetic control. The Budapest market, as part of the Austro-Hungarian Empire, operated under similar legal, economic and geographic conditions and was, apart from Vienna, the only Austro-Hungarian market offering trade in futures, making it a well-suited control.

My project examines the information transmission function of futures towards spot markets and finds heightened spot price volatility in Vienna and lower accuracy in pricing cash transactions after futures trading was banned. The intra-day variation of spot prices increased after the ban: without futures trading, the Viennese market lacked pricing accuracy and efficiency. The effect on volatility holds when using a commodity traded exclusively on the Viennese spot market as a control. In addition, Granger-causality tests show that information flowed between the futures and spot markets of the two cities prior to the ban, consistent with the information transmission function of futures towards cash markets and the close ties between the two markets. After futures trading was prohibited in Vienna, Budapest futures prices with 3-6 months maturity continued to significantly Granger-cause Viennese spot prices.
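
A minimal sketch of this kind of Granger-causality test on the pre-ban series, with hypothetical file and column names (statsmodels tests whether the second column helps predict the first):

```python
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical price series for Vienna spot and Budapest futures (3-6
# month maturities), restricted to the pre-ban period.
df = pd.read_csv("vienna_budapest_grain.csv",
                 parse_dates=["date"], index_col="date")

# Difference the series so trending price levels do not drive the result.
data = df[["vienna_spot", "budapest_futures_3to6m"]].diff().dropna()

# Do lagged Budapest futures prices help predict Vienna spot prices?
grangercausalitytests(data, maxlag=4)
```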

 

 


Laura Wurm

lwurm01@qub.ac.uk

 

Food Security, Trade and Violence: From the First to the Second Globalization in Colombia 1916-2016 (NR Online Session 2)

By Alexander Urrego-Mesa (Universitat de Barcelona)

This research is due to be presented in the second New Researcher Online Session: ‘Industry, Trade & Technology’


 

 

With world population forecast to reach 9 billion by 2050, and with the hazards of climate change, maintaining the capacity to provide food is paramount for national governments, especially in developing countries. After the Second World War and the oil crises, food aid and trade liberalisation helped to distribute food from surplus countries to those in deficit (Dithmer and Abdulai, 2017).

Nonetheless, some scholars suggest that trade liberalisation and agro-export specialisation threaten domestic food security in developing countries (Kumar Sharma, 2016), for the following reasons: they promote increasing dependence on food exporters, increase vulnerability to volatile agrarian prices, and jeopardise domestic production by promoting agro-exports. Moreover, the rise of agrarian trade raises the consumption of fossil-fuel-based inputs such as nutrients, fuel and machinery, and intensifies the use of natural resources, thereby contributing to soil erosion, greenhouse gas emissions and the loss of biodiversity (D’Odorico et al., 2014).

However, little is known about how the relationship between trade, food availability, and food production has evolved. This research contributes to the food security debate at the national level by introducing a long-run historical approach and comparing the two main periods of food trade, the First and the Second Globalization, in a developing context.

 

Figure 1. Food trade balance (1916-2016)

Note: Positive values indicate imports, negative values refer to exports, and the solid black line indicates net imports.

 

I analyse the long-run trend of food security in Colombia, a relevant developing country and one of the most biodiverse countries on the planet. I build on data from the early twentieth century to the present on agrarian trade, food availability, and self-sufficiency (SS), understood as the capacity of the agrarian system to meet domestic demand.

I find that the country shifted from tropical exporter to food-dependent importer between the First and Second Globalizations. However, this change has not meant setting tropical exports aside; rather, the amount of food imported from abroad has increased. New cash crops like tropical fruits and sugar cane took over the role of coffee under the IMF’s structural reforms (Figure 1). In ecological terms, this meant a shift towards farming more intensive in its use of water, land, fuel and fertilisers.

Although imports of wheat and rice served to meet food shortages at the end of the 1920s and in the early 1950s, importing maize has become the rule for guaranteeing the availability of food during the Second Globalization. This increase in imports allowed gains in per capita consumption but eroded the SS capacity of the domestic agrarian system to provide food. On the other hand, long-run agro-export specialisation increased the capacity to supply international markets with tropical products (Figure 2).

 

Figure 2. Self-sufficiency index by groups of products

Note: A result greater than one indicates the agrarian system is a domestic and international supplier of food, whereas a value less than one means domestic consumption relies on imports.
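
A minimal sketch of how a self-sufficiency index of this kind is commonly computed (domestic production over apparent consumption); file and column names are assumptions, and the article’s exact construction may differ:

```python
import pandas as pd

# Hypothetical product-year food accounts in physical units (e.g. tonnes).
df = pd.read_csv("colombia_food_accounts.csv")

# Apparent domestic consumption = production + imports - exports.
df["consumption"] = df["production"] + df["imports"] - df["exports"]
df["ss_index"] = df["production"] / df["consumption"]

# SS > 1: domestic and international supplier; SS < 1: reliant on imports.
print(df.groupby(["product_group", "year"])["ss_index"].mean())
```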

 

Finally, I explore the role of international agrarian prices and political violence in shaping tropical specialisation and food dependence in the 1980s. I find a negative relationship (-0.75) between the prices of cereals relative to tropical products (coffee, banana, sugar cane, and oil palm) and the trade balance. Regarding internal factors, the self-sufficiency index and political violence are negatively correlated for cereals and pulses, and positively correlated for sugar cane.

Violence in Colombia contributes to land grabbing for the development of agribusiness projects and leads to the displacement of the labour force in the countryside. Thus, political violence is responsible for the growing demand for food in cities and, at the same time, for the lack of capacity to provide it from the agrarian sector. If the evolution of relative prices is the incentive to deepen food dependence and tropical specialisation, violence is the means to achieve this.

 

 

References

Dithmer, J., & Abdulai, A. (2017). Does trade openness contribute to food security? A dynamic panel analysis. Food Policy, 69, 218-230.

D’Odorico, P., Carr, J. A., Laio, F., Ridolfi, L., & Vandoni, S. (2014). Feeding humanity through global food trade. Earth’s Future, 2(9), 458-469.

Kumar Sharma, Sachin (2016): The WTO and Food Security: Implications for Developing Countries. Singapore: Springer.

 


Alexander Urrego-Mesa

alex.urrego.mesa@ub.edu

@AlexUrrego3

 

Why is Switzerland so rich? The role of early electricity adoption (NR Online Session 2)

by Björn Brey (University of Nottingham)

This research is due to be presented in the second New Researcher Online Session: ‘Industry, Trade & Technology’


 

After its first commercial usage in 1879, Switzerland experienced a drastic increase in electricity production, reflected in its having the highest per capita production in the world by 1900. Over the same period, Swiss GDP growth accelerated considerably compared with other industrialising countries (see Figure 1).

 

Figure 1: Real GDP per capita in 1990US$ across leading industrial countries from 1850-1970. 1879 reflects the first commercial usage of electricity in Switzerland.

Source: Maddison Project Database, version 2018

 

In line with this observation, the diffusion of general-purpose technologies (such as the steam engine, electricity and information technologies) is seen as a main driver of economic growth at the global level. But much less is known about the local effect of adopting these technologies. This raises the question: to what extent did the early adoption of electricity contribute to industrialisation and economic development in the short and long run?

My research, to be presented at the annual conference of the Economic History Society in Oxford in April 2020, answers these questions by analysing the impact of early electricity adoption across Switzerland on economic development, exploiting the quasi-random potential to generate electricity from waterpower.

My study finds that the adoption of electricity between 1880 and 1900 considerably increased the contemporaneous manufacturing employment share. This initial effect persists to today, with the average district observing a 1.5% higher manufacturing employment share in 2011 as a result of electricity adoption up to 1900.

This effect of early electricity adoption on employment shares in agriculture, manufacturing and services in the long run is depicted in Figure 2. Notably, the observed growth in manufacturing employment can be attributed in particular to chemical industries, which relied on access to electricity for newly developed production processes.

This effect on economic development is also observable in incomes across districts, with a one standard deviation higher exposure to electricity between 1880 and 1900 leading to a 1,949 Swiss franc ($2,004) higher yearly median income in 2010.

 

Figure 2: Estimated IV-coefficients on the effect of a one horsepower increase in electricity production 1880-1900 on the change in the employment share across sectors from 1880 to the respective year as well as the pre-trend period 1860-1880.


 

For the analysis, I newly digitised historical information on electricity production by waterpower plants across the whole of Switzerland and constructed geocoded data on all potential waterpower plants that could have been built, as estimated in a plan by Swiss government engineers at the time.

These data are illustrated in Figure 3. Combining this information allows me to use the potential to produce electricity across districts to infer the causal impact of electricity adoption on economic development across Switzerland.

 

Figure 3: The map shows the exploited and potential waterpower in Switzerland in 1914. Blue dots represent exploited waterpower and red dots potential waterpower, both from existing natural sources; grey dots represent existing and potential waterpower plants requiring the building of an embankment dam. The sites are coded into five categories: 20-99HP, 100-999HP, 1000-4999HP, 5000-9999HP and above 10000HP.


 

These results provide new insight into how early access to electricity at the end of the nineteenth century helps to explain differences in economic development today. Interestingly, the long-run effect of electricity appears not to be explained by persistent differences in electricity consumption across Switzerland after the roll-out of the electricity grid in the 1920s, but rather by increased investment in education that was complementary to the newly developed industries.
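
A minimal sketch of the instrumental-variable idea described above, with hypothetical file and variable names standing in for the digitised data (not the paper’s actual specification):

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Hypothetical district-level cross-section built from the digitised maps.
df = pd.read_csv("swiss_districts.csv")

# Long-run outcome on early electricity adoption (horsepower, 1880-1900),
# instrumented by the district's quasi-random waterpower potential from
# the government engineers' plan; the 1860-1880 pre-trend is a control.
res = IV2SLS.from_formula(
    "manuf_share_2011 ~ 1 + manuf_trend_1860_80"
    " + [electricity_hp_1880_1900 ~ potential_waterpower_hp]",
    data=df,
).fit(cov_type="robust")
print(res)
```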

 


The Impact of the Central African Federation on Industrial Development in Northern Rhodesia, 1953-1963 (NR Online Session 2)

By Mostafa Abdelaal (University of Cambridge)

This research is due to be presented in the second New Researcher Online Session: ‘Industry, Trade & Technology’


 

Northern Rhodesia joined Southern Rhodesia and Nyasaland to form the Central African Federation (CAF), which lasted from 1953 to 1963 (Figure 1). During this period, two contrasting images formed of the Federation’s economic prospects. The first depicts the exploitation of the revenue surpluses of Northern Rhodesia in favour of Southern Rhodesia (Figure 2). The second typifies Kitwe, one of the main mining towns in the Copperbelt in Northern Rhodesia, as the most rapidly developing town in terms of its industrial and commercial sectors (Figure 3). My research examines whether the Federation stimulated or undermined manufacturing growth.

 

Figure 1: Map of the Central African Federation, 1953-1963

Source:  Papers relating to Central African Federation [1952-1958], British Library, EAP121/1/3/16, https://eap.bl.uk/archive-file/EAP121-1-3-16

This paper argues that, despite protests to the contrary, manufacturing in Northern Rhodesia grew rapidly under the Federal tariff. This growth might be attributed to the natural protection some industries enjoyed against the high costs of transporting goods from remote suppliers, and to the Federal tariff against South African imports.

 

Figure 2.  Cartoon published by Central African Post mirroring one of the readers’ views on the Federation

Source: Papers relating to Central African Federation [1952-1958], British Library, EAP121/1/3/16, https://eap.bl.uk/archive-file/EAP121-1-3-16

Figure 3: Kitwe: A model of a mining town in the Copperbelt

Source: NZA, ZIMCO,  1/1/3/ 10, From Mr Gresh the Managing Director of the Northern Rhodesia Industrial Development Corporation Ltd., to the Managing Director of Marcheurop, Brussels, 5th July 1962.

 

My research offers new insights into the rapid growth of the market in Northern Rhodesia, specifically the extent to which local industry responded to the mining-led economic expansion. The first census of industrial production, taken in 1947, provides a benchmark against which to measure growth rates before and during the Federal tariff system. Industrial production in Northern and Southern Rhodesia grew from 3.3 percent to 6 percent and from 12 percent to 16.3 percent, respectively, between 1954 and 1963. The net value added of manufacturing in Northern Rhodesia grew from less than £1 million in 1947 to £6.40 million in 1955, reaching £12.68 million in 1963. Southern Rhodesia witnessed a significant increase in the net value added of manufacturing, from £20 million in 1953 to £50.2 million in 1963.

The composition of manufacturing output before and during the Federation reveals rapid growth in Northern Rhodesia’s production of food, drinks, textiles, and chemicals, which constituted the majority of domestically manufactured goods (Table 2).

Table 2

Net value added of manufacturing output in Northern Rhodesia (£ million, nominal)

Sector/Year                       1947     1955     1963
Food, drink and tobacco           0.20     1.56     5.13
Textiles, clothing, footwear      0.019    0.14     0.43
Metal engineering and repairs     0.082    a        1.58
Non-metallic minerals             a        a        1.83
Wood industries                   0.21     0.46 b   0.85
Building materials                0.010    c        c
Transport equipment               a        1.36     1.43
Printing and publishing           0.11     0.36     0.64
Others d                          0.071    2.53     0.80
Total                             0.710    6.40     12.68
a) Included in ‘others’.

b) Excluding furniture.

c) Building materials excluded from the manufacturing sector since 1953.

d) Includes manufacture of tobacco, made-up textiles other than apparel, furniture, retreading of tyres, chemicals, non-metallic minerals, metal industries other than transport equipment and other, not elsewhere specified (n.e.s.) in 1955; leather and rubber products, chemicals, pottery and other (n.e.s.) in 1963.

Sources: TNA CO 798/24, Blue Books of Statistics, 1947; NAZ, Monthly Digest of Statistics, Central Statistics Office, Lusaka, December 1955, 1964; Young (1973).

 

My research suggests that a more nuanced interpretation is required of the importance of Northern Rhodesia to the South. The Federation curbed Northern Rhodesia’s development of specific industries that existed in Southern Rhodesia, especially steel and textiles, thereby disrupting the optimal allocation of resources to industrial production in the South.

However, Northern Rhodesia’s net value added of manufacturing output benefited from the application of Federal tariffs in certain consumer industries that grew rapidly, such as processed foods and drinks. Consequently, the Federation was beneficial to the growth of manufacturing in Northern Rhodesia (Zambia).

 


 

Mostafa Abdelaal

ma710@cam.ac.uk

Land distribution and Inequality in a Black Settler Colony: The Case of Sierra Leone, 1792–1831

by Stefania Galli and Klas Rönnbäck (University of Gothenburg)

The full article from this blog is published in the European Review of Economic History and is available open access at this link.

“Houses at Sierra-Leone”, Wesleyan Juvenile Offering: A Miscellany of Missionary Information for Young Persons, volume X, May 1853, pp. 55–57, illustration on p. 55. Available on Wikimedia

Land distribution has been identified as a key contributor to economic inequality in pre-industrial societies. Historical evidence on the link between land distribution and inequality for the African continent is scant, unlike the large body of research available for Europe and the Americas. Our article examines inequality in land ownership in Sierra Leone during the early nineteenth century. Our contribution is unique because it studies land inequality at a particularly early stage in African economic history.

In 1787 the Sierra Leone colony was born, the first British colony founded after the American War of Independence. The colony had some peculiar features. Although populated by settlers, they were not of European origin, as in most settler colonies founded at the time. Rather, Sierra Leone came to be populated by people of African descent: a mix of former and liberated slaves from America, Europe and Africa. Furthermore, Sierra Leone had deeply egalitarian foundations, which rendered it more similar to a utopian society than to the other colonies founded on the African continent in subsequent decades. The founders of the colony intended egalitarian land distribution for all settlers, aiming to create a black yeoman settler society.

In our study, we rely on a new dataset constructed from multiple sources pertaining to the early years of Sierra Leone, which provide evidence on household land distribution for three benchmark years: 1792, 1800 and 1831. The first two benchmarks refer to a time when demographic pressure in the colony was limited, while the last represents a period of rapidly increasing demographic pressure due to the inflow of ‘liberated slaves’ from captured slave ships landed at Freetown.

Our findings show that, in its early days, the colony was characterized by highly egalitarian land distribution, possibly the most equal distribution calculated to date. All households possessed some land, in a distribution determined to a large extent by household size. Not only were there no landless households in 1792 and 1800, but land was normally distributed around the mean. Based on these results, we conclude that the ideological foundations of the colony were manifested in egalitarian distribution of land.

Such ideological convictions were, however, hard to maintain in the long run due to mounting demographic pressure and limited government funding. Land inequality thus increased substantially by the last benchmark year (Figure 1). In 1831, land distribution was positively skewed, with a substantial proportion of households in the sample being landless or owning plots much smaller than the median, while a few households held very large plots. We argue that these findings are consistent with an institutional shift in redistributive policy, which enabled inequality to grow rapidly. In the early days, all settlers received a set amount of land. However, by 1831, land could be appropriated freely by the settlers, enabling households to appropriate land according to their ability, but also according to their wish to participate in agricultural production. Specifically, households in more fertile regions appear to have specialized in agricultural production, whereas households in regions unsuitable to agriculture increasingly came to focus upon other economic activities.

Figure 1. Land distribution in Sierra Leone, 1792, 1800 and 1831. Source: as per article
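
As an illustration of the kind of summary measure behind such comparisons, here is a minimal Gini coefficient sketch on hypothetical plot-size vectors; the article’s own inequality measures may differ.

```python
import numpy as np

def gini(x):
    """Gini coefficient of non-negative values (0 = perfect equality)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, for ascending x.
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

# Hypothetical acreage vectors: near-equal plots vs a skewed distribution
# with landless households and a few very large holdings.
print(gini([4, 4, 5, 5, 6, 6]))       # low inequality, c. 1792-1800
print(gini([0, 0, 1, 2, 3, 10, 40]))  # high inequality, c. 1831
```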

Our results have two implications for the debate on the origins of inequality. First, Sierra Leone shows how idealist motives had important consequences for inequality. This is of key importance for wider discussions on the extent to which politics generates tangible changes in society. Second, our results show how difficult it was to put such idealism into effect when confronted by mounting material challenges.

 

To contact the authors:

Stefania Galli (stefania.galli@gu.se)

Twitter: https://twitter.com/galli_stef