A Policy of Credit Disruption: The Punjab Land Alienation Act of 1900

by Latika Chaudhary (Naval Postgraduate School) and Anand V. Swamy (Williams College)

This research is due to be published in the Economic History Review and is currently available on Early View.

 

Farming, farms, crops, agriculture in North India. Available at Wikimedia Commons.

In the late 19th century the British-Indian government (the Raj) became preoccupied with default on debt and the consequent transfer of land in rural India. In many regions Raj officials made the following argument: British rule had created or clarified individual property rights in land, which for the first time made land available as collateral for debt. Consequently, peasants could now borrow up to the full value of their land. The Raj had also replaced informal village-based forms of dispute resolution with a formal legal system operating outside the village, which favored the lender over the borrower. Peasants were spendthrift and naïve, and unable to negotiate the new formal courts created by British rule, whereas lenders were predatory and legally savvy. Borrowers were frequently defaulting, and land was rapidly passing from long-standing resident peasants to professional moneylenders, who were often immigrants, of another religion, or both. This would lead to social unrest and threaten British rule. To preserve British rule it was essential that one of the links in this chain be broken, even if this meant abandoning cherished notions of the sanctity of property and contract.

The Punjab Land Alienation Act (PLAA) of 1900 was the most ambitious policy intervention motivated by this thinking. It sought to prevent professional moneylenders from acquiring the property of traditional landowners. To this end it banned, except under some conditions, the permanent transfer of land from an owner belonging to an ‘agricultural tribe’ to a buyer or creditor who was not from this tribe. Moreover, a lender who was not from an agricultural tribe could no longer seize the land of a defaulting debtor who was from an agricultural tribe.

The PLAA made direct restrictions on the transfer of land a respectable part of the policy apparatus of the Raj, and its influence persists to the present day. There is a substantial literature on the emergence of the PLAA, yet there is no econometric work on two basic questions regarding its impact. First, did the PLAA reduce the availability of mortgage-backed credit? Or were borrowers and lenders able to use various devices to evade the Act, thereby neutralizing it? Second, if less credit was available, what were the effects on agricultural outcomes and productivity? We use panel data methods to address these questions, to our knowledge for the first time.

Our work provides evidence regarding an unusual policy experiment that is relevant to a hypothesis of broad interest. It is often argued that ‘clean titling’ of assets can facilitate their use as collateral, increasing access to credit and leading to more investment and faster growth. Informed by this hypothesis, many studies estimate the effects of titling on credit and other outcomes, but they usually pertain to making assets more usable as collateral. The PLAA went in the opposite direction: it reduced the ‘collateralizability’ of land, which, on the argument we have described, should have reduced investment and growth. We investigate whether it did.

To identify the effects of the PLAA, we assembled a panel dataset on 25 districts in Punjab from 1890 to 1910. Our dataset contains information on mortgages and sales of land; economic outcomes, such as acreage and ownership of cattle; and control variables like rainfall and population. Because the PLAA targeted professional moneylenders, it should have reduced mortgage-backed credit more in places where they were bigger players in the credit market. Hence, we interact a measure of the importance of professional (that is, non-agricultural) moneylenders in the mortgage market with an indicator variable for the introduction of the PLAA, which takes the value of 1 from 1900 onward. As expected, we find that the PLAA contracted credit more in places where professional moneylenders played a larger role, compared to districts with no professional moneylenders. The PLAA reduced mortgage-backed credit by 48 percentage points more at the 25th percentile of our measure of moneylender importance and by 61 percentage points more at the 75th percentile.

However, this decrease in mortgage-backed credit in professional moneylender-dominated areas did not lead to lower acreage or less ownership of cattle. In short, the PLAA affected credit markets as we might expect, without undermining agricultural productivity. Because we have panel data, we are able to account for potential confounding factors such as time-invariant unobserved differences across districts (using district fixed effects), shocks common to all districts in a given year (using year effects), and the possibility that districts were trending differently independent of the PLAA (using district-specific time trends).
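The identification strategy described above can be sketched in a few lines. This is an illustrative example with synthetic data, not the authors' code or sample: a two-way fixed-effects regression in which the regressor of interest is the interaction of a (hypothetical) district-level moneylender share with a post-1900 indicator. With a noiseless data-generating process, least squares recovers the interaction coefficient exactly.

```python
# Illustrative sketch only (synthetic data, not the paper's dataset):
# district fixed effects + year effects + (moneylender share x post-1900).
import numpy as np

rng = np.random.default_rng(0)
districts, years = 25, list(range(1890, 1911))
share = rng.uniform(0, 1, districts)   # hypothetical pre-period moneylender importance
true_beta = -0.5                       # hypothetical PLAA interaction effect

rows = []
for d in range(districts):
    for t in years:
        post = 1.0 if t >= 1900 else 0.0
        y = 0.1 * d + 0.02 * (t - 1890) + true_beta * share[d] * post
        rows.append((d, t - 1890, share[d] * post, y))

d_idx = np.array([r[0] for r in rows])
t_idx = np.array([r[1] for r in rows])
x = np.array([r[2] for r in rows])
y = np.array([r[3] for r in rows])

# Design matrix: full set of district dummies, year dummies (one dropped),
# and the interaction term whose coefficient we want.
D = (d_idx[:, None] == np.arange(districts)).astype(float)
T = (t_idx[:, None] == np.arange(len(years))).astype(float)[:, 1:]
X = np.column_stack([D, T, x])

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0][-1]
print(round(beta_hat, 3))   # → -0.5, the interaction coefficient
```

The interaction varies across both districts and years, so it is not absorbed by either set of dummies; district-specific trends could be added as further columns in the same way.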

British officials provided a plausible explanation for the PLAA's lack of impact on agricultural production: lenders had merely become more judicious. They were still willing to lend for productive activity, but not for ‘extravagant’ expenditures, such as social ceremonies. There may be a general lesson here: policies that make it harder for lenders to recover money may have the beneficial effect of encouraging due diligence.

 

 

To contact the authors:

lhartman@nps.edu

aswamy@williams.edu

Falling Behind and Catching up: India’s Transition from a Colonial Economy

by Bishnupriya Gupta (University of Warwick and CAGE)

The full article on which this blog post is based was published in The Economic History Review and is available here.

Official of the East India Company riding in an Indian procession, watercolour on paper, c. 1825–30; in the Victoria and Albert Museum, London. Available at <https://www.britannica.com/topic/East-India-Company/media/1/176643/162308>

There has been much discussion in recent years about India’s growth failure in the first 30 years after independence in 1947. India became a highly regulated economy and withdrew from the global market. This led to inefficiency and low growth. The architect of Indian planning, Jawaharlal Nehru, the first prime minister, did not put India on an East Asian path. By contrast, the last decade of the 20th century saw a reintegration into the global economy, and today India is one of the fastest-growing economies.

Any analysis of Indian growth and development that starts in 1947 is deeply flawed. It ignores the history of development and the impact of colonization. This paper takes a long-run view of India’s economic development and argues that the Indian economy stagnated under colonial rule and that a reversal came with independence. Although growth was slow in comparison with East Asia, the Nehruvian legacy put India on a growth path.

Tharoor (2017) in his book Inglorious Empire argues that Britain’s industrial revolution was built on the destruction of Indian textile industries and that British rule turned India from an exporter of industrial goods into an exporter of agricultural goods. A different view of colonial rule comes from Niall Ferguson in his book Empire: How Britain Made the Modern World. Ferguson claimed that even if British rule did not increase Indian incomes, things might have been much worse under a restored Mughal regime in 1857. The British built the railways and connected India to the rest of the world.

Neither of these views is based on statistical evidence. Data on GDP per capita (Figure 1) show that there was a slow decline and stagnation over a long period. Evidence on wages and per capita GDP shows a prosperous economy in 1600 under the Mughal Emperor Akbar. Living standards began to decline from the middle of the 17th century, before colonization, and the decline continued as the East India Company gained territorial control in 1757. It is important to note that the decline coincided with increased integration with international markets and the rising trade in textiles to Europe. In 1857, India became a part of the global economy of the British Empire. Indian trade volume increased, but from an exporter of industrial products India became an exporter of food and raw materials. Per capita income stagnated even as trade increased, the colonial government built a railway network, and British entrepreneurs owned large parts of the industrial sector. In 1947, the country was one of the poorest in the world. Figure 1 below also tells us that growth picked up after independence as India moved towards regulation and restrictions on trade and private investment.

What explains the stagnation in income prior to independence? The colonial government invested very little in the main sector, agriculture. The bulk of British investment went to the railways, not to irrigation. The railways initially connected the hinterland with the ports, but over time they integrated markets, reducing price variability across them. However, they did not contribute to increasing agricultural productivity. Without large investment in irrigation, output per acre declined in areas that did not get canals. Industry, on the other hand, was the fastest-growing sector, but employed only 10 per cent of the work force. Stagnation of the economy under colonial rule had little to do with trade.

Indian GDP per capita between 1600 and 2000. Source: Aniruddha Bagchi, “Why did the Indian economy stagnate under the colonial rule?”  in Ideas for India 2013

India’s growth reversal began in independent India, with regulation of trade and industry and a break with the global economy. For the first time in the 20th century, the Indian economy began to grow, as Figure 1 shows, with investment in capital-goods industries and agricultural infrastructure. Industrial growth and the green revolution in agriculture moved the economy from stagnation to growth. This growth slowed down, but the economy did not stagnate as in the colonial period. Following economic reforms after the 1980s, India has entered a high-growth regime. The initial increase in growth was a response to the removal of restrictions on domestic private investment, well before reintegration into the global economy in the 1990s. The foundations for growth were laid in the first three decades after independence.

The institutional legacy of British rule had long-run consequences. One example is education policy, which prioritized investment in secondary and tertiary education, creating a small group with higher education but few with basic primary schooling. In 1947, less than one-fifth of the population had basic education. The bias towards higher education continued after independence and has created an advantage for the service sector. There are lessons from history for understanding Indian growth after independence.

 

To contact the author: B.Gupta@warwick.ac.uk

Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69

by Jean-Pascal Bassino (ENS Lyon) and Pierre van der Eng (Australian National University)

This blog is part of a larger research paper published in the Economic History Review.

 

Vietnam, rice paddy. Available at Pixabay.

In the ‘great divergence’ debate, China, India, and Japan have been used to represent the Asian continent. However, their development experience is unlikely to be representative of the whole of Asia. The countries of Southeast Asia were relatively underpopulated for a considerable period. Very different endowments of natural resources (particularly land) and labour were key parameters that determined economic development options.

Maddison’s series of per-capita GDP in purchasing power parity (PPP) adjusted international dollars, based on a single 1990 benchmark and backward extrapolation, indicate that a divergence took place in 19th century Asia: Japan was well above other Asian countries in 1913. In 2018 the Maddison Project Database released a new international series of GDP per capita that accommodate the available historical PPP-based converters. Due to the very limited availability of historical PPP-based converters for Asian countries, the 2018 database retains many of the shortcomings of the single-year extrapolation.

Maddison’s estimates indicate that Japan’s GDP per capita in 1913 was much higher than in other Asian countries, and that Asian countries started their development experiences from broadly comparable levels of GDP per capita in the early nineteenth century. This implies that an Asian divergence took place in the 19th century as a consequence of Japan’s economic transformation during the Meiji era (1868–1912). There is now growing recognition that the use of a single benchmark year, and the choice of that year, may influence estimated historical levels of GDP per capita across countries. Relative levels of Asian countries based on Maddison’s estimates of per capita GDP are not confirmed by other indicators such as real unskilled wages or the average height of adults.

Our study uses available estimates of GDP per capita in current prices from historical national accounting projects, and estimates PPP-based converters and PPP-adjusted GDP with multiple benchmark years (1913, 1922, 1938, 1952, 1958, and 1969) for India, Indonesia, Korea, Malaya, Myanmar (then Burma), the Philippines, Sri Lanka (then Ceylon), Taiwan, Thailand and Vietnam, relative to Japan. China is added on the basis of other studies. PPP-based converters are used to calculate GDP per capita in constant PPP yen. The indices of GDP per capita in Japan and other countries were expressed as a proportion of GDP per capita in Japan during the years 1910–70 in 1934–6 yen, and then converted to 1990 international dollars by relying on PPP-adjusted Japanese series comparable to US GDP series. Figure 1 presents the resulting series for Asian countries.
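The conversion chain just described can be illustrated with a toy calculation. All numbers below are hypothetical, chosen only to show the arithmetic: a country's GDP per capita in current local prices is turned into 1934–6 yen with a PPP converter, expressed as a ratio to Japan, and then scaled by Japan's own PPP-adjusted level in 1990 international dollars.

```python
# Hypothetical numbers, for illustration only (not the paper's estimates).
gdp_pc_local = 120.0        # country GDP per capita, current local prices
ppp_yen_per_local = 1.5     # PPP converter: 1934-6 yen per local-currency unit
japan_gdp_pc_yen = 300.0    # Japan GDP per capita in 1934-6 yen

# Step 1: express the country as a proportion of Japan in constant PPP yen.
ratio_to_japan = gdp_pc_local * ppp_yen_per_local / japan_gdp_pc_yen

# Step 2: scale by Japan's PPP-adjusted level in 1990 international dollars.
japan_1990_intl_dollars = 2500.0
country_1990_intl_dollars = ratio_to_japan * japan_1990_intl_dollars
print(country_1990_intl_dollars)   # → 1500.0
```

The point of multiple benchmark years is that `ppp_yen_per_local` is re-estimated at each benchmark rather than extrapolated from a single year.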

 

Figure 1. GDP per capita in selected Asian countries, 1910–1970 (1934–6 Japanese yen)

Sources: see original article.

 

The conventional view dates the start of the divergence to the nineteenth century. Our study identifies the First World War and the 1920s as the era during which the little divergence in Asia occurred. During the 1920s, most countries in Asia (except Japan) depended significantly on exports of primary commodities. The growth experience of Southeast Asia seems to have been largely characterised by market integration in national economies and by the mobilisation of hitherto underutilised resources (labour and land) for export production. Particularly in the land-abundant parts of Asia, the opening-up of land for agricultural production led to economic growth.

Commodity price changes may have become debilitating when their volatility increased after 1913. This was followed by episodes of import-substituting industrialisation, particularly after 1945. While Japan rapidly developed its export-oriented manufacturing industries from the First World War, other Asian countries increasingly had inward-looking economies. This pattern lasted until the 1970s, when some Asian countries followed Japan on a path of export-oriented industrialisation and economic growth. For some countries this was a staggered process that lasted well into the 1990s, when the World Bank labelled this development the ‘East Asian miracle’.

 

To contact the authors:

jean-pascal.bassino@ens-lyon.fr

pierre.vandereng@anu.edu.au

 

References

Bassino, J-P. and Van der Eng, P., ‘Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69’, Economic History Review (forthcoming).

Fouquet, R. and Broadberry, S., ‘Seven centuries of European economic growth and decline’, Journal of Economic Perspectives, 29 (2015), pp. 227–44.

Fukao, K., Ma, D., and Yuan, T., ‘Real GDP in pre-war Asia: a 1934–36 benchmark purchasing power parity comparison with the US’, Review of Income and Wealth, 53 (2007), pp. 503–37.

Inklaar, R., de Jong, H., Bolt, J., and van Zanden, J. L., ‘Rebasing “Maddison”: new income comparisons and the shape of long-run economic development’, Groningen Growth and Development Centre Research Memorandum no. 174 (2018).

Link to the website of the Southeast Asian Development in the Long Term (SEA-DELT) project:  https://seadelt.net

Factor Endowments on the “Frontier”: Algerian Settler Agriculture at the Beginning of the 1900s

by Laura Maravall Buckwalter (University of Tübingen)

This research is due to be published in the Economic History Review and is currently available on Early View.

 

It is often claimed that access to land and labour during the colonial years determined land redistribution policies and labour regimes that had persistent, long-run effects. For this reason, the amount of land and labour available in a colonized country at a fixed point in time are being included more frequently in regression frameworks as proxies for the types of colonial modes of production and institutions. However, despite the relevance of these variables within the scholarly literature on settlement economies, little is known about the way in which they changed during the process of settlement. This is because most studies focus on long-term effects and tend to exclude relevant inter-country heterogeneities that should be included in the assessment of the impact of colonization on economic development.

In my article, I show how colonial land policy and settler modes of production responded differently within a colony. I examine rural settlement in French Algeria at the start of the 1900s and focus on cereal cultivation, the crop that allowed the arable frontier to expand. I rely upon the literature that reintroduces the notion of ‘land frontier expansion’ into the understanding of settler economies. By including the frontier in my analysis, it is possible to assess how colonial land policy and settler farming adapted to very different local conditions. For example, as settlers moved into the interior regions they encountered growing land aridity. I argue that the expansion of rural settlement into the frontier was strongly dependent upon the adoption of modern ploughs, intensive labour (modern ploughs were not labour-saving) and larger cultivated fields (because they removed fallow areas), which, in turn, had a direct impact on colonial land policy and settler farming.

Figure 1. Threshing wheat in French Algeria (Zibans)

Source: Retrieved from https://www.flickr.com/photos/internetarchivebookimages/14764127875/in/photostream/, last accessed 31st of May, 2019.

 

My research takes advantage of annual agricultural statistics reported by the French administration at the municipal level in Constantine for the years 1904/05 and 1913/14. The data are analysed in a cross-section and panel regression framework and, although the dataset provides a snapshot at only two points in time, the ability to identify the timing of settlement after the 1840s for each municipality provides a broader temporal framework.

Figure 2. Constantine at the beginning of the 1900s

Source: The original outline of the map derives mainly from the Carte de la Colonisation Officielle, Algérie (1902), available online at the digital library of the Bibliothèque Nationale de France, retrieved from http://catalogue.bnf.fr/ark:/12148/cb40710721s (accessed on 28 Apr. 2019), and ANOM-iREL, http://anom.archivesnationales.culture.gouv.fr/ (accessed on 28 Apr. 2019).

 

The results illustrate how the limited amount of arable land on the Algerian frontier forced colonial policymakers to relax restrictions on the amount of land owned by settlers. This change in policy occurred because expanding the frontier into less fertile regions and consolidating settlement required agricultural intensification – changes in the frequency of crop rotation and more intensive ploughing. These techniques required larger fields and were therefore incompatible with the French colonial ideal of establishing a small-scale, family-farm type of settler economy.

My results also indicate that settler farmers were able to adopt more intensive techniques mainly by relying on the abundant indigenous labour force. The man-to-cultivable land ratio, which increased after the 1870s due to continuous indigenous population growth and colonial land expropriation measures, eased settler cultivation, particularly on the frontier. This confirms that the availability of labour relative to land is an important variable that should be taken into consideration to assess the impact of settlement on economic development. My findings are in accord with Lloyd and Metzer (2013, p. 20), who argue that, in Africa, where the indigenous peasantry was significant, the labour surplus allowed low wages and ‘verged on servility’, leading to a ‘segmented labour and agricultural production system’. Moreover, it is precisely the presence of a large indigenous population relative to that of the settlers, and the reliance of settlers upon the indigenous labour and the state (to access land and labour), that has allowed Lloyd and Metzer to describe Algeria (together with Southern Rhodesia, Kenya and South Africa) as having a “somewhat different type of settler colonialism that emerged in Africa over the 19th and early 20th Centuries” (2013, p.2).

In conclusion, it is reasonable to assume that, as rural settlement gains ground within a colony, local endowments and cultivation requirements change. The case of rural settlement in Constantine reveals how settler farmers and colonial restrictions on ownership size adapted to the varying amounts of land and labour.

 

To contact: 

laura.maravall@uni-tuebingen.de

Twitter: @lmaravall

 

References

Ageron, C. R. (1991). Modern Algeria: a history from 1830 to the present (9th ed). Africa World Press.

Frankema, E. (2010). The colonial roots of land inequality: geography, factor endowments, or institutions? The Economic History Review, 63(2):418–451.

Frankema, E., Green, E., and Hillbom, E. (2016). Endogenous processes of colonial settlement. the success and failure of European settler farming in Sub-Saharan Africa. Revista de Historia Económica-Journal of Iberian and Latin American Economic History, 34(2), 237-265.

Easterly, W., & Levine, R. (2003). Tropics, germs, and crops: how endowments influence economic development. Journal of monetary economics, 50(1), 3-39.

Engerman, S. L., and Sokoloff, K. L. (2012). Economic development in the Americas since 1500: endowments and institutions. Cambridge University Press.

Lloyd, C. and Metzer, J. (2013). Settler colonization and societies in world history: patterns and concepts. In Settler Economies in World History, Global Economic History Series 9:1.

Lützelschwab, C. (2007). Populations and Economies of European Settlement Colonies in Africa (South Africa, Algeria, Kenya, and Southern Rhodesia). In Annales de démographie historique (No. 1, pp. 33-58). Belin.

Lützelschwab, C. (2013). Settler colonialism in Africa. In Lloyd, C., Metzer, J., and Sutch, R. (eds.), Settler economies in world history. Brill.

Willebald, H., and Juambeltz, J. (2018). Land Frontier Expansion in Settler Economies, 1830–1950: Was It a Ricardian Process? In Agricultural Development in the World Periphery (pp. 439-466). Palgrave Macmillan, Cham.

Plague and long-term development

by Guido Alfani (Bocconi University, Dondena Centre and IGIER)

 

The full paper has been published in The Economic History Review and is available here.

A YouTube video accompanies this work and can be found here.

 

How did preindustrial economies react to extreme mortality crises caused by severe epidemics of plague? Were health shocks of this kind able to shape long-term development patterns? While past research focused on the Black Death that affected Europe during 1347–52 (Álvarez Nogal and Prados de la Escosura 2013; Clark 2007; Voigtländer and Voth 2013), in a forthcoming article with Marco Percoco we analyse the long-term consequences of what was by far the worst mortality crisis affecting Italy during the Early Modern period: the 1629–30 plague, which killed an estimated 30–35% of the northern Italian population — about two million victims.

 

Figure 1. Luigi Pellegrini Scaramuccia (1670), Federico Borromeo visits the plague ward during the 1630 plague.


Source: Milan, Biblioteca Ambrosiana

 

This episode is significant in Italian history and, more generally, for our understanding of the Little Divergence between the North and South of Europe. It has recently been hypothesized that the 1630 plague was the source of Italy’s relative decline during the seventeenth century (Alfani 2013). However, this hypothesis lacked solid empirical evidence. To resolve this question, we take a different approach from previous studies and demonstrate that plague lowered the trajectory of development of Italian cities. We argue that this was mostly due to a productivity shock caused by the plague, but we also explore other contributing factors. Consequently, we provide support for the view that the economic consequences of severe demographic shocks need to be understood and studied on a case-by-case basis, as the historical context in which they occurred can lead to very different outcomes (Alfani and Murphy 2017).

After assembling a new database of mortality rates in a sample of 56 cities, we estimate a model of population growth allowing for different regimes of growth. We build on the seminal papers by Davis and Weinstein (2002) and Brakman et al. (2004), who based their analysis on a new economic geography framework in which a relative city-size growth model is estimated to determine whether a shock has temporary or persistent effects. We find that cities affected by the 1629–30 plague experienced persistent, long-term effects (i.e., up to 1800) on their pattern of relative population growth.
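The logic of this temporary-versus-persistent test can be sketched with synthetic data (this is not the authors' sample or code). In the Davis-Weinstein setup, each city's post-shock change in log relative size is regressed on the shock itself: a slope near -1 means cities fully rebounded (a temporary shock), while a slope near 0 means the shock persisted, as the paper finds for the 1629–30 plague.

```python
# Minimal sketch with synthetic data of the relative city-size persistence test.
import numpy as np

rng = np.random.default_rng(1)
n_cities = 56
mortality = rng.uniform(0.0, 0.6, n_cities)   # hypothetical plague death rates
shock = -mortality                            # immediate drop in log relative size

def recovery_slope(rho):
    """rho = 1: fully persistent shock; rho = 0: fully temporary shock."""
    post_growth = -(1.0 - rho) * shock        # rebound over the following decades
    X = np.column_stack([np.ones(n_cities), shock])
    return np.linalg.lstsq(X, post_growth, rcond=None)[0][1]

print(round(recovery_slope(1.0), 2))  # persistent shock → slope 0.0
print(round(recovery_slope(0.0), 2))  # temporary shock → slope -1.0
```

In practice the estimated slope lies between these polar cases, and the regression is run with controls; the point of the sketch is only the mapping from the slope to the persistence of the shock.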

 

Figure 2. Giacomo Borlone de Buschis (attributed), Triumph of Death (1485), fresco


Source: Oratorio dei Disciplini, Clusone (Italy).

 

We complete our analysis by estimating the absolute impact of the epidemic. We find that in northern Italian regions the plague caused a lasting decline in both the size and the rate of change of urban populations. The lasting damage done to the urban population is shown in Figure 3. For urbanization rates it will suffice to notice that across the North of Italy, by 1700 (70 years after the 1630 plague), they were still more than 20 per cent lower than in the decades preceding the catastrophe (16.1 per cent in 1700 versus an estimated 20.4 per cent in 1600, for cities >5,000). Overall, these findings suggest that severe plagues may contribute to the decline of economic regions or whole countries. Our conclusions are strengthened by showing that while there is clear evidence of the negative consequences of the 1630 plague, there is hardly any evidence for a positive effect (Pamuk 2007). We hypothesize that the potential positive consequences of the 1630 plague were entirely eroded by a negative productivity shock.

 

Figure 3. Size of the urban population in Piedmont, Lombardy, and Veneto (1620-1700)


Source: see original article

 

By demonstrating that the plague had a persistent negative effect on many key Italian urban economies, we provide support for the hypothesis that the origins of northern Italy’s relative economic decline are to be found in particularly unfavorable epidemiological conditions. It was the context in which an epidemic occurred that increased its ability to affect the economy, not the plague itself. Indeed, the 1630 plague affected the main states of the Italian Peninsula at the worst possible moment, when their manufacturing sectors were dealing with increasing competition from northern European countries. This explanation, however, differs from the interpretations of the Little Divergence offered in the recent literature.

 

To contact the author: guido.alfani@unibocconi.it

 

References

Alfani, G., ‘Plague in seventeenth century Europe and the decline of Italy: an epidemiological hypothesis’, European Review of Economic History, 17, 4 (2013), pp. 408-430.

Alfani, G. and Murphy, T., ‘Plague and Lethal Epidemics in the Pre-Industrial World’, Journal of Economic History, 77, 1 (2017), pp. 314-343.

Alfani, G. and Percoco, M., ‘Plague and long-term development: the lasting effects of the 1629-30 epidemic on the Italian cities’, The Economic History Review, forthcoming, https://doi.org/10.1111/ehr.12652

Álvarez Nogal, C. and Prados de la Escosura, L., ‘The Rise and Fall of Spain (1270-1850)’, Economic History Review, 66, 1 (2013), pp. 1–37.

Brakman, S., Garretsen H., Schramm M. ‘The Strategic Bombing of German Cities during World War II and its Impact on City Growth’, Journal of Economic Geography, 4 (2004), pp. 201-218.

Clark, G., A Farewell to Alms (Princeton, 2007).

Davis, D.R. and Weinstein, D.E. ‘Bones, Bombs, and Break Points: The Geography of Economic Activity’, American Economic Review, 92, 5 (2002), pp. 1269-1289.

Pamuk, S., ‘The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600’, European Review of Economic History, 11 (2007), pp. 289-317.

Voigtländer, N. and H.J. Voth, ‘The Three Horsemen of Riches: Plague, War, and Urbanization in Early Modern Europe’, Review of Economic Studies 80, 2 (2013), pp. 774–811.

Competition and rent-seeking during the slave trade

by Jose Corpuz (University of Warwick)


The Royal African Company of England (RAC) gave gifts and similar payments to African chiefs in return for exclusive services. But African chiefs who could stop or redirect trade coming from inland could extract payments from the RAC, particularly when competition from other English merchants increased.

There is much descriptive evidence on rent-seeking in Africa during the slave trade. My research provides quantitative evidence of rent-seeking and shows that it changed over time. I constructed the database of more than 20,000 payments myself, from handwritten seventeenth-century RAC archives.

My study contributes to the debate about rent-seeking during the slave trade. The ‘fishers-of-men’ view argues that slaves were a common property resource and competition among enslavers would dissipate any rents (Thomas and Bean, 1974). The ‘hunters-of-rent’ view, however, argues that the competition was restricted by barriers to entry, enabling rents (Evans and Richardson, 1995).

My research provides quantitative evidence that rent-seeking existed and shows when, where and how much rent-seeking increased during the slave trade.

I used my own dataset to examine rent-seeking during the slave trade, looking at more than 20,000 payments (for example ‘dashey’, a local term used in West Africa which literally means gift) that the RAC made to seventeenth-century chiefs in Ghana. The RAC made these payments to African chiefs in return for exclusive trade with caravan merchants from inland. These payments were separate from any price paid for slaves themselves.

I use an event study to show that the power of these chiefs increased when the RAC lost its royal privileges after the Glorious Revolution in 1688, a change that increased competition from other English merchants.

I answer three questions. First: what was the distribution of payments across chiefs?

I find that the distribution of payments to chiefs was unequal. In particular, the highest-ranking head chiefs received the greatest value of payments per capita. These findings provide quantitative evidence that the slave trade was ‘the business of kings, rich men, and prime merchants’ (in other words, elites) and that the distribution of payments among them was unequal (Hopkins, 1973).


Second: what commodities were included and how did this change over time?

Usually, the payments were European cloth, firearms and alcohol. Head chiefs used European cloth to signal authority and prestige, and received most of the European cloth, particularly when their bargaining position improved after 1688.

These findings highlight the importance of payments as a channel through which European merchants supplied goods in response to African demand. Contrary to Rodney’s (1988) ‘How Europe Underdeveloped Africa’ thesis, European merchants did not dictate which goods they supplied to Africans, and Africans were not passive recipients of those goods.


Third: did payments rise after the Glorious Revolution of 1688, which reduced the RAC’s privileges – such as the power to seize other English merchants’ ships and cargoes (Davies, 1957; Carlos and Brown-Kruse, 1996) – and so facilitated competition from other English merchants?

Using difference-in-differences (‘diff-in-diff’) estimation, I find that after 1688 the RAC made greater payments to the chiefs whose compliance was most important in deterring other English merchants from competing with it.

In particular, I find that payments increased the most to chiefs in ‘non-coast caravan routes’ or locations where they could stop or redirect trade flowing from inland. The chiefs demanded an increased share of the RAC’s total revenue. Qualitative evidence from the letters (Law, 2001, 2006, 2010) supports the view that this increase can be explained by the chiefs’ increased bargaining power.
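The two-by-two difference-in-differences comparison behind this result can be sketched as follows; the groups and payment values below are illustrative stand-ins, not the archival data.

```python
# Sketch of the diff-in-diff logic: compare the before/after change in
# payments for chiefs on non-coast caravan routes ("treated") against the
# change for other chiefs ("control").
def diff_in_diff(payments):
    """payments: list of (treated, post, value) tuples.
    treated = chief on a non-coast caravan route; post = after 1688."""
    def mean(treated, post):
        vals = [v for t, p, v in payments if t == treated and p == post]
        return sum(vals) / len(vals)
    return (mean(True, True) - mean(True, False)) - \
           (mean(False, True) - mean(False, False))

# Illustrative data: caravan-route chiefs' payments jump after 1688,
# other chiefs' payments rise only slightly.
sample = [
    (True, False, 10), (True, False, 12),   # caravan-route chiefs, pre-1688
    (True, True, 25),  (True, True, 27),    # caravan-route chiefs, post-1688
    (False, False, 8), (False, False, 10),  # other chiefs, pre-1688
    (False, True, 11), (False, True, 13),   # other chiefs, post-1688
]
print(diff_in_diff(sample))  # → 12.0
```

The estimate nets out any common post-1688 rise in payments, isolating the extra increase to the chiefs best placed to block the RAC’s competitors.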

Overall, the findings are consistent with the hunters-of-rent view of rent-seeking during the slave trade. Some chiefs found themselves in the right place at the right time and took advantage of the situation.

 

References

Carlos, AM, and J Brown-Kruse (1996) ‘The decline of the Royal African Company: Fringe firms and the role of the charter’, Economic History Review 49(2): 291-313.

Davies, KG (1957) The Royal African Company, Longmans, Green and Co. Ltd.

Evans, EW, and D Richardson (1995) ‘Hunting for rents: The economics of slaving in pre-colonial Africa’, Economic History Review 48(4): 665-86.

Hopkins, AG (1973) An Economic History of West Africa, Routledge.

Law, R (2001, 2006, 2010) The English in West Africa, 1685-1698: The local correspondence of the Royal African Company of England, 1681-1699 (Vols. 1-3), Oxford University Press.

Rodney, W (1988) How Europe Underdeveloped Africa, Bogle-L’Ouverture Publications.

Thomas, RP, and RN Bean (1974) ‘The fishers of men: The profits of the slave trade’, Journal of Economic History 34(4): 885-914.

Recurring growth without industrialisation: occupational structures in northern Nigeria, 1921-2006

by Emiliano Travieso (University of Cambridge)

 

Nigeria2
InStove factory workers in Nigeria. Available at <http://www.instove.org/node/59>

Despite recent decades of economic growth, absolute poverty is on the rise in Nigeria, as population increases continue to outpace the reduction in poverty rates. Manufacturing industries, which have the potential to absorb large numbers of workers into better paying jobs, have expanded only very modestly, and most workers remain employed in low productivity sectors (such as the informal urban economy and subsistence agriculture). 

This scenario is particularly stark in the northern states, which concentrate more than half of the national population and where poverty rates are at their highest. As the largest region of the most populous nation on the continent (and itself three times as large as any other West African country), quantifying and qualifying northern Nigeria’s past economic development is crucial for discussing the prospects for structural change and poverty alleviation in sub-Saharan Africa.

 My research traces the major shifts in the economy of northern Nigeria during and since colonial rule through a detailed study of occupational structures, based on colonial and independence-era censuses and other primary sources. 

 While the region has a long history of handicraft production – under the nineteenth-century Sokoto Caliphate it became the largest textile producer in sub-Saharan Africa – northern Nigeria deindustrialised during British indirect rule. Partially as a result of the expansion of export agriculture (mainly of groundnuts and, to a lesser extent, cotton), the share of the workforce in manufacturing decreased from 18% to 7% in the last four decades of the colonial period. 

 After independence in 1960, growth episodes were led by transport, urban services and government expenditure fuelled by oil transfers from the southeast of the country, but did not spur significant structural change in favour of manufacturing. By 2006, the share of the workforce in manufacturing had risen only slightly: to 8%. 

 In global economic history, poverty alleviation has often resulted from a previous period of systematic movement of labour from low- to high-productivity sectors. The continued expansion of manufacturing achieved just that during the Industrial Revolution in the West and, in the twentieth century, in many parts of the Global South. 

 In large Asian and Latin American economies, late industrialisation sustained impressive achievements in terms of job creation and poverty alleviation. In cases such as Brazil, Mexico and China, large domestic markets, fast urbanisation and improvements in education contributed decisively to lifting millions of people out of poverty. 

Can northern Nigeria, with its large population, deep historical manufacturing roots and access to the largest national market in Africa, develop into a late industrialiser in the twenty-first century? My study suggests that rapid demographic growth will not necessarily result in structural change, but that, through improved market integration and continued expansion of education, the economy could harness the skills and energy of its rising population to produce a more impressive expansion of manufacturing than we have yet seen. 

Trains of thought: evidence from Sweden on how railways helped ideas to travel

by Eric Melander (University of Warwick)

This paper was presented at the EHS Annual Conference 2019 in Belfast.

 

 

Navvies_at_Nybro-Sävsjöström_railway_Sweden
Navvies during work at Nybro-Sävsjöströms Järnväg in Sweden. Standing far right is Oskar Lindahl from Mackamåla outside Målerås. Available at Wikimedia Commons.

The role of rail transport in shaping the geography of economic development is widely recognised. My research shows that its role in enabling the spread of political ideas is equally significant.

Using a natural experiment from Swedish history, I find that the increased ability of individuals to travel was a key driver for the spatial diffusion of engagement in grassroots social movements pushing for democratisation.

In nineteenth-century Sweden, as in much of Europe, trade unions, leftist parties, temperance movements and non-state churches were important forces for democratisation and social reform. By 1910, 700,000 Swedes were members of at least one such group, out of a total population of around 5.5 million. Over the same period, the Swedish rail network expanded from just 6,000km in 1881 to 14,000km in 1910.

Swedish social historians, such as Sven Lundkvist, have noted that personal visits by agitators and preachers were important channels for the spread of the new ideas. A key example is August Palm, a notable social democrat and labour activist, who made heavy use of the railways during his extensive ‘agitation travels’. And Swedish political economist and economic historian Eli Heckscher has written about the ‘democratising effect’ of travel in this period.

My study is the first to test formally the hypothesised link between railway expansion and the success of these movements, using modern economic and statistical techniques.

By analysing a rich dataset, including historical railway maps, information on passenger and freight traffic, census data and archival information on social movement membership in Sweden, I demonstrate the impact of railway access on the spread and growth of activist organisations. Well-connected towns and cities were more likely to host at least one social movement organisation, and to see more rapid growth in membership numbers.

A key mechanism underlying this result is that railways reduced effective distances between places: the increased ability of individuals to travel and spread their ideas drove the spatial diffusion of movement membership.

The positive impact of rail comes only through increased passenger flows to a town or city – freight volumes had no impact, suggesting that it was the mobility of individuals that spread new ideas, rather than an acceleration of economic activity more broadly.

These findings are important because they shed light on the role played by railways, and by communication technology more broadly, in the diffusion of ideas. Recent work on this topic has focused on the role of social media on short-lived bursts of (extreme) collective action. Research by Daron Acemoglu, Tarek Hassan and Ahmed Tahoun, for example, shows that Twitter shaped protest activity during the Arab Spring.

My study shows that technology also matters for more broad-based popular engagement in nascent social movements over much longer time horizons. Identifying the importance of technology for the historical spread of democratic ideas can therefore sharpen our understanding of contemporary political events.

A comparative history of occupational structure and urbanisation across Africa: design, data and preliminary overview

by Gareth Austin (University of Cambridge) and Leigh Shaw-Taylor (University of Cambridge)

This paper was presented at the EHS Annual Conference 2019 in Belfast.

 

A lone giraffe in Nairobi National Park. Available at Wikimedia Commons.

The general story in the research literature is that, under colonial rule from around the 1890s to around 1960, African economies became structured around exports of primary products. This pattern persisted through the unsuccessful early post-colonial policies of import-substituting industrialisation, was entrenched by ‘structural adjustment’ in the 1980s, and has continued through the relatively strong economic growth across the continent since around 1995.

Our research offers a preliminary overview of the AFCHOS project, an international collaboration involving 20 scholars currently preparing 15 national or sub-national case studies. The discussion is organised in two sections.

Section I describes how, by creating country databases as an essential first step, we aim to develop the first overview of changing occupational structures across sub-Saharan Africa, from the moment when the necessary data become available in each country to the present.

We track the shifts between agriculture, extraction, the secondary sector and services, and explore the trends in specific occupational groups within each of these sectors. We also examine the closely related process of urbanisation.

The core of the enterprise is the construction of datasets that reflect without distortion the specificities of African conditions, are commensurable across the continent, and are also commensurable with the datasets developed by parallel projects on the occupational structures of Eurasia and the Americas.

Section II outlines preliminary findings. It is centred on four graphs, depicting the evolution of the share of the economically active population in each sector, for about 14 countries. We relate these to the indications of the evolution of the size and location of population, and the size and composition of GDP.

The population of sub-Saharan Africa has increased perhaps six times since the influenza pandemic of 1918, and average living standards have not fallen: a remarkable achievement in terms of aggregate economic growth, and one that has not been sufficiently appreciated.

It is also striking that the multiplication of population, enabled by falling mortality rates, was accompanied by rapid urbanisation. There were also improvements in living standards, though modest and uneven.

Agriculture’s share in employment generally fell, especially after 1960. The share of manufacturing evolved quite differently over space and time within Africa, as we will elaborate.

Urbanisation has been accompanied by a general growth of employment in services. Where we have disaggregated the latter, so far, there has been dramatic growth in transport and distributive trades, suggesting increasing integration of national and regional economies – an important step in economic development.

The trafficking of children: exploitation, sexual slavery and the League of Nations

by Elizabeth A. Faulkner (University of Hull) and Cathal Rogers (Staffordshire University)

This paper was presented at the EHS Annual Conference 2019 in Belfast.

 

pexels-photo-276988
Child trafficking graffiti on brick. Available at Pexels.

The trafficking of children receives extensive media coverage today, with endless tales of exploited and enslaved children. But such reports are nothing new.

For example in 1923, the League of Nations Advisory Committee on the Traffic in Women and Children heard that ‘The White slave traffic assumed large proportions; young girls – and even young boys – swelled the personnel of the over-numerous houses of ill-fame’.[1] The purpose of our study is to identify whether fears of the sexual enslavement of children during the era were legitimate or the product of a ‘moral panic’.

Human trafficking is a relatively new area of international law, but it has surfaced repeatedly as a matter of grave moral concern at the international level for over a century. In 1921, the League of Nations passed the International Convention for the Suppression of the Traffic in Women and Children.

This Convention marked a notable departure from the overtly racialised focus of previous attempts to address human trafficking, namely the 1904 and 1910 White Slave Traffic Conventions.[2]

Our study investigates the trafficking and exploitation of children between 1922 and 1929 through an examination of the archives of the League of Nations, Geneva. The inquiry sought to uncover recorded cases of child trafficking by focusing on the Summary of Annual Reports submitted to the Traffic in Women and Children Committee.

Of the 324 responses (1922-1929) considered by this inquiry, only 11 contained references to trafficking – just 3.4% of the total.

Our research seeks to understand the exploitation of children during the 1920s beyond ‘trafficking for immoral purposes’, identifying the types of exploitation that children experienced globally, whether for commercial or economic gain, sexual gratification or adoption.

The aim of the research is to challenge and enrich our understanding of morals, race and the exploitation of children in the nineteenth and early twentieth century, through deconstructing fears of the sexual enslavement of children.

The inquiry seeks to redress the racial bias of previous examinations of human trafficking in the era and to expand our knowledge of trafficked and/or exploited children in the legacy of the ‘White Slavery Conventions’.

 

Notes:

[1] De Reding De Bibberegg, Delegate of the International Red Cross Committee and the International ‘Save the Children’ Fund in Greece. League of Nations, Advisory Committee on the Traffic in Women and Children, Minutes of the Second Session, Geneva, March 22nd – 27th 1923, at 65.

[2] The ‘White Slavery Conventions’, namely the International Agreement for the Suppression of White Slave Traffic 1904, the International Convention for the Suppression of the White Slave Traffic 1910, the International Convention for the Suppression of Traffic in Women and Children 1921 and the International Convention for the Suppression of the Traffic in Women of Full Age 1933.