Obituary: Professor Robert (Bob) Millward

The Economic History Society is saddened to learn of the recent death of Professor Robert (Bob) Millward. Professor Millward was professor of economics at Salford University before taking the chair in economic history at Manchester University in 1989. Bob was a highly regarded scholar with diverse interests in economic history and he will be sorely missed. Read an academic appreciation of Robert Millward here.

#EHS2020 University of Oxford – Call For Papers open

The 2020 Annual Conference will be held at St Catherine’s College, Oxford, Friday 17 – Sunday 19 April.  Registration, sessions, delegate accommodation, dining, and meetings will all be located in the College.

  • A link to the call for papers can be found here.  Deadline: 2 September 2019
  • A link to the call for posters can be found here.  Deadline: 18 November 2019

More information here

EHS 2019 Conference dinner

A Policy of Credit Disruption: The Punjab Land Alienation Act of 1900

by Latika Chaudhary (Naval Postgraduate School) and Anand V. Swamy (Williams College)

This research is due to be published in the Economic History Review and is currently available on Early View.

 

Farming, farms, crops, agriculture in North India. Available at Wikimedia Commons.

In the late 19th century the British-Indian government (the Raj) became preoccupied with default on debt and the consequent transfer of land in rural India. In many regions Raj officials made the following argument: British rule had created or clarified individual property rights in land, which had for the first time made land available as collateral for debt. Consequently, peasants could now borrow up to the full value of their land. The Raj had also replaced informal village-based forms of dispute resolution with a formal legal system operating outside the village, which favored the lender over the borrower. Peasants were spendthrift and naïve, and unable to negotiate the new formal courts created by British rule, whereas lenders were predatory and legally savvy. Borrowers were frequently defaulting, and land was rapidly passing from long-standing resident peasants to professional moneylenders who were often either immigrant, of another religion, or sometimes both.  This would lead to social unrest and threaten British rule. To preserve British rule it was essential that one of the links in the chain be broken, even if this meant abandoning cherished notions of sanctity of property and contract.

The Punjab Land Alienation Act (PLAA) of 1900 was the most ambitious policy intervention motivated by this thinking. It sought to prevent professional moneylenders from acquiring the property of traditional landowners. To this end it banned, except under some conditions, the permanent transfer of land from an owner belonging to an ‘agricultural tribe’ to a buyer or creditor who was not from this tribe. Moreover, a lender who was not from an agricultural tribe could no longer seize the land of a defaulting debtor who was from an agricultural tribe.

The PLAA made direct restrictions on the transfer of land a respectable part of the policy apparatus of the Raj and its influence persists to the present day. There is a substantial literature on the emergence of the PLAA, yet there is no econometric work on two basic questions regarding its impact. First, did the PLAA reduce the availability of mortgage-backed credit? Or were borrowers and lenders able to use various devices to evade the Act, thereby neutralizing it? Second, if less credit was available, what were the effects on agricultural outcomes and productivity? So far as we know, we are the first to use panel data methods to address these questions.

Our work provides evidence regarding an unusual policy experiment that is relevant to a hypothesis of broad interest. It is often argued that ‘clean titling’ of assets can facilitate their use as collateral, increasing access to credit, leading to more investment and faster growth. Informed by this hypothesis, many studies estimate the effects of titling on credit and other outcomes, but they usually pertain to making assets more usable as collateral. The PLAA went in the opposite direction: it reduced the ‘collateralizability’ of land which, on the argument we have described, should have reduced investment and growth. We investigate whether it did.

To identify the effects of the PLAA, we assembled a panel dataset on 25 districts in Punjab from 1890 to 1910. Our dataset contains information on mortgages and sales of land, economic outcomes, such as acreage and ownership of cattle, and control variables like rainfall and population. Because the PLAA targeted professional moneylenders, it should have reduced mortgage-backed credit more in places where they were bigger players in the credit market. Hence, we interact a measure of the importance of professional, that is, non-agricultural, moneylenders in the mortgage market with an indicator variable for the introduction of the PLAA, which takes the value of 1 from 1900 onward. As expected, we find that the PLAA contracted credit more in places where professional moneylenders played a larger role, compared to districts with no professional moneylenders. The PLAA reduced mortgage-backed credit by 48 percentage points more at the 25th percentile of our measure of moneylender importance and by 61 percentage points more at the 75th percentile.

However, this decrease of mortgage-backed credit in professional moneylender-dominated areas did not lead to lower acreage or less ownership of cattle. In short, the PLAA affected credit markets as we might expect without undermining agricultural productivity. Because we have panel data, we are able to account for potential confounding factors such as time-invariant unobserved differences across districts (using district fixed effects), shocks common to all districts in a given year (using year effects) and the possibility that districts were trending differently independent of the PLAA (using district-specific time trends).
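For readers who want to see the shape of such a specification, the sketch below shows how the interaction of a moneylender-importance measure with a post-1900 indicator, together with district fixed effects, year effects, and district-specific trends, might be coded. It is a minimal illustration using statsmodels; the file, variable names, and controls are hypothetical, not the authors’ actual data or code.

```python
# A minimal sketch, not the authors' code: hypothetical file and column names.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("punjab_districts_1890_1910.csv")  # hypothetical panel

# post = 1 from 1900 onward; ml_share = pre-reform share of professional
# moneylenders in the district's mortgage market (time-invariant)
df["post"] = (df["year"] >= 1900).astype(int)

# The coefficient on post:ml_share captures the differential effect of the
# PLAA in districts where professional moneylenders were more important.
# C(district) and C(year) give district and year fixed effects;
# C(district):year adds district-specific linear time trends.
model = smf.ols(
    "log_mortgage_credit ~ post:ml_share + rainfall + log_population"
    " + C(district) + C(year) + C(district):year",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["district"]})

print(model.params["post:ml_share"])
```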

British officials provided a plausible explanation for the non-impact of the PLAA on agricultural production: lenders had merely become more judicious – they were still willing to lend for productive activity, but not for ‘extravagant’ expenditures, such as social ceremonies. There may be a general lesson here: policies that make it harder for lenders to recover money may have the beneficial effect of encouraging due diligence.

 

 

To contact the authors:

lhartman@nps.edu

aswamy@williams.edu

Falling Behind and Catching up: India’s Transition from a Colonial Economy

by Bishnupriya Gupta (University of Warwick and CAGE)

The full paper of this blog post was published by The Economic History Review and it is available here 

Official of the East India Company riding in an Indian procession, watercolour on paper, c. 1825–30; in the Victoria and Albert Museum, London. Available at <https://www.britannica.com/topic/East-India-Company/media/1/176643/162308>

There has been much discussion in recent years about India’s growth failure in the first 30 years after independence in 1947. India became a highly regulated economy and withdrew from the global market. This led to inefficiency and low growth. The architect of Indian planning, Jawaharlal Nehru, the first prime minister, did not put India on an East Asian path. By contrast, the last decade of the 20th century saw a reintegration into the global economy, and today India is one of the fastest growing economies.

Any analysis of Indian growth and development that starts in 1947 is deeply flawed. It ignores the history of development and the impact of colonization. This paper takes a long-run view of India’s economic development and argues that the Indian economy stagnated under colonial rule and that a reversal came with independence. Although growth was slow in comparison with East Asia, the Nehruvian legacy put India on a growth path.

Tharoor (2017) in his book Inglorious Empire argues that Britain’s industrial revolution was built on the destruction of Indian textile industries and that British rule turned India into an exporter of agricultural goods. A different view on colonial rule comes from Niall Ferguson in his book Empire: How Britain Made the Modern World. Ferguson claimed that even if British rule did not increase Indian incomes, things might have been much worse under a restored Mughal regime in 1857. The British built the railways and connected India to the rest of the world.

Neither of these views is based on statistical evidence. Data on GDP per capita (Figure 1) show that there was a slow decline and stagnation over a long period. Evidence on wages and per capita GDP shows a prosperous economy in 1600 under the Mughal Emperor Akbar. Living standards began to decline from the middle of the 17th century, before colonization, and the decline continued as the East India Company gained territorial control in 1757. It is important to note that the decline coincided with increased integration with international markets and the rising trade in textiles to Europe. In 1857, India became a part of the global economy of the British Empire. Indian trade volume increased, but from an exporter of industrial products, India became an exporter of food and raw materials. Per capita income stagnated even as trade increased, the colonial government built a railway network, and British entrepreneurs owned large parts of the industrial sector. In 1947, the country was one of the poorest in the world. Figure 1 below also tells us that growth picked up after independence as India moved towards regulation and restrictions on trade and private investment.

What explains the stagnation in income prior to independence? The colonial government invested very little in the main sector, agriculture. The bulk of British investment went to the railways, not to irrigation. The railways initially connected the hinterland with the ports, but over time they integrated markets, reducing price variability across markets. However, they did not contribute to increasing agricultural productivity. Without large investment in irrigation, output per acre declined in areas that did not get canals. Industry, on the other hand, was the fastest growing sector, but employed only 10 per cent of the work force. Stagnation of the economy under colonial rule had little to do with trade.

Indian GDP per capita between 1600 and 2000. Source: Aniruddha Bagchi, “Why did the Indian economy stagnate under the colonial rule?”  in Ideas for India 2013

India’s growth reversal began in independent India with regulation of trade and industry and a break with the global economy. For the first time in the 20th century, the Indian economy began to grow, as the graph shows, with investment in capital goods industries and agricultural infrastructure. Industrial growth and the green revolution in agriculture moved the economy from stagnation to growth. This growth slowed down, but the economy did not stagnate as in the colonial period. Following economic reforms after the 1980s, India has entered a high growth regime. The initial increase in growth was a response to the removal of restrictions on domestic private investment, well before reintegration into the global economy in the 1990s. The foundations for growth were laid in the first three decades after independence.

The institutional legacy of British rule had long-run consequences. One example is education policy, which prioritized investment in secondary and tertiary education, creating a small group with higher education but few with basic primary schooling. In 1947, less than one-fifth of the population had basic education. This bias towards higher education continued after independence and has created an advantage for the service sector. There are lessons from history to understand Indian growth after independence.

 

To contact the author: B.Gupta@warwick.ac.uk

Why did the industrial diet triumph?

by Fernando Collantes (University of Zaragoza and Instituto Agroalimentario de Aragón)

This blog is part of a larger research paper published in the Economic History Review.

 

Harvard food pyramid. Available at Wikimedia Commons.

Consumers in the Northern hemisphere are feeling increasingly uneasy about their industrial diet. Few question that during the twentieth century the industrial diet helped us solve the nutritional problems related to scarcity. But there is now growing recognition that the triumph of the industrial diet triggered new problems related to abundance, among them obesity, excessive consumerism and environmental degradation. Currently, alternatives ranging from organic food to products bearing geographical-‘quality’ labels struggle to transcend the industrial diet. Frequently, these alternatives face a major obstacle: their relatively high price compared to mass-produced and mass-retailed food.

The research that I have conducted examines the literature on nutritional transitions, food regimes and food history, and positions it within present-day debates on diet change in affluent societies.  I employ a case-study of the growth in mass consumption of dairy products in Spain between 1965 and 1990. In the mid-1960s, dairy consumption was very low in Spain and many suffered from calcium deficiency.  Subsequently, there was a rapid growth in consumption. Milk, especially, became an integral part of the diet for the population. Alongside mass consumption there was also mass-production and complementary technical change. In the early 1960s, most consumers only drank raw milk, but by the 1990s milk was being sterilised and pasteurised to standard specifications by an emergent national dairy industry.

In the early 1960s, the regular purchase of milk was too expensive for most households. By the early 1990s, an increase in household incomes, complemented by (alleged) price reductions generated by dairy industrialization, facilitated the rapid growth of milk consumption. A further factor aiding consumption was changing consumer preferences. Previously, consumers’ perceptions of milk were affected by recurrent episodes of poisoning and fraud. The process of dairy industrialization ensured a greater supply of ‘safe’ milk, and this encouraged consumers to use their increased real incomes to buy more milk. ‘Quality’ milk, that is, milk that was safe to consume, became the main theme of the advertising campaigns employed by milk processors (Figure 1).

 

Figure 1. Advertisement by La Lactaria Española in the early 1970s.

Source: Revista Española de Lechería, no. 90 (1973).

 

What are the implications of my research for contemporary debates on food quality? First, the transition toward a diet richer in organic foods and in foods characterised by short supply chains and artisan-like production, backed by geographical-quality labels, has more than niche relevance. There are historical precedents (such as the one studied in this article) of large sections of the populace willing to pay premium prices for food products that are in some senses perceived as qualitatively superior to other, more ‘conventional’ alternatives. If it happened in the past, it can happen again. Indeed, new qualitative substitutions are already taking place. The key issue is the direction of this substitution. Will consumers use their affluence to ‘green’ their diet? Or will they use higher incomes to purchase more highly processed foods, with possibly negative implications for public health and environmental sustainability? This juncture between food-system dynamics and public policy is crucial. As Fernand Braudel argued, it is the extraordinary capacity for adaptation that defines capitalism. My research suggests that we need public policies that reorient food capitalism towards socially progressive ends.

 

To contact the author:  collantf@unizar.es

Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69

by Jean-Pascal Bassino (ENS Lyon) and Pierre van der Eng (Australian National University)

This blog is part of a larger research paper published in the Economic History Review.

 

Vietnam, rice paddy. Available at Pixabay.

In the ‘great divergence’ debate, China, India, and Japan have been used to represent the Asian continent. However, their development experience is not likely to be representative of the whole of Asia. The countries of Southeast Asia were relatively underpopulated for a considerable period.  Very different endowments of natural resources (particularly land) and labour were key parameters that determined economic development options.

Maddison’s series of per-capita GDP in purchasing power parity (PPP) adjusted international dollars, based on a single 1990 benchmark and backward extrapolation, indicate that a divergence took place in 19th century Asia: Japan was well above other Asian countries in 1913. In 2018 the Maddison Project Database released a new international series of GDP per capita that accommodate the available historical PPP-based converters. Due to the very limited availability of historical PPP-based converters for Asian countries, the 2018 database retains many of the shortcomings of the single-year extrapolation.

Maddison’s estimates indicate that Japan’s GDP per capita in 1913 was much higher than in other Asian countries, and that Asian countries started their development experiences from broadly comparable levels of GDP per capita in the early nineteenth century. This implies that an Asian divergence took place in the 19th century as a consequence of Japan’s economic transformation during the Meiji era (1868-1912). There is now growing recognition that the use of a single benchmark year, and the choice of that particular year, may influence estimated historical levels of GDP per capita across countries. Relative levels of Asian countries based on Maddison’s estimates of per capita GDP are not confirmed by other indicators such as real unskilled wages or the average height of adults.

Our study uses available estimates of GDP per capita in current prices from historical national accounting projects, and estimates PPP-based converters and PPP-adjusted GDP with multiple benchmark years (1913, 1922, 1938, 1952, 1958, and 1969) for India, Indonesia, Korea, Malaya, Myanmar (then Burma), the Philippines, Sri Lanka (then Ceylon), Taiwan, Thailand and Vietnam, relative to Japan. China is added on the basis of other studies. The PPP-based converters are used to calculate GDP per capita in constant PPP yen. The indices of GDP per capita in Japan and the other countries were expressed as a proportion of GDP per capita in Japan during the years 1910–70 in 1934–6 yen, and then converted to 1990 international dollars by relying on a PPP-adjusted Japanese series comparable to the US GDP series. Figure 1 presents the resulting series for Asian countries.
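Schematically, the conversion chain works as follows (the notation is ours, introduced for illustration; see the article for the exact procedure):

$$
\tilde{y}_{i,b} = \frac{y_{i,b}}{\mathrm{PPP}_{i,b}}, \qquad
r_{i,t} = \frac{\tilde{y}_{i,t}}{\tilde{y}_{J,t}}, \qquad
y^{1990\$}_{i,t} = r_{i,t} \times y^{1990\$}_{J,t},
$$

where $y_{i,b}$ is country $i$’s GDP per capita in current local prices at benchmark year $b$, $\mathrm{PPP}_{i,b}$ is the PPP converter (units of local currency per yen), $r_{i,t}$ is the level relative to Japan (interpolated between benchmark years), and $y^{1990\$}_{J,t}$ is Japan’s GDP per capita in 1990 international dollars.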

 

Figure 1. GDP per capita in selected Asian countries, 1910–1970 (1934–6 Japanese yen)

Sources: see original article.

 

The conventional view dates the start of the divergence to the nineteenth century. Our study identifies the First World War and the 1920s as the era during which the little divergence in Asia occurred. During the 1920s, most countries in Asia, except Japan, depended significantly on exports of primary commodities. The growth experience of Southeast Asia seems to have been largely characterised by market integration in national economies and by the mobilisation of hitherto underutilised resources (labour and land) for export production. Particularly in the land-abundant parts of Asia, the opening-up of land for agricultural production led to economic growth.

Commodity price changes may have become debilitating when their volatility increased after 1913. This was followed by episodes of import-substituting industrialisation, particularly after 1945. While Japan rapidly developed its export-oriented manufacturing industries from the First World War, other Asian countries increasingly had inward-looking economies. This pattern lasted until the 1970s, when some Asian countries followed Japan on a path of export-oriented industrialisation and economic growth. For some countries this was a staggered process that lasted well into the 1990s, when the World Bank labelled this development the ‘East Asian miracle’.

 

To contact the authors:

jean-pascal.bassino@ens-lyon.fr

pierre.vandereng@anu.edu.au

 

References

Bassino, J-P. and Van der Eng, P., ‘Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69’, Economic History Review (forthcoming).

Fouquet, R. and Broadberry, S., ‘Seven centuries of European economic growth and decline’, Journal of Economic Perspectives, 29 (2015), pp. 227–44.

Fukao, K., Ma, D., and Yuan, T., ‘Real GDP in pre-war Asia: a 1934–36 benchmark purchasing power parity comparison with the US’, Review of Income and Wealth, 53 (2007), pp. 503–37.

Inklaar, R., de Jong, H., Bolt, J., and van Zanden, J. L., ‘Rebasing “Maddison”: new income comparisons and the shape of long-run economic development’, Groningen Growth and Development Centre Research Memorandum no. 174 (2018).

Link to the website of the Southeast Asian Development in the Long Term (SEA-DELT) project:  https://seadelt.net

Factor Endowments on the “Frontier”: Algerian Settler Agriculture at the Beginning of the 1900s

by Laura Maravall Buckwalter (University of Tübingen)

This research is due to be published in the Economic History Review and is currently available on Early View.

 

It is often claimed that access to land and labour during the colonial years determined land redistribution policies and labour regimes that had persistent, long-run effects. For this reason, the amounts of land and labour available in a colonized country at a fixed point in time are increasingly included in regression frameworks as proxies for the types of colonial modes of production and institutions. However, despite the relevance of these variables within the scholarly literature on settlement economies, little is known about the way in which they changed during the process of settlement. This is because most studies focus on long-term effects and tend to exclude relevant inter-country heterogeneities that should be included in the assessment of the impact of colonization on economic development.

In my article, I show how colonial land policy and settler modes of production responded differently within a colony. I examine rural settlement in French Algeria at the start of the 1900s and focus on cereal cultivation, which was the crop that allowed the arable frontier to expand. I rely upon the literature that reintroduces the notion of ‘land frontier expansion’ into the understanding of settler economies. By including the frontier in my analysis, it is possible to assess how colonial land policy and settler farming adapted to very different local conditions. For example, because settlers were located in the interior regions, they encountered growing land aridity. I argue that the expansion of rural settlement into the frontier was strongly dependent upon the adoption of modern ploughs, intensive labour (modern ploughs were non-labour saving) and larger cultivated fields (because they removed fallow areas), which, in turn, had a direct impact on colonial land policy and settler farming.

Figure 1. Threshing wheat in French Algeria (Zibans)

Source: Retrieved from https://www.flickr.com/photos/internetarchivebookimages/14764127875/in/photostream/, last accessed 31st of May, 2019.

 

My research takes advantage of annual agricultural statistics reported by the French administration at the municipal level in Constantine for the years 1904/05 and 1913/14. The data are analysed in a cross-section and panel regression framework and, although the dataset provides a snapshot at only two points in time, the ability to identify the timing of settlement after the 1840s for each municipality provides a broader temporal framework.

Figure 2. Constantine at the beginning of the 1900s

Source: The original outline of the map derives mainly from Carte de la Colonisation Officielle, Algérie (1902), available online at the digital library of the Bibliothèque Nationale de France, retrieved from http://catalogue.bnf.fr/ark:/12148/cb40710721s (accessed on 28 Apr. 2019) and ANOM-iREL, http://anom.archivesnationales.culture.gouv.fr/ (accessed on 28 Apr. 2019).

 

The results illustrate how the limited amount of arable land on the Algerian frontier forced colonial policymakers to relax restrictions on the amount of land owned by settlers. This change in policy occurred because expanding the frontier into less fertile regions and consolidating settlement required agricultural intensification: changes in the frequency of crop rotation and more intensive ploughing. These techniques required larger fields and were therefore incompatible with the French colonial ideal of establishing a small-scale, family-farm type of settler economy.

My results also indicate that settler farmers were able to adopt more intensive techniques mainly by relying on the abundant indigenous labour force. The man-to-cultivable land ratio, which increased after the 1870s due to continuous indigenous population growth and colonial land expropriation measures, eased settler cultivation, particularly on the frontier. This confirms that the availability of labour relative to land is an important variable that should be taken into consideration to assess the impact of settlement on economic development. My findings are in accord with Lloyd and Metzer (2013, p. 20), who argue that, in Africa, where the indigenous peasantry was significant, the labour surplus allowed low wages and ‘verged on servility’, leading to a ‘segmented labour and agricultural production system’. Moreover, it is precisely the presence of a large indigenous population relative to that of the settlers, and the reliance of settlers upon indigenous labour and the state (to access land and labour), that has allowed Lloyd and Metzer to describe Algeria (together with Southern Rhodesia, Kenya and South Africa) as having a “somewhat different type of settler colonialism that emerged in Africa over the 19th and early 20th Centuries” (2013, p. 2).

In conclusion, it is reasonable to assume that, as rural settlement gains ground within a colony, local endowments and cultivation requirements change. The case of rural settlement in Constantine reveals how settler farmers and colonial restrictions on ownership size adapted to the varying amounts of land and labour.

 

To contact: 

laura.maravall@uni-tuebingen.de

Twitter: @lmaravall

 

References

Ageron, C. R. (1991). Modern Algeria: a history from 1830 to the present (9th ed). Africa World Press.

Frankema, E. (2010). The colonial roots of land inequality: geography, factor endowments, or institutions? The Economic History Review, 63(2):418–451.

Frankema, E., Green, E., and Hillbom, E. (2016). Endogenous processes of colonial settlement. the success and failure of European settler farming in Sub-Saharan Africa. Revista de Historia Económica-Journal of Iberian and Latin American Economic History, 34(2), 237-265.

Easterly, W., & Levine, R. (2003). Tropics, germs, and crops: how endowments influence economic development. Journal of monetary economics, 50(1), 3-39.

Engerman, S. L., and Sokoloff, K. L. (2012). Economic development in the Americas since 1500: endowments and institutions. Cambridge University Press.

Lloyd, C. and Metzer, J. (2013). Settler colonization and societies in world history: patterns and concepts. In Settler Economies in World History, Global Economic History Series 9:1.

Lützelschwab, C. (2007). Populations and Economies of European Settlement Colonies in Africa (South Africa, Algeria, Kenya, and Southern Rhodesia). In Annales de démographie historique (No. 1, pp. 33-58). Belin.

Lützelschwab, C. (2013). Settler colonialism in Africa. In Lloyd, C., Metzer, J., and Sutch, R. (eds.), Settler economies in world history. Brill.

Willebald, H., and Juambeltz, J. (2018). Land Frontier Expansion in Settler Economies, 1830–1950: Was It a Ricardian Process? In Agricultural Development in the World Periphery (pp. 439-466). Palgrave Macmillan, Cham.

Military casualties and exchange rates during the First World War: did the Eastern Front matter?

by Pablo Duarte and Andreas Hoffmann (Leipzig University)

An article expanding on this blog has been published in the Economic History Review.

 

Russian troops going to the front. Available at Wikimedia Commons.

In 1918 the Entente forces defeated the Central Powers on the Western Front. The First World War, with countless brutal battles and over 40 million casualties, had finally ended.

During the war, all governments substantially increased their national debt and promised to hand the bill to the losers. They also promised to return to the pre-war gold parity rather than inflating and devaluing their currency. Since the outcome of the war was expected to severely affect currency values, particularly for the losers,  foreign exchange traders had an incentive to closely follow war events to update their beliefs on who was more likely to win.

According to Ferguson’s (1998) The Pity of War, the lost morale of the German troops — reflected in higher numbers of prisoners of war and of soldiers surrendering on the Western Front — was the ultimate reason for their defeat. Complementing this argument, Hall (2004) provided evidence that military casualties on the Western Front — the key front to finally winning the war — can help explain contemporary fluctuations in the exchange rates between belligerents’ currencies.

Although the war was finally decided in the West, historians have emphasized its global dimension and the importance of the Eastern Front in understanding its complex evolution. Imagine it is 1914. Russia has just entered the war (earlier than expected), upsetting the plans of the Central Powers to circumvent a two-front war. Events on one front affected those on the other. But did contemporary traders, like historians today, consider the Eastern Front to be of relevance?

In our forthcoming article, we provide the first empirical insights into the relative importance of the Eastern Front during the First World War from the perspective of contemporary foreign exchange traders. Building on Hall’s study, the article indicates when and to what extent military casualties from both the Western and Eastern Fronts were linked to exchange rate fluctuations during the First World War, and suggests that traders used this information as an indicator of which side was more likely to win.
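In stylized form, the kind of relationship tested can be written as follows (the notation here is ours, for illustration; the article’s specification differs in detail):

$$
\Delta \ln e_t = \alpha + \beta_W \, C^W_t + \beta_E \, C^E_t + \varepsilon_t ,
$$

where $e_t$ is the price of a belligerent’s currency on a neutral foreign exchange market, and $C^W_t$ and $C^E_t$ measure casualty news from the Western and Eastern Fronts. A significant $\beta_E$ indicates that traders priced in news from the East.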

To analyze the link between exchange rates and casualties we introduce a novel dataset on casualties drawn from the German Reichsarchiv and the Austrian War Office. Merging our dataset with that for the Western Front employed by Hall (2004), we have been able to construct a rich dataset on war casualties for France, Britain, and Russia as well as Germany and Austria-Hungary, for both Fronts.

 

Figure 1. 15,000 Russian Prisoners of war in Germany.

Duarte &amp; Hoffmann
Russian prisoners in Germany. Available at Wikimedia Commons.

 

Using the digital archives of the Neue Zürcher Zeitung (a Swiss newspaper),  we have further documented information on casualties, specifically  the number of prisoners of war (Figure 1).  The following quote from December 1914 makes this finding explicit:

Berlin, Dec. 31 [1914] (Wolff. Authorized) The overall number of prisoners of war (no civilian prisoners) in Germany at the end of the year is 8,138 officers and 577,875 men. This number does not include a portion of those captured on the run in Russian Poland nor any of those still in transit. The overall number is comprised of the following: French 3,159 officers and 215,905 men, including 7 generals; Russians 3,575 officers and 306,294 men, including 3 generals; British 492 officers and 18,824 men (Neue Zürcher Zeitung, 1 Jan. 1915, p. A1.).

 

In summary, our forthcoming article provides evidence that foreign exchange traders recognized the global dimension of the war, especially the Eastern and Western Fronts.  Casualties on both Fronts were associated with exchange rate fluctuations. The number of soldiers captured on the Eastern Front affected exchange rates in the early war years. Foreign exchange traders gave additional weight to the Eastern Front during the first year of the war because Russia’s attack came as a surprise and the number of casualties was substantially higher than on the Western Front.

From autumn 1916 onwards, even though Russia had not yet left the war, our findings indicate that traders believed that the key to winning the war was in the west. The Brusilov offensive, a massive Russian attack (from June to September 1916), had proven that the Central Powers would face substantial opposition in the East. Moreover, the Allied forces on the Western Front had started to coordinate joint offensives.

 

To contact the authors:

pablo.duarte@uni-leipzig.de Twitter: @economusiker

ahoffmann@wifa.uni-leipzig.de Twitter: @Andhoflei

 

References

Ferguson, N., The Pity of War (Basic Books, 1998).

Hall, G. J., ‘Exchange rates and casualties during the First World War’, Journal of Monetary Economics, 51 (2004), pp. 1711–42.

 

 

 

Plague and long-term development

by Guido Alfani (Bocconi University, Dondena Centre and IGIER)

 

The full paper has been published in The Economic History Review and is available here.

A YouTube video accompanies this work and can be found here.

 

How did preindustrial economies react to extreme mortality crises caused by severe epidemics of plague? Were health shocks of this kind able to shape long-term development patterns? While past research focused on the Black Death that affected Europe during 1347-52 (Álvarez Nogal and Prados de la Escosura 2013; Clark 2007; Voigtländer and Voth 2013), in a forthcoming article with Marco Percoco we analyse the long-term consequences of what was by far the worst mortality crisis affecting Italy during the Early Modern period: the 1629-30 plague, which killed an estimated 30-35% of the northern Italian population (about two million victims).

 

Figure 1. Luigi Pellegrini Scaramuccia (1670), Federico Borromeo visits the plague ward during the 1630 plague.


Source: Milan, Biblioteca Ambrosiana

 

This episode is significant for Italian history and, more generally, for our understanding of the Little Divergence between the North and South of Europe. It has recently been hypothesized that the 1630 plague was the source of Italy’s relative decline during the seventeenth century (Alfani 2013). However, this hypothesis lacked solid empirical evidence. To resolve this question, we take a different approach from previous studies and demonstrate that plague lowered the trajectory of development of Italian cities. We argue that this was mostly due to a productivity shock caused by the plague, but we also explore other contributing factors. Consequently, we provide support for the view that the economic consequences of severe demographic shocks need to be understood and studied on a case-by-case basis, as the historical context in which they occurred can lead to very different outcomes (Alfani and Murphy 2017).

After assembling a new database of mortality rates in a sample of 56 cities, we estimate a model of population growth allowing for different regimes of growth. We build on the seminal papers by Davis and Weinstein (2002) and Brakman et al. (2004), who based their analysis on a new economic geography framework in which a relative city-size growth model is estimated to determine whether a shock has temporary or persistent effects. We find that cities affected by the 1629-30 plague experienced persistent, long-term effects (i.e., up to 1800) on their pattern of relative population growth.
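In stylized form, the test in this framework asks whether a shock to a city’s relative size is subsequently reversed (our notation, simplifying the original specifications):

$$
s_{i,t+1} - s_{i,t} = \alpha + \beta \,(s_{i,t} - s_{i,t-1}) + \varepsilon_i ,
$$

where $s_{i,t}$ is the log of city $i$’s population relative to the urban total and the interval from $t-1$ to $t$ spans the epidemic. An estimate of $\beta$ near $-1$ implies the shock was temporary (cities recover their previous relative size), while $\beta$ near $0$ implies persistent effects.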

 

Figure 2. Giacomo Borlone de Buschis (attributed), Triumph of Death (1485), fresco


Source: Oratorio dei Disciplini, Clusone (Italy).

 

We complete our analysis by estimating the absolute impact of the epidemic. We find that in northern Italian regions the plague caused a lasting decline in both the size and the growth rate of urban populations. The lasting damage done to the urban population is shown in Figure 3. For urbanization rates it will suffice to notice that across the North of Italy, by 1700 (70 years after the 1630 plague), they were still more than 20 per cent lower than in the decades preceding the catastrophe (16.1 per cent in 1700 versus an estimated 20.4 per cent in 1600, for cities >5,000). Overall, these findings suggest that severe plague epidemics may contribute to the decline of economic regions or whole countries. Our conclusions are strengthened by showing that while there is clear evidence of the negative consequences of the 1630 plague, there is hardly any evidence for a positive effect (Pamuk 2007). We hypothesize that the potential positive consequences of the 1630 plague were entirely eroded by a negative productivity shock.

 

Figure 3. Size of the urban population in Piedmont, Lombardy, and Veneto (1620-1700)


Source: see original article

 

Demonstrating that the plague had a persistent negative effect on many key Italian urban economies, we provide support for the hypothesis that the origins of relative economic decline in northern Italy are to be found in particularly unfavorable epidemiological conditions. It was the context in which an epidemic occurred that increased its ability to affect the economy, not the plague itself. Indeed, the 1630 plague affected the main states of the Italian Peninsula at the worst possible moment, when their manufacturing industries were dealing with increasing competition from northern European countries. This explanation, however, offers a different interpretation of the Little Divergence from that found in the recent literature.

 

To contact the author: guido.alfani@unibocconi.it

 

References

Alfani, G., ‘Plague in seventeenth century Europe and the decline of Italy: an epidemiological hypothesis’, European Review of Economic History, 17, 4 (2013), pp. 408-430.

Alfani, G. and Murphy, T., ‘Plague and Lethal Epidemics in the Pre-Industrial World’, Journal of Economic History, 77, 1 (2017), pp. 314-343.

Alfani, G. and Percoco, M., ‘Plague and long-term development: the lasting effects of the 1629-30 epidemic on the Italian cities’, The Economic History Review, forthcoming, https://doi.org/10.1111/ehr.12652

Álvarez Nogal, C. and Prados de la Escosura, L., ‘The Rise and Fall of Spain (1270-1850)’, Economic History Review, 66, 1 (2013), pp. 1–37.

Brakman, S., Garretsen H., Schramm M. ‘The Strategic Bombing of German Cities during World War II and its Impact on City Growth’, Journal of Economic Geography, 4 (2004), pp. 201-218.

Clark, G., A Farewell to Alms (Princeton, 2007).

Davis, D.R. and Weinstein, D.E. ‘Bones, Bombs, and Break Points: The Geography of Economic Activity’, American Economic Review, 92, 5 (2002), pp. 1269-1289.

Pamuk, S., ‘The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600’, European Review of Economic History, 11 (2007), pp. 289-317.

Voigtländer, N. and H.J. Voth, ‘The Three Horsemen of Riches: Plague, War, and Urbanization in Early Modern Europe’, Review of Economic Studies 80, 2 (2013), pp. 774–811.

The Political Economy of the Army in a Nonconsolidated Democracy: Spain (1931-1939)

by Alvaro La Parra-Perez (Weber State University)

The full article is published by the Economic History Review and is available for Early View at this link 

The Spanish Civil War (1936-9; henceforth, SCW) ended the Second Spanish Republic (1931-9), which is often considered Spain’s first democracy. Despite the hopes raised by the Republic, which enfranchised women, held free and fair elections, separated Church and state, and drafted an ambitious agrarian reform, its end was not very different from that of many previous Spanish regimes: a military coup started the SCW, which ultimately resulted in a dictatorship led by one of the rebel officers, Francisco Franco (1939/75).

In my article “For a Fistful of Pesetas? The Political Economy of the Army in a Non-Consolidated Democracy: The Second Spanish Republic and Civil War (1931-9)”, I open the “military black box” to understand the motivations driving officers’ behavior. In particular, the article explores how the redistribution of economic and professional rents during the Republic influenced officers’ likelihood of rebelling or remaining loyal to the republican government in 1936. By looking at (military) intra-elite conflict, I depart from the traditional assumption of an “elite single agent” that characterizes the neoclassical theory of the state (e.g. here, here, here, or here; also here).

The article uses a new data set of almost 12,000 officers active in 1936 who belonged to the corps most directly involved in combat. Using the Spanish military yearbooks between 1931 and 1936, I traced officers’ individual professional trajectories and assessed the impact that republican military reforms in 1931-6 had on their careers. The side chosen by each officer (loyal or rebel) comes from Carlos Engel.

Figure 1. Extract from the 1936 military yearbook. Source: 1936 Military Yearbook published by the Spanish Minister of War: http://hemerotecadigital.bne.es/issue.vm?id=0026976287&search=&lang=en

The main military reforms during the Republic took place under Manuel Azaña’s term as Minister of War (1931-3). Azaña was also the leader of the leftist coalition that ruled the Republic when some officers rebelled and the SCW began. Azaña’s reforms favored the professional and economic independence of the Air Force and harmed many officers’ careers when some promotions granted during Primo de Rivera’s dictatorship (1923/30) were revised and cancelled. The system of military promotions was also revised and rendered more impersonal and meritocratic. Some historians also argue that the elimination of the highest rank in the army (Lieutenant General) worsened the professional prospects of many officers because vacancies for promotions became scarcer.

The results suggest that, at the margin, economic and professional considerations had a significant influence on officers’ choice of side during the SCW. The figure below shows the probit average marginal effects for the likelihood of rebelling among officers in republican-controlled areas. The main variables of interest are the ones under the “Rents” header. In general, those individuals or factions that improved their economic rents under Azaña’s reforms were less likely to rebel. For example, aviators were almost 20 percentage points less likely to rebel than the reference corps (artillerymen) and those officers with worse prospects after the rank of lieutenant general was eliminated were more likely to join the rebel ranks. Also, officers with faster careers (greater “change of position”) in the months before the SCW were less likely to rebel. The results also suggest that officers had a high discount rate for changes in their rank or position in the scale. Pre-1935 promotions are not significantly related to officers’ side during the SCW. Officers negatively affected by the revision of promotions in 1931/3 were more likely to rebel only at the 10 percent significance level (p-value=0.089).
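As a rough illustration of how such average marginal effects are computed, the sketch below uses statsmodels; the file and variable names are hypothetical and the covariates are simplified relative to the article.

```python
# A minimal sketch, not the author's code: hypothetical file and columns.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("officers_1936.csv")  # hypothetical: one row per officer

# Probit of rebelling on rent-related covariates (corps, career changes,
# revised promotions) for officers in republican-controlled areas
model = smf.probit(
    "rebelled ~ C(corps) + change_of_position + promotion_revised"
    " + leader_rebelled + left_vote_share",
    data=df[df["republican_area"] == 1],
).fit()

# Average marginal effects: the quantities plotted in Figure 2
ame = model.get_margeff(at="overall")
print(ame.summary())
```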

Figure 2. Probit average marginal effects for officers in republican-controlled areas with 95-percent confidence intervals. Source: see original article

To be clear, economic and professional interests were not the only elements explaining officers’ behavior. The article also finds evidence for the significance of other social and ideological factors. Take the case of hierarchical influences: subordinates’ likelihood of rebelling in a given unit increased if their leader rebelled. Also, officers were less likely to rebel in those areas where the leftist parties that ruled in July 1936 had obtained better results in the elections held in February. Finally, members of the Assault Guard, a unit for which proven loyalty to the Republic was required, were more likely to remain loyal to the republican government.

The results are hardly surprising for an economist: people respond to incentives, and officers, being people, were influenced at the margin by the impact that Azaña’s reforms had on their careers. This mechanism adds to the ideological explanations that have often dominated narratives of the SCW, which tend to depict the army, more or less explicitly, as a monolithic agent aligned with conservative elites. As North, Wallis, and Weingast showed for other developing societies, intra-elite conflict and the redistribution of rents were an important factor in the dynamics (and ultimate fall) of the dominant coalition in Spain’s first democracy.

 

To contact the author:

Twitter: @AlvaroLaParra

Professional website: https://sites.google.com/site/alvarolaparraperez/

Can school centralization foster human capital accumulation? A quasi-experiment from early twentieth-century Italy

By Gabriele Cappelli (University of Siena) and Michelangelo Vasta (University of Siena)

The article is available on Early View at the Economic History Review’s link here

 

The issue of school reform is a key element of institutional change across countries. In developing economies the focus is rapidly shifting from increasing enrolments to improving educational outputs (literacy and skills) and outcomes (wages and productivity). In advanced economies, policy-makers focus on generating skills from educational inputs despite limited resources. This is unsurprising, because human capital formation is largely acknowledged as one of the main factors of economic growth.

Within education policy, reforms have long focused on the way school systems are organized, particularly their management and funding by local versus central government. On the one hand, local policy makers are more aware of the needs of local communities, which is supposed to improve schooling. On the other hand, school preferences might vary considerably between the central government and local ruling elites, hampering the diffusion of education. Despite its importance, there is little historical research on this topic.

In this paper, we offer fresh evidence using a quasi-experiment to explore a dramatic change in Italy’s educational institutions at the beginning of the 20th century: the 1911 Daneo-Credaro Reform. Under this legislation, most municipalities moved from a decentralized school system, which had been based on the 1859 Casati Law, to direct state management and funding, while other municipalities, mainly provincial and district capitals, retained their autonomy, thus forming two distinct groups (Figure 1).

The Reform design allows us to compare these two groups through a quasi-experiment based on an innovative technique, namely Propensity Score Matching (henceforth PSM). The issue with the Reform that we study is that the assignment of municipalities into treatment (centralization) was not random: the municipalities that retained school autonomy were those characterized by high literacy. By contrast, the poorest and least literate municipalities were more likely to end up under state control, implying that a naive analysis of the Daneo-Credaro Reform as an experiment will tend to overestimate the impact of centralization. PSM tackles this issue by ‘randomizing’ the selection into treatment: a statistical model is used to estimate the probability of being selected into centralization (the propensity score) for each municipality; then, an algorithm matches municipalities in the treatment group with municipalities in the control group that have an equal (or very similar) propensity score, meaning that the only differing feature will be whether they are treated or not. To perform PSM, we construct a novel database at the municipal level (a large sample of more than 1,000 comuni). We also fill a gap in the historiography by providing an in-depth discussion of the way the Reform worked, which has so far been neglected.
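To make the two-step logic concrete, here is a minimal sketch of propensity score matching with a logit first stage and one-to-one nearest-neighbour matching. The file and column names are hypothetical and the covariates are placeholders; this is not the authors’ implementation.

```python
# A minimal PSM sketch, not the authors' code: hypothetical names throughout.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("municipalities_1911.csv")  # hypothetical file
covariates = ["literacy_1911", "log_population", "urban_share"]  # placeholders

# Step 1: estimate the propensity score P(centralized | covariates)
logit = LogisticRegression(max_iter=1000).fit(df[covariates], df["centralized"])
df["pscore"] = logit.predict_proba(df[covariates])[:, 1]

treated = df[df["centralized"] == 1]
control = df[df["centralized"] == 0]

# Step 2: match each treated municipality to the control with the
# nearest propensity score (one-to-one, with replacement)
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched = control.iloc[idx.ravel()]

# Average treatment effect on the treated: difference in literacy growth
att = (treated["literacy_growth_1911_21"].mean()
       - matched["literacy_growth_1911_21"].mean())
print(f"Estimated ATT: {att:.2f} percentage points per year")
```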

Figure 1 – Municipalities that still retained school autonomy in Italy by 1923. Source: Ministero della Pubblica Istruzione (1923), Relazione sul numero, la distribuzione e il funzionamento delle scuole elementari. Rome. Note: both the grey and black dots represent municipalities that retained school autonomy by 1923, while the others (not shown in the map) had shifted to centralized school management and funding. 

We find that the municipalities that switched to state control were characterized by a 0.43 percentage-point premium on the average annual growth of literacy between 1911 and 1921, compared to those that retained autonomy (Table 1). The estimated coefficient means that two very similar municipalities with equal literacy rates of 60% in 1911 will show a literacy gap of roughly 3 percentage points in 1921: 72.07% (school autonomy) vs 75.17% (treated). This difference is similar to the gap between the treatment group and a counterfactual that we estimated in a robustness check based on Italian provinces (Figure 2).
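The arithmetic behind this example can be checked directly. The annual growth rates below are backed out from the reported endpoints (they are not quoted in the article) and differ by the estimated 0.43 percentage points:

$$
60 \times (1.0185)^{10} \approx 72.07, \qquad 60 \times (1.0228)^{10} \approx 75.17, \qquad 2.28\% - 1.85\% = 0.43 \text{ pp}.
$$

Compounding the small annual premium over the decade 1911-21 yields the roughly 3 percentage-point gap.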

Table 1 – Estimated treatment (Daneo-Credaro Reform) effect, 1911 – 1921.
Figure 2 – Literacy rates in the treatment and control groups, 1881 – 1921, pseudo-DiD. Source: see original article

Centralization improved the overall functioning of the school system and the efficiency of school funding. First, it reduced the distance between the central government and the city councils by granting more decision-making power to the provincial schooling board under the supervision of the central government. The control exercised by the Ministry reassured teachers that their salaries would be increased, and the government could now guarantee that they would be paid regularly, which was not always the case when the municipalities managed primary schooling. Secondly, additional funding was provided to build new schools. The resulting increase in funding appears to have been very large, and its impact was amplified by the full reorganization of the school system: funds could be directed to where they were most needed. Consequently, we argue, a mere increase in funding without institutional change would have been less effective in increasing literacy rates.
To conclude, the 50-year persistence of decentralized primary schooling hampered the accumulation of human capital and regional convergence in basic education, thus casting a long shadow on the future pace of aggregate and regional economic growth. The centralization of primary education via the Daneo-Credaro Reform in 1911 was a major breakthrough, which fostered the spread of literacy and allowed the country to reduce the human-capital gap with the most advanced economies.

 

To contact the author: Gabriele Cappelli

Email: gabriele.cappelli@unisi.it

Twitter: gabercappe

Landlords and tenants in Britain, 1440-1660

review by James P. Bowen (University of Liverpool)

book edited by Jane Whittle

‘Landlords and tenants in Britain, 1440-1660’ is published by Boydell and Brewer. SAVE 25% when you order direct from the publisher – offer ends on the 15th August 2019. See below for details.

 

9781843838500_1

This book, the first volume in the Economic History Society’s ‘People, Markets, Goods: Economies and Societies in History’ paperback series, revisits Tawney’s classic work, The Agrarian Problem in the Sixteenth Century, published in 1912. It arises from a conference held to mark the centenary of the book’s publication and includes the leading figures in rural and agrarian history showcasing the latest research on issues originally discussed by Tawney. The book is logically structured. Keith Wrightson’s foreword provides personal insight as to attitudes amongst Cambridge economic historians who maligned Tawney. The first three chapters offer overviews beginning with Jane Whittle’s historiographical essay concerning Tawney, providing background to his Agrarian Problem. Christopher Dyer surveys the fifteenth century, given Tawney’s view that demographic changes were key in creating change in fifteenth-century England, providing the conditions for the ‘problem’ of the sixteenth century. Harold Garrett-Goodyear addresses the issues surrounding copyhold tenure and the institutional function of manor courts in promoting lords’ private interests as landowners and how this was reflected in economic and social change with the emergence of agrarian capitalism, greater social differentiation and the transition from feudal to modern society.

The remaining chapters are thematic, several of which are detailed local or micro-studies. Briony McDonagh and Heather Falvey explore the enclosure process at a local level. Complementing the rural viewpoint, Andy Wood shows how notions of custom and popular memory were prominent in urban society below the ‘middling sort’, specifically weavers of Malmesbury, Wiltshire, a cloth-working town. Whilst there is an apparent lack of evidence for Tawney’s sense of the ‘ideal customary’, Wood suggests this does not undermine Tawney’s view; on the contrary, it reinforces his argument about the centrality of custom in popular political culture and about disputes arising from struggles over customary entitlement and urban identity. Providing a comparative dimension, Julian Goodare searches for a Scottish agrarian problem, pointing out that whilst the two countries had different legal and political systems, similar processes seem to have been at play, suggesting a common economic problem rather than one of law or political structures.

Several chapters address the issue of tenure, Tawney having pointed to the insecurity of leasehold tenure and the increasing commercial landlord policies as being central to the agrarian problems of the sixteenth century. Jean Morrin examines a landlord-tenant dispute on the Durham Cathedral Estate over the abolition of traditional customary tenures, specifically tenant-right. She argues for a more subtle approach to leases in the early modern period given the various forms which they took, presenting a picture of negotiation and compromise, which not only encouraged tenants to improve farms, but also granted them the right to bequeath, sell or mortgage their leases to whomever they chose. Jennifer Holt explores the case of the Hornby Castle Estate in north Lancashire, analyzing the potential income from customary land and quantifying the shares of lords and tenants, demonstrating how manorial tenants benefitted despite the lord’s attempt to raise rents and fines, retaining their tenures on a customary basis.

Chapters by Bill Shannon and Elizabeth Griffiths look at landlord-driven agrarian improvement intended to raise revenue. Christopher Brooks considers the legal and political context, in particular the impact of the Civil Wars and Interregnum, highlighting the complexities which weakened Tawney’s assessment of the mid- and later seventeenth century. He highlights the common law’s engagement with customary tenures by 1640, arguing that the greater security afforded to smallholders enabled them to assert their rights more aggressively, with patriarchal and seigniorial landlord-tenant relationships being replaced by economic relations. Legal developments meant common law served the interests of ‘middling’ agricultural society and the gentry, and by the 1680s land, including copyhold, had been absorbed into the market for both property and credit. Finally, David Ormrod reflects on the significance of Tawney’s work in relation to long-standing theoretical debates regarding the rise of capitalism and the transition from feudalism to capitalism.

Whittle’s short conclusion effectively synthesizes the chapters, showing that debates have progressed since Tawney’s work, not least with regard to newer approaches towards political, social and rural history. Emphasis is placed on the ‘blurred boundaries’ which existed, leading to disputes notably over enclosure and tenure. Developments in England are viewed in a wider western European perspective, with reference to up-to-date research, and future questions are identified. The chapters form a coherent volume which, as the title suggests, focuses on the changing relationship between landlords and tenants, a well-established trend in agrarian historiography. Moreover, while it is recognized that any notion of a sixteenth-century agrarian revolution has been rejected, it is nevertheless rightly argued that Tawney’s Agrarian Problem ‘remains a crucial reference point’, containing much to ‘inform and inspire the twenty-first-century historian seeking to understand the changes that took place in rural England between 1440 and 1660’ (pp. 17-18).

 

SAVE 25% when you order direct from the publisher using the offer code B125 online here. Offer ends 15th August 2019. Discount applies to print and eBook editions. Alternatively call Boydell’s distributor, Wiley, on 01243 843 291, and quote the same code. Any queries please email marketing@boydell.co.uk

 

Note: this post appeared as a book review article in the Review. We have obtained the necessary permissions.