Poverty or Prosperity in Northern India? New Evidence on Real Wages, 1590s-1870s

by Pim de Zwart (Wageningen University) and Jan Lucassen (International Institute of Social History, Amsterdam)

The full article from this blog was published in The Economic History Review and is now available open access on early view at this link 


At the end of the sixteenth century, the Indian subcontinent, largely unified under the Mughals, was one of the most developed parts of the global economy, with relatively high incomes and a thriving manufacturing sector. Over the centuries that followed, however, incomes declined and India deindustrialized. The precise timing and causes of this decline remain the subject of academic debate about the Great Divergence between Europe and Asia. Whereas some scholars have depicted the eighteenth century in India as a period of economic growth and comparatively high living standards, others have suggested it was an era of decline and relatively low incomes. The evidence on which these contributions have been based is rather thin, however. In our paper, we add quantitative and qualitative data from numerous British and Dutch archival sources about the development of real wages and the functioning of the northern Indian labour market between the late sixteenth and late nineteenth centuries.

In particular, we introduce a new dataset with over 7,500 observations on wages across various towns in northern India (Figure 1). The data pertain to the income earned in a wide range of occupations, from unskilled urban workers and farm servants to skilled craftsmen and bookkeepers, and cover adult men as well as women and children. All these wage observations were coded following the HISCLASS scheme, which allows us to compare trends in wages between groups of workers. The wage database provides information about the incomes of an important body of workers in northern India. There was little slavery and serfdom in India, and wage labour was relatively widespread. There was a functioning free labour market in which European companies enjoyed no clearly privileged position. The data obtained for India can therefore be viewed as comparable to those gathered for many European cities, in which the wages of construction workers were often paid by large institutions.
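The HISCLASS coding described above can be sketched in a few lines. The occupation-to-class mapping and wage figures below are hypothetical simplifications for illustration only, not the authors' actual coding.

```python
# Illustrative sketch of HISCLASS-style coding of wage observations.
# The mapping below is a hypothetical simplification, not the authors' coding.

OCCUPATION_TO_HISCLASS = {
    "bookkeeper": 5,     # lower clerical
    "carpenter": 7,      # skilled worker
    "coolie": 11,        # unskilled worker
    "farm servant": 12,  # unskilled farm worker
}

def skill_group(hisclass):
    """Collapse the 12 HISCLASS codes into broad skill groups for comparison."""
    if hisclass <= 5:
        return "white collar"
    if hisclass <= 7:
        return "skilled"
    return "unskilled"

# (occupation, daily wage) pairs with made-up wage figures:
observations = [("carpenter", 0.08), ("coolie", 0.04)]
for occ, wage in observations:
    print(occ, skill_group(OCCUPATION_TO_HISCLASS[occ]), wage)
```

Grouping observations this way is what allows wage trends of, say, unskilled labourers and skilled craftsmen to be tracked separately over time.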

Picture 1
Figure 01 – Map of India and regional distribution of the wage data. Source: as per article

We calculated the value of the wage relative to a subsistence basket of goods. We made further adjustments to the real wage methodology by incorporating information about climate, regional consumption patterns, average heights, and BMI, to more accurately calculate the subsistence cost of living. Comparing the computed real wage ratios for northern India with those prevailing in other parts of Eurasia leads to a number of important insights (Figure 2). Our data suggest that the Great Divergence between Europe and India happened relatively early, from the late seventeenth century. This downward trend persisted, and wage labourers saw their purchasing power diminish until the devastating Bengal famine of 1769-1770. Given this evidence, it is difficult to view the eighteenth century as a period of generally rising prosperity across northern India. While British colonialism may have reduced growth in the nineteenth century (pretensions about the superiority of European administration and the virtues of the free market may have had long-lasting negative consequences), it is nonetheless clear that most of the decline in living standards preceded colonialism. Real wages in India stagnated in the nineteenth century, while Europe experienced significant growth; consequently, India lagged further behind.

Picture 2
Fig.02 – Real wages in India in comparison with Europe and Asia. Source: as per article

With real wages below subsistence level, it is likely that Indian wage labourers worked more than the 250 days per year often assumed in the literature. This is confirmed by our sources, which suggest 30 days of labour per month. To accommodate this observation, we added a real wage series based on the assumption of 360 days of labour per year (Figure 2). Yet even with 360 working days per year, male wages were at various moments in the eighteenth and nineteenth centuries insufficient to sustain a family at subsistence level. This evidence indicates the limits of what can be said about living standards based solely on the male wage. In many societies and in most time periods, women and children made significant contributions to household income. This also seems to have been the case in northern India. Over much of the eighteenth and nineteenth centuries, the gap between male and female wages was smaller in India than in England. The important contribution of women and children to household incomes may have allowed Indian families to survive despite low levels of male wages.
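The arithmetic behind these real wage (welfare) ratios can be sketched as follows; the wage and basket costs are hypothetical placeholders, not the article's data.

```python
# Minimal sketch of an Allen-style welfare-ratio calculation as discussed
# in the text. All numbers below are hypothetical, not the article's data.

def welfare_ratio(daily_wage, working_days, basket_cost, family_factor=3.0):
    """Annual male earnings over the annual subsistence cost of a family.

    basket_cost   -- annual cost of one adult subsistence basket
    family_factor -- adult-equivalent baskets a family is assumed to need
    """
    annual_income = daily_wage * working_days
    annual_need = basket_cost * family_factor
    return annual_income / annual_need

# The same daily wage looks very different under 250 vs 360 working days:
low = welfare_ratio(daily_wage=0.05, working_days=250, basket_cost=5.0)
high = welfare_ratio(daily_wage=0.05, working_days=360, basket_cost=5.0)
print(round(low, 2), round(high, 2))
```

A ratio below 1 means the male wage alone could not keep a family at subsistence, which is why the assumed number of working days, and the earnings of women and children, matter so much for the interpretation.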


To contact the authors: 

Pim de Zwart (pim.dezwart@wur.nl)

Jan Lucassen (lucasjan@xs4all.nl)

Give Me Liberty Or Give Me Death

by Richard A. Easterlin (University of Southern California)

This blog is part G of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. The full article from this blog is “How Beneficent Is the Market? A Look at the Modern History of Mortality”, European Review of Economic History 3, no. 3 (1999): 257-94. https://doi.org/10.1017/S1361491699000131


A child is vaccinated, Brazil, 1970.

Patrick Henry’s memorable plea for independence unintentionally also captured the long history of conflict between the free market and public health, evidenced in the current struggle of the United States with the coronavirus. Efforts to contain the virus have centered on measures to forestall transmission of the disease, such as stay-at-home orders, social distancing, and avoiding large gatherings, each of which infringes on individual liberty. These measures have given birth to a resistance movement objecting to violations of one’s freedom.

My 1999 article posed the question “How Beneficent is the Market?” The answer, based on “A Look at the Modern History of Mortality”, was straightforward: because of the ubiquity of market failure, public intervention was essential to achieve control of major infectious disease. This intervention centered on the creation of a public health system. “The functions of this system have included, in varying degrees, health education, regulation, compulsion, and the financing or direct provision of services.”

Regulation and compulsion, and the consequent infringement of individual liberties, have always been critical building blocks of the public health system. Even before the formal establishment of public health agencies, regulation and compulsion were features of measures aimed at controlling the spread of infectious disease in mid-19th-century Britain. The “sanitation revolution” led to the regulation of water supply and sewage disposal and, in time, to the regulation of slum building conditions. As my article notes, there was fierce opposition to these measures:

“The backbone of the opposition was made up of those whose vested interests were threatened: landlords, builders, water companies, proprietors of refuse heaps and dung hills, burial concerns, slaughterhouses, and the like … The opposition appealed to the preservation of civil liberties and sought to debunk the new knowledge cited by the public health advocates …”

The greatest achievement of public health was the eradication of smallpox, the only human disease so far eliminated from the face of the earth. Smallpox was the scourge of humankind until Edward Jenner’s discovery of a vaccine in 1798. Throughout the 19th and 20th centuries, requirements for smallpox vaccination were fiercely opposed by anti-vaccinationists. In 1959 the World Health Organization embarked on a program to eradicate the disease. Over the ensuing two decades its efforts to persuade governments worldwide to require vaccination of infants eventually succeeded, and in 1980 the WHO officially declared the disease eradicated. Public health had at last triumphed over liberty, but it took almost two centuries to realize Jenner’s hope that vaccination would annihilate smallpox.

In the face of the coronavirus pandemic, the U.S. market-based health care system has demonstrated once again the inability of the market to deal with infectious disease, and the need for forceful public intervention. The current health care system requires that:

 “every player, from insurers to hospitals to the pharmaceutical industry to doctors, be financially self-sustaining, to have a profitable business model. It excels in expensive specialty care. But there’s no return on investment in being positioned for the possibility of a pandemic” (Rosenthal 2020).

Commercial and hospital labs were slow to respond to the need to develop a test for the virus. Once tests became available, conducting them was handicapped by insufficient testing capacity: kits, chemical reagents, swabs, masks and other personal protective equipment. In hospitals, ventilators were also in short supply. These deficiencies reflected the lack of profitability in responding to these needs, and the reluctance of government to compensate for market failure.

At the current time, the halting efforts of federal public health authorities and state and local public officials to impose quarantine and “shelter at home” measures have been seriously handicapped by public protests over infringement of civil liberties, reminiscent of the dissidents of the 19th and 20th centuries and their current-day heirs. States are opening for business well in advance of the guidelines of the Centers for Disease Control and Prevention. The lesson of history regarding such actions is clear: the cost of liberty is sickness and death. But do we learn from history? Sadly, one is put in mind of Warren Buffett’s aphorism: “What we learn from history is that people don’t learn from history.”



Rosenthal, Elizabeth, “A Health System Set up to Fail”, New York Times, May 8, 2020, p. A29.


To contact the author: easterl@usc.edu

Unequal access to food during the nutritional transition: evidence from Mediterranean Spain

by Francisco J. Medina-Albaladejo & Salvador Calatayud (Universitat de València).

This article is forthcoming in the Economic History Review.


Figure 1 – General pathology ward, Hospital General de Valencia (Spain), 1949. Source: Consejo General de Colegios Médicos de España. Banco de imágenes de la medicina española. Real Academia Nacional de Medicina de España. Available here.

Over the last century, European historiography has debated whether industrialisation brought about an improvement in working-class living standards. Multiple demographic, economic, anthropometric and wellbeing indicators have been examined in this regard, but it was Eric Hobsbawm (1957) who, in the late 1950s, incorporated food consumption patterns into the analysis.

Between the mid-19th century and the first half of the 20th, the diet of European populations underwent radical changes. Caloric intake increased significantly, and cereals were to a large extent replaced by animal proteins and fats, resulting from a substantial increase in the consumption of meat, milk, eggs and fish. This transformation was referred to by Popkin (1993) as the ‘nutritional transition’.

These dietary changes were driven, inter alia, by the evolution of income levels, which raises the possibility that significant inequalities between different social groups ensued. Dietary inequalities between social groups are a key component in the analysis of inequality and living standards; they directly affect mortality, life expectancy, and morbidity. However, this hypothesis remains unproven, as historians are still searching for adequate sources and methods with which to measure the effects of dietary changes on living standards.

This study contributes to the debate by analysing a relatively untapped source: hospital diets. We have analysed the diet of psychiatric patients and members of staff in the main hospital of the city of Valencia (Spain) between 1852 and 1923. The diet of patients depended on their social status and the amounts they paid for their upkeep. ‘Poor psychiatric patients’ and abandoned children, who paid no fee, were fed according to hospital regulations, whereas ‘well-off psychiatric patients’ paid a daily fee in exchange for a richer and more varied diet. There were also differences among members of staff, with nuns receiving a richer diet than other personnel (launderers, nurses and wet-nurses). We think that our source broadly reflects dietary patterns of the Spanish population and the effect of income levels thereon.

Figure 2 illustrates some of these differences in terms of animal-based caloric intake in each of the groups under study. Three population groups can be clearly distinguished: ‘well-off psychiatric patients’ and nuns, whose diet already presented some of the features of the nutritional transition by the mid-19th century, including fewer cereals and a meat-rich diet, as well as the inclusion of new products, such as olive oil, milk, eggs and fish; hospital staff, whose diet was rich in calories, to compensate for their demanding jobs, but still traditional in structure, being largely based on cereals, legumes, meat and wine; and, finally, ‘poor psychiatric patients’ and abandoned children, whose diet was poorer and which, by the 1920s, had barely joined the trends that characterised the nutritional transition.


Figure 2. Percentage of animal calories in the daily average diet by population groups in the Hospital General de Valencia, 1852-1923 (%). Source: as per original article.


In conclusion, the nutritional transition was not a homogeneous process affecting all diets at the same time or at the same pace. On the contrary, it was a process marked by social difference, and the progress of dietary changes was largely determined by social factors. By the mid-19th century, the diet structure of well-to-do social groups resembled diets more characteristic of the 1930s, while less favoured and intermediate social groups had to wait until the early 20th century before they could incorporate new foodstuffs into their diet. As this sequence clearly indicates, less favoured social groups always lagged behind.



Medina-Albaladejo, F. J. and Calatayud, S., “Unequal access to food during the nutritional transition: evidence from Mediterranean Spain”, Economic History Review, (forthcoming).

Hobsbawm, E. J., “The British Standard of Living, 1790-1850”, Economic History Review, 2nd ser., X (1957), pp. 46-68.

Popkin B. M., “Nutritional Patterns and Transitions”, Population and Development Review, 19, 1 (1993), pp. 138-157.

Fascistville: Mussolini’s new towns and the persistence of neo-fascism

by Mario F. Carillo (CSEF and University of Naples Federico II)

This blog is part of our EHS 2020 Annual Conference Blog Series.


March on Rome, 1922. Available at Wikimedia Commons.

Differences in political attitudes are prevalent in our society. People with the same occupation, age, gender, marital status, city of residence and similar background may have very different, and sometimes even opposite, political views. At a time when the electorate is called upon to make important decisions with long-term consequences, understanding the origins of political attitudes, and hence of voting choices, is key.

My research documents that current differences in political attitudes have historical roots. Public expenditure allocations made almost a century ago help to explain differences in political attitudes today.

During the Italian fascist regime (1922-43), Mussolini undertook enormous investments in infrastructure by building cities from scratch. Fascistville (Littoria) and Mussolinia are two of the 147 new towns (Città di Fondazione) built by the regime on the Italian peninsula.


Towers shaped like the emblem of fascism (Torri Littorie) and majestic buildings as headquarters of the fascist party (Case del Fascio) dominated the centres of the new towns. While they were modern centres, their layout was inspired by the cities of the Roman Empire.

Intended to stimulate a process of identification of the masses based on the collective historical memory of the Roman Empire, the new towns were designed to instil the idea that fascism was building on, and improving, the imperial Roman past.

My study presents three main findings. First, the foundation of the new towns enhanced local electoral support for the fascist party, facilitating the emergence of the fascist regime.

Second, such an effect persisted through democratisation, favouring the emergence and persistence of the strongest neo-fascist party in the advanced industrial countries — the Movimento Sociale Italiano (MSI).

Finally, survey respondents near the fascist new towns are more likely today to have nationalistic views, prefer a stronger leader in politics and exhibit sympathy for the fascists. Direct experience of life under the regime strengthens this link, which appears to be transmitted across generations inside the family.


Thus, the fascist new towns explain differences in current political and cultural attitudes that can be traced back to the fascist ideology.

These findings suggest that public spending may have long-lasting effects on political and cultural attitudes, which persist across major institutional changes and affect the functioning of future institutions. This is a result that may inspire future research to study whether policy interventions may be effective in promoting the adoption of growth-enhancing cultural traits.

Turkey’s Experience with Economic Development since 1820

by Sevket Pamuk (Bogazici [Bosphorus] University)

This research is part of a broader article published in the Economic History Review.

A podcast of Sevket’s Tawney lecture can be found here.


Pamuk 1

New Map of Turkey in Europe, Divided into its Provinces, 1801. Available at Wikimedia Commons.

The Tawney lecture, based on my recent book – Uneven centuries: economic development of Turkey since 1820, Princeton University Press, 2018 – examined the economic development of Turkey from a comparative global perspective. Using GDP per capita and other data, the book showed that Turkey’s record in economic growth and human development since 1820 has been close to the world average and a little above the average for developing countries. The early focus of the lecture was on the proximate causes — average rates of investment, below-average rates of schooling, low rates of total productivity growth, and low technology content of production — which provide important insights into why improvements in GDP per capita were not greater. For more fundamental explanations, I emphasized the role of institutions and institutional change. Since the nineteenth century, Turkey’s formal economic institutions were influenced by international rules which did not always support economic development. Turkey’s elites also made extensive changes in formal political and economic institutions. However, these institutions provide only part of the story: the direction of institutional change also depended on the political order and the degree of understanding between different groups and their elites. When political institutions could not manage the recurring tensions and cleavages between the different elites, economic outcomes suffered.

There are a number of ways in which my study reflects some of the key trends in the historiography in recent decades. For example, until fairly recently, economic historians focused almost exclusively on the developed economies of western Europe, North America, and Japan. Lately, however, economic historians have been changing their focus to developing economies. Moreover, as part of this reorientation, considerable effort has been expended on constructing long-run economic series, especially GDP and GDP per capita, as well as series on health and education. In this context, I have constructed long-run series for the area within the present-day borders of Turkey. These series rely mostly on official estimates for the period after 1923 and make use of a variety of evidence for the Ottoman era, including wages, tax revenues and foreign trade series. In common with the series for other developing countries, many of my calculations involving Turkey are subject to larger margins of error than similar series for developed countries. Nonetheless, they provide insights into the developmental experience of Turkey and other developing countries that would not have been possible two or three decades ago. Finally, in recent years, economists and economic historians have made an important distinction between the proximate causes and the deeper determinants of economic development. While literature on the proximate causes of development focuses on investment, accumulation of inputs, technology, and productivity, discussions of the deeper causes consider the broader social, political, and institutional environment. Both sets of arguments are utilized in my book.

I argue that an interest-based explanation can address both the causes of long-run economic growth and its limits. Turkey’s formal economic institutions and economic policies underwent extensive change during the last two centuries. In each of the four historical periods I define, Turkey’s economic institutions and policies were influenced by international or global rules which were enforced either by the leading global powers or, more recently, by international agencies. Additionally, since the nineteenth century, elites in Turkey made extensive changes to formal political institutions. In response to European military and economic advances, the Ottoman elites adopted a programme of institutional changes that mirrored European developments; this programme continued during the twentieth century. Such fundamental changes helped foster significant increases in per capita income as well as major improvements in health and education.

But it is also necessary to examine how these new formal institutions interacted with the process of economic change (for example, changing social structures and variations in the distribution of power and expectations) to understand the scale and characteristics of growth that the new institutional configurations generated.

These interactions were complex. It is not easy to ascribe the outcomes created in Turkey during these two centuries to a single cause. Nonetheless, it is safe to state that in each of the four periods, the successful development of new institutions depended on the state making use of the different powers and capacities of the various elites. More generally, economic outcomes depended closely on the nature of the political order and the degree of understanding between different groups in society and the elites that led them. However, one of the more important characteristics of Turkey’s social structure has been the recurrence of tensions and cleavages between its elites. While they often appeared to be based on culture, these tensions overlapped with competing economic interests which were, in turn, shaped by the economic institutions and policies generated by the global economic system. When political institutions could not manage these tensions well, Turkey’s economic outcomes remained close to the world average.

A Policy of Credit Disruption: The Punjab Land Alienation Act of 1900

by Latika Chaudhary (Naval Postgraduate School) and Anand V. Swamy (Williams College)

This research is due to be published in the Economic History Review and is currently available on Early View.


Farming, farms, crops, agriculture in North India. Available at Wikimedia Commons.

In the late 19th century the British-Indian government (the Raj) became preoccupied with default on debt and the consequent transfer of land in rural India. In many regions Raj officials made the following argument: British rule had created or clarified individual property rights in land, which had for the first time made land available as collateral for debt. Consequently, peasants could now borrow up to the full value of their land. The Raj had also replaced informal village-based forms of dispute resolution with a formal legal system operating outside the village, which favored the lender over the borrower. Peasants were spendthrift and naïve, and unable to negotiate the new formal courts created by British rule, whereas lenders were predatory and legally savvy. Borrowers were frequently defaulting, and land was rapidly passing from long-standing resident peasants to professional moneylenders who were often either immigrant, of another religion, or sometimes both.  This would lead to social unrest and threaten British rule. To preserve British rule it was essential that one of the links in the chain be broken, even if this meant abandoning cherished notions of sanctity of property and contract.

The Punjab Land Alienation Act (PLAA) of 1900 was the most ambitious policy intervention motivated by this thinking. It sought to prevent professional moneylenders from acquiring the property of traditional landowners. To this end it banned, except under some conditions, the permanent transfer of land from an owner belonging to an ‘agricultural tribe’ to a buyer or creditor who was not from this tribe. Moreover, a lender who was not from an agricultural tribe could no longer seize the land of a defaulting debtor who was from an agricultural tribe.

The PLAA made direct restrictions on the transfer of land a respectable part of the policy apparatus of the Raj and its influence persists to the present-day. There is a substantial literature on the emergence of the PLAA, yet there is no econometric work on two basic questions regarding its impact. First, did the PLAA reduce the availability of mortgage-backed credit? Or were borrowers and lenders able to use various devices to evade the Act, thereby neutralizing it?  Second, if less credit was available, what were the effects on agricultural outcomes and productivity? We use panel data methods to address these questions, for the first time, so far as we know.

Our work provides evidence regarding an unusual policy experiment that is relevant to a hypothesis of broad interest. It is often argued that ‘clean titling’ of assets can facilitate their use as collateral, increasing access to credit and leading to more investment and faster growth. Informed by this hypothesis, many studies estimate the effects of titling on credit and other outcomes, but they usually pertain to making assets more usable as collateral. The PLAA went in the opposite direction: it reduced the “collateralizability” of land, which, by the argument we have described, should have reduced investment and growth. We investigate whether it did.

To identify the effects of the PLAA, we assembled a panel dataset on 25 districts in Punjab from 1890 to 1910. Our dataset contains information on mortgages and sales of land; economic outcomes, such as acreage and ownership of cattle; and control variables like rainfall and population. Because the PLAA targeted professional moneylenders, it should have reduced mortgage-backed credit more in places where they were bigger players in the credit market. Hence, we interact a measure of the importance of professional, that is, non-agricultural, moneylenders in the mortgage market with an indicator variable for the introduction of the PLAA, which takes the value of 1 from 1900 onward. As expected, we find that the PLAA contracted credit more in places where professional moneylenders played a larger role, compared to districts with no professional moneylenders. The PLAA reduced mortgage-backed credit by 48 percentage points more at the 25th percentile of our measure of moneylender importance and by 61 percentage points more at the 75th percentile.

However, this decrease of mortgage-backed credit in professional moneylender-dominated areas did not lead to lower acreage or less ownership of cattle. In short, the PLAA affected credit markets as we might expect without undermining agricultural productivity. Because we have panel data, we are able to account for potential confounding factors such as time-invariant unobserved differences across districts (using district fixed effects), common district-specific shocks (using year effects) and the possibility that districts were trending differently independent of the PLAA (using district-specific time trends).
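The identification strategy described above can be sketched schematically. The code below simulates a district-year panel and regresses credit on the interaction of moneylender importance with a post-1900 indicator, with district and year fixed effects; all variable names and magnitudes are assumptions for illustration, not the authors' dataset.

```python
# Schematic sketch of the panel specification described in the text,
# estimated on simulated data (names and numbers are illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for d in range(25):                      # 25 Punjab districts
    share = rng.uniform(0, 1)            # professional moneylenders' mortgage share
    for year in range(1890, 1911):
        post = int(year >= 1900)         # PLAA in force from 1900 onward
        # Built-in "true" effect: the PLAA cuts credit by 5 * share after 1900.
        credit = 10 - 5 * share * post + 0.1 * d + rng.normal(0, 0.5)
        rows.append({"district": d, "year": year, "share": share,
                     "post": post, "credit": credit})
df = pd.DataFrame(rows)

# share is constant within district (absorbed by district fixed effects) and
# post is common across districts (absorbed by year fixed effects), so only
# the interaction is estimated. District-specific time trends could be added
# with an extra term such as "+ C(district):I(year - 1890)".
fit = smf.ols("credit ~ share:post + C(district) + C(year)", data=df).fit()
print(fit.params["share:post"])          # recovers a coefficient near -5
```

The estimated interaction coefficient is the differential contraction of credit in moneylender-dominated districts, which is the quantity behind the 48 and 61 percentage-point figures above.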

British officials provided a plausible explanation for the non-impact of the PLAA on agricultural production: lenders had merely become more judicious – they were still willing to lend for productive activity, but not for ‘extravagant’ expenditures, such as social ceremonies. There may be a general lesson here: policies that make it harder for lenders to recover money may have the beneficial effect of encouraging due diligence.



To contact the authors:



Falling Behind and Catching up: India’s Transition from a Colonial Economy

by Bishnupriya Gupta (University of Warwick and CAGE)

The full paper of this blog post was published by The Economic History Review and is available here 

Official of the East India Company riding in an Indian procession, watercolour on paper, c. 1825–30; in the Victoria and Albert Museum, London. Available at <https://www.britannica.com/topic/East-India-Company/media/1/176643/162308>

There has been much discussion in recent years about India’s growth failure in the first 30 years after independence in 1947. India became a highly regulated economy and withdrew from the global market. This led to inefficiency and low growth. The architect of Indian planning, Jawaharlal Nehru, the first prime minister, did not put India on an East Asian path. By contrast, the last decade of the 20th century saw a reintegration into the global economy, and today India is one of the fastest-growing economies.

Any analysis of Indian growth and development that starts in 1947 is deeply flawed. It ignores the history of development and the impact of colonization. This paper takes a long-run view of India’s economic development and argues that the Indian economy stagnated under colonial rule and that a reversal came with independence. Although growth was slow in comparison to East Asia, the Nehruvian legacy put India on a growth path.

Tharoor (2017), in his book Inglorious Empire, argues that Britain’s industrial revolution was built on the destruction of Indian textile industries, and that British rule turned India from an exporter of manufactured goods into an exporter of agricultural goods. A different view on colonial rule comes from Niall Ferguson in his book Empire: How Britain Made the Modern World. Ferguson claimed that even if British rule did not increase Indian incomes, things might have been much worse under a restored Mughal regime in 1857. The British built the railways and connected India to the rest of the world.

Neither of these views is based on statistical evidence. Data on GDP per capita (Figure 1) show that there was a slow decline and stagnation over a long period. Evidence on wages and per capita GDP shows a prosperous economy in 1600 under the Mughal Emperor Akbar. Living standards began to decline from the middle of the 17th century, before colonization, and continued to do so as the East India Company gained territorial control in 1757. It is important to note that the decline coincided with increased integration with international markets and the rising trade in textiles to Europe. In 1857, India became a part of the global economy of the British Empire. Indian trade volume increased, but from an exporter of industrial products, India became an exporter of food and raw materials. Per capita income stagnated even as trade increased, the colonial government built a railway network, and British entrepreneurs owned large parts of the industrial sector. In 1947, the country was one of the poorest in the world. Figure 1 below also tells us that growth picked up after independence as India moved towards regulation and restrictions on trade and private investment.

What explains the stagnation in income prior to independence? The colonial government invested very little in the main sector, agriculture. The bulk of British investment went to the railways, not to irrigation. The railways initially connected the hinterland with the ports, but over time they integrated markets, reducing price variability across them. However, they did not contribute to increasing agricultural productivity. Without large investment in irrigation, output per acre declined in areas that did not get canals. Industry, on the other hand, was the fastest-growing sector, but it employed only 10 per cent of the workforce. The stagnation of the economy under colonial rule had little to do with trade.

Figure 1. Indian GDP per capita, 1600-2000. Source: Aniruddha Bagchi, ‘Why did the Indian economy stagnate under the colonial rule?’, Ideas for India, 2013.

India’s growth reversal began in independent India, with regulation of trade and industry and a break with the global economy. For the first time in the 20th century, the Indian economy began to grow, as the graph shows, with investment in capital-goods industries and agricultural infrastructure. Industrial growth and the green revolution in agriculture moved the economy from stagnation to growth. This growth slowed down, but the economy did not stagnate as in the colonial period. Following economic reforms from the 1980s, India entered a high-growth regime. The initial increase in growth was a response to the removal of restrictions on domestic private investment, well before reintegration into the global economy in the 1990s. The foundations for growth were laid in the first three decades after independence.

The institutional legacy of British rule had long-run consequences. One example is an education policy that prioritized investment in secondary and tertiary education, creating a small group with higher education but few with basic primary schooling. In 1947, less than one-fifth of the population had basic education. The bias towards higher education continued after independence and has created an advantage for the service sector. There are lessons from history for understanding Indian growth after independence.


To contact the author: B.Gupta@warwick.ac.uk

Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69

by Jean-Pascal Bassino (ENS Lyon) and Pierre van der Eng (Australian National University)

This blog is part of a larger research paper published in the Economic History Review.


Vietnam, rice paddy. Available at Pixabay.

In the ‘great divergence’ debate, China, India, and Japan have been used to represent the Asian continent. However, their development experience is unlikely to be representative of the whole of Asia. The countries of Southeast Asia were relatively underpopulated for a considerable period, and very different endowments of natural resources (particularly land) and labour were key parameters that determined economic development options.

Maddison’s series of per capita GDP in purchasing power parity (PPP)-adjusted international dollars, based on a single 1990 benchmark and backward extrapolation, indicate that a divergence took place in nineteenth-century Asia: Japan was well above other Asian countries by 1913. In 2018 the Maddison Project Database released a new international series of GDP per capita that accommodates the available historical PPP-based converters. Owing to the very limited availability of historical PPP-based converters for Asian countries, however, the 2018 database retains many of the shortcomings of the single-year extrapolation.

Maddison’s estimates indicate that Japan’s GDP per capita in 1913 was much higher than in other Asian countries, and that Asian countries started their development experiences from broadly comparable levels of GDP per capita in the early nineteenth century. This implies that an Asian divergence took place in the nineteenth century as a consequence of Japan’s economic transformation during the Meiji era (1868-1912). There is now growing recognition that the use of a single benchmark year, and the choice of that particular year, may influence estimated historical levels of GDP per capita across countries. Relative levels for Asian countries based on Maddison’s estimates of per capita GDP are not confirmed by other indicators, such as real unskilled wages or the average height of adults.

Our study uses available estimates of GDP per capita in current prices from historical national accounting projects, and estimates PPP-based converters and PPP-adjusted GDP for multiple benchmark years (1913, 1922, 1938, 1952, 1958, and 1969) for India, Indonesia, Korea, Malaya, Myanmar (then Burma), the Philippines, Sri Lanka (then Ceylon), Taiwan, Thailand, and Vietnam, relative to Japan. China is added on the basis of other studies. The PPP-based converters are used to calculate GDP per capita in constant PPP yen. GDP per capita in each country was expressed as a proportion of GDP per capita in Japan during 1910-70 in 1934-6 yen, and then converted to 1990 international dollars using a PPP-adjusted Japanese series comparable to the US GDP series. Figure 1 presents the resulting series for Asian countries.


Figure 1. GDP per capita in selected Asian countries, 1910–1970 (1934–6 Japanese yen)

Sources: see original article.
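The conversion chain described above (own-currency GDP per capita to constant PPP yen, then relative to Japan, then to 1990 international dollars) can be sketched in a few lines. This is our own stylized illustration, not the authors’ code, and all numbers are hypothetical placeholders rather than data from the article:

```python
# Sketch of the two-step conversion described in the text.
# All figures below are hypothetical, for illustration only.

def to_ppp_yen(gdp_pc_local, converter_local_per_yen):
    """Convert GDP per capita in local currency to constant (1934-6) PPP yen
    using a benchmark-year PPP converter (local currency units per yen)."""
    return gdp_pc_local / converter_local_per_yen

def to_1990_intl_dollars(gdp_pc_ppp_yen, japan_gdp_pc_ppp_yen,
                         japan_gdp_pc_1990_usd):
    """Express a country's GDP per capita as a proportion of Japan's (both in
    PPP yen), then re-scale with Japan's PPP-adjusted 1990-dollar series."""
    relative_to_japan = gdp_pc_ppp_yen / japan_gdp_pc_ppp_yen
    return relative_to_japan * japan_gdp_pc_1990_usd

# Hypothetical country: 120 local units per head, converter of 1.5 units/yen;
# hypothetical Japan: 200 PPP yen per head, 1,400 1990 international dollars.
ppp_yen = to_ppp_yen(120, 1.5)                       # 120 / 1.5 = 80 PPP yen
dollars = to_1990_intl_dollars(ppp_yen, 200, 1400)   # 0.4 * 1400 = 560 dollars
```

The point of the second step is that only Japan needs a full historical PPP link to the US series; every other country is carried along through its benchmark-year position relative to Japan.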


The conventional view dates the start of the divergence to the nineteenth century. Our study instead identifies the First World War and the 1920s as the era during which the little divergence in Asia occurred. During the 1920s, most countries in Asia, except Japan, depended significantly on exports of primary commodities. The growth experience of Southeast Asia was largely characterised by market integration within national economies and by the mobilisation of hitherto underutilised resources (labour and land) for export production. Particularly in the land-abundant parts of Asia, the opening-up of land for agricultural production led to economic growth.

Commodity price changes may have become debilitating when their volatility increased after 1913. This was followed by episodes of import-substituting industrialisation, particularly after 1945. While Japan rapidly developed its export-oriented manufacturing industries from the First World War onwards, other Asian countries had increasingly inward-looking economies. This pattern lasted until the 1970s, when some Asian countries followed Japan on a path of export-oriented industrialisation and economic growth. For some countries this was a staggered process that lasted well into the 1990s, when the World Bank labelled this development the ‘East Asian miracle’.


To contact the authors:





Bassino, J-P. and Van der Eng, P., ‘Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69’, Economic History Review (forthcoming).

Fouquet, R. and Broadberry, S., ‘Seven centuries of European economic growth and decline’, Journal of Economic Perspectives, 29 (2015), pp. 227–44.

Fukao, K., Ma, D., and Yuan, T., ‘Real GDP in pre-war Asia: a 1934–36 benchmark purchasing power parity comparison with the US’, Review of Income and Wealth, 53 (2007), pp. 503–37.

Inklaar, R., de Jong, H., Bolt, J., and van Zanden, J. L., ‘Rebasing “Maddison”: new income comparisons and the shape of long-run economic development’, Groningen Growth and Development Centre Research Memorandum no. 174 (2018).

Link to the website of the Southeast Asian Development in the Long Term (SEA-DELT) project:  https://seadelt.net

Factor Endowments on the “Frontier”: Algerian Settler Agriculture at the Beginning of the 1900s

by Laura Maravall Buckwalter (University of Tübingen)

This research is due to be published in the Economic History Review and is currently available on Early View.


It is often claimed that access to land and labour during the colonial years determined land-redistribution policies and labour regimes that had persistent, long-run effects. For this reason, the amounts of land and labour available in a colonized country at a fixed point in time are increasingly included in regression frameworks as proxies for colonial modes of production and institutions. However, despite the relevance of these variables within the scholarly literature on settlement economies, little is known about how they changed during the process of settlement, because most studies focus on long-term effects and tend to exclude relevant inter-country heterogeneities that should be included in assessing the impact of colonization on economic development.

In my article, I show how colonial land policy and settler modes of production responded differently within a colony. I examine rural settlement in French Algeria at the start of the 1900s and focus on cereal cultivation, the crop that allowed the arable frontier to expand. I rely upon the literature that reintroduces the notion of ‘land frontier expansion’ into the understanding of settler economies. By including the frontier in my analysis, it is possible to assess how colonial land policy and settler farming adapted to very different local conditions. For example, as settlers moved into the interior regions, they encountered growing land aridity. I argue that the expansion of rural settlement into the frontier depended strongly upon the adoption of modern ploughs, intensive labour (modern ploughs were not labour-saving), and larger cultivated fields (because they removed fallow areas), which, in turn, had a direct impact on colonial land policy and settler farming.

Figure 1. Threshing wheat in French Algeria (Zibans)

Source: Retrieved from https://www.flickr.com/photos/internetarchivebookimages/14764127875/in/photostream/, last accessed 31st of May, 2019.


My research takes advantage of annual agricultural statistics reported by the French administration at the municipal level in Constantine for the years 1904/05 and 1913/14. The data are analysed in a cross-section and panel regression framework and, although the dataset provides a snapshot at only two points in time, the ability to identify the timing of settlement after the 1840s for each municipality provides a broader temporal framework.

Figure 2. Constantine at the beginning of the 1900s

Source: The original outline of the map derives mainly from the Carte de la Colonisation Officielle, Algérie (1902), available online at the digital library of the Bibliothèque Nationale de France, http://catalogue.bnf.fr/ark:/12148/cb40710721s (accessed 28 Apr. 2019), and from ANOM-iREL, http://anom.archivesnationales.culture.gouv.fr/ (accessed 28 Apr. 2019).


The results illustrate how the limited amount of arable land on the Algerian frontier forced colonial policymakers to relax restrictions on the amount of land owned by settlers. This change in policy occurred because expanding the frontier into less fertile regions and consolidating settlement required agricultural intensification: changes in the frequency of crop rotation and more intensive ploughing. These techniques required larger fields and were therefore incompatible with the French colonial ideal of establishing a small-scale, family-farm type of settler economy.

My results also indicate that settler farmers were able to adopt more intensive techniques mainly by relying on the abundant indigenous labour force. The man-to-cultivable-land ratio, which increased after the 1870s owing to continuous indigenous population growth and colonial land-expropriation measures, eased settler cultivation, particularly on the frontier. This confirms that the availability of labour relative to land is an important variable to consider when assessing the impact of settlement on economic development. My findings accord with Lloyd and Metzer (2013, p. 20), who argue that in Africa, where the indigenous peasantry was significant, the labour surplus allowed low wages and ‘verged on servility’, leading to a ‘segmented labour and agricultural production system’. Moreover, it is precisely the presence of a large indigenous population relative to that of the settlers, and the reliance of settlers upon indigenous labour and the state (to access land and labour), that has allowed Lloyd and Metzer to describe Algeria (together with Southern Rhodesia, Kenya, and South Africa) as exhibiting a ‘somewhat different type of settler colonialism that emerged in Africa over the 19th and early 20th Centuries’ (2013, p. 2).

In conclusion, it is reasonable to assume that, as rural settlement gains ground within a colony, local endowments and cultivation requirements change. The case of rural settlement in Constantine reveals how settler farmers and colonial restrictions on ownership size adapted to the varying amounts of land and labour.


To contact: 


Twitter: @lmaravall



Ageron, C. R. (1991). Modern Algeria: a history from 1830 to the present (9th ed). Africa World Press.

Frankema, E. (2010). The colonial roots of land inequality: geography, factor endowments, or institutions? The Economic History Review, 63(2):418–451.

Frankema, E., Green, E., and Hillbom, E. (2016). Endogenous processes of colonial settlement. the success and failure of European settler farming in Sub-Saharan Africa. Revista de Historia Económica-Journal of Iberian and Latin American Economic History, 34(2), 237-265.

Easterly, W., and Levine, R. (2003). Tropics, germs, and crops: how endowments influence economic development. Journal of Monetary Economics, 50(1), 3-39.

Engerman, S. L., and Sokoloff, K. L. (2012). Economic development in the Americas since 1500: endowments and institutions. Cambridge University Press.

Lloyd, C. and Metzer, J. (2013). Settler colonization and societies in world history: patterns and concepts. In Settler Economies in World History, Global Economic History Series 9:1.

Lützelschwab, C. (2007). Populations and Economies of European Settlement Colonies in Africa (South Africa, Algeria, Kenya, and Southern Rhodesia). In Annales de démographie historique (No. 1, pp. 33-58). Belin.

Lützelschwab, C. (2013). Settler colonialism in Africa. In Lloyd, C., Metzer, J., and Sutch, R. (eds.), Settler economies in world history. Brill.

Willebald, H., and Juambeltz, J. (2018). Land Frontier Expansion in Settler Economies, 1830–1950: Was It a Ricardian Process? In Agricultural Development in the World Periphery (pp. 439-466). Palgrave Macmillan, Cham.

Plague and long-term development

by Guido Alfani (Bocconi University, Dondena Centre and IGIER)


The full paper has been published in The Economic History Review and is available here.

A YouTube video accompanies this work and can be found here.


How did preindustrial economies react to extreme mortality crises caused by severe epidemics of plague? Were health shocks of this kind able to shape long-term development patterns? While past research focused on the Black Death that affected Europe during 1347-52 (Álvarez Nogal and Prados de la Escosura 2013; Clark 2007; Voigtländer and Voth 2013), in a forthcoming article with Marco Percoco we analyse the long-term consequences of what was by far the worst mortality crisis affecting Italy during the early modern period: the 1629-30 plague, which killed an estimated 30-35% of the northern Italian population, about two million victims.


Figure 1. Luigi Pellegrini Scaramuccia (1670), Federico Borromeo visits the plague ward during the 1630 plague.


Source: Milan, Biblioteca Ambrosiana


This episode is significant for Italian history and, more generally, for our understanding of the Little Divergence between the North and South of Europe. It has recently been hypothesized that the 1630 plague was the source of Italy’s relative decline during the seventeenth century (Alfani 2013), but this hypothesis lacked solid empirical evidence. To resolve this question, we take a different approach from previous studies and demonstrate that the plague lowered the trajectory of development of Italian cities. We argue that this was mostly due to a productivity shock caused by the plague, but we also explore other contributing factors. Consequently, we provide support for the view that the economic consequences of severe demographic shocks need to be understood and studied on a case-by-case basis, as the historical context in which they occurred can lead to very different outcomes (Alfani and Murphy 2017).

After assembling a new database of mortality rates in a sample of 56 cities, we estimate a model of population growth allowing for different regimes of growth. We build on the seminal papers by Davis and Weinstein (2002) and Brakman et al. (2004), who based their analysis on a new economic geography framework in which a model of relative city-size growth is estimated to determine whether a shock has temporary or persistent effects. We find that cities affected by the 1629-30 plague experienced persistent, long-term effects (i.e., up to 1800) on their pattern of relative population growth.
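The intuition of this temporary-versus-persistent test can be sketched in a few lines. The code below is our own stylized illustration under simplifying assumptions (synthetic data, a simple bivariate OLS), not the authors’ actual specification:

```python
# Stylized version of a Davis-Weinstein type persistence test: regress
# post-shock growth in relative city size on shock-period growth.
import numpy as np

def recovery_coefficient(shock_growth, post_growth):
    """OLS slope of post-shock relative city growth on shock-period growth.
    A slope near -1 implies cities regained their lost population (the shock
    was temporary); a slope near 0 implies the loss was permanent."""
    X = np.column_stack([np.ones_like(shock_growth), shock_growth])
    beta, *_ = np.linalg.lstsq(X, post_growth, rcond=None)
    return beta[1]

# Synthetic illustration with 56 cities (matching the sample size above):
# log declines in relative size during a hypothetical 1629-30 shock.
shock = -np.linspace(0.10, 0.50, 56)

# Full recovery: post-shock growth exactly offsets the shock -> slope near -1.
print(recovery_coefficient(shock, -shock))         # approx. -1 (temporary)

# No recovery: post-shock growth unrelated to the shock -> slope near 0.
print(recovery_coefficient(shock, np.zeros(56)))   # approx. 0 (persistent)
```

The finding for the 1629-30 plague corresponds to the second case: affected cities showed no systematic rebound, so the shock shifted their relative size permanently.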


Figure 2. Giacomo Borlone de Buschis (attributed), Triumph of Death (1485), fresco


Source: Oratorio dei Disciplini, Clusone (Italy).


We complete our analysis by estimating the absolute impact of the epidemic. We find that in northern Italian regions the plague caused a lasting decline in both the size and the rate of change of urban populations. The lasting damage done to the urban population is shown in Figure 3. As for urbanization rates, it suffices to note that across the North of Italy, by 1700 (70 years after the 1630 plague), they were still more than 20 per cent lower than in the decades preceding the catastrophe (16.1 per cent in 1700 versus an estimated 20.4 per cent in 1600, for cities >5,000 inhabitants). Overall, these findings suggest that severe plagues can contribute to the decline of economic regions or whole countries. Our conclusions are strengthened by showing that while there is clear evidence of the negative consequences of the 1630 plague, there is hardly any evidence of a positive effect (Pamuk 2007). We hypothesize that the potential positive consequences of the 1630 plague were entirely eroded by a negative productivity shock.


Figure 3. Size of the urban population in Piedmont, Lombardy, and Veneto (1620-1700)


Source: see original article


By demonstrating that the plague had a persistent negative effect on many key Italian urban economies, we provide support for the hypothesis that the origins of relative economic decline in northern Italy are to be found in particularly unfavorable epidemiological conditions. It was the context in which the epidemic occurred that increased its ability to affect the economy, not the plague itself. Indeed, the 1630 plague struck the main states of the Italian peninsula at the worst possible moment, when their manufacturing sectors were dealing with increasing competition from northern European countries. This explanation offers an interpretation of the Little Divergence that differs from the recent literature.


To contact the author: guido.alfani@unibocconi.it



Alfani, G., ‘Plague in seventeenth century Europe and the decline of Italy: an epidemiological hypothesis’, European Review of Economic History, 17, 4 (2013), pp. 408-430.

Alfani, G. and Murphy, T., ‘Plague and Lethal Epidemics in the Pre-Industrial World’, Journal of Economic History, 77, 1 (2017), pp. 314-343.

Alfani, G. and Percoco, M., ‘Plague and long-term development: the lasting effects of the 1629-30 epidemic on the Italian cities’, The Economic History Review, forthcoming, https://doi.org/10.1111/ehr.12652

Álvarez Nogal, C. and Prados de la Escosura, L., ‘The Rise and Fall of Spain (1270-1850)’, Economic History Review, 66, 1 (2013), pp. 1-37.

Brakman, S., Garretsen H., Schramm M. ‘The Strategic Bombing of German Cities during World War II and its Impact on City Growth’, Journal of Economic Geography, 4 (2004), pp. 201-218.

Clark, G., A Farewell to Alms (Princeton, 2007).

Davis, D.R. and Weinstein, D.E. ‘Bones, Bombs, and Break Points: The Geography of Economic Activity’, American Economic Review, 92, 5 (2002), pp. 1269-1289.

Pamuk, S., ‘The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600’, European Review of Economic History, 11 (2007), pp. 289-317.

Voigtländer, N. and H.J. Voth, ‘The Three Horsemen of Riches: Plague, War, and Urbanization in Early Modern Europe’, Review of Economic Studies 80, 2 (2013), pp. 774–811.