Turkey’s Experience with Economic Development since 1820

by Şevket Pamuk (Boğaziçi [Bosphorus] University)

This research is part of a broader article published in the Economic History Review.

A podcast of Sevket’s Tawney lecture can be found here.


Pamuk 1

New Map of Turkey in Europe, Divided into its Provinces, 1801. Available at Wikimedia Commons.

The Tawney lecture, based on my recent book – Uneven centuries: economic development of Turkey since 1820, Princeton University Press, 2018 – examined the economic development of Turkey from a comparative global perspective. Using GDP per capita and other data, the book showed that Turkey’s record in economic growth and human development since 1820 has been close to the world average and a little above the average for developing countries. The early focus of the lecture was on the proximate causes — average rates of investment, below-average rates of schooling, low rates of total productivity growth, and the low technology content of production — which provide important insights into why improvements in GDP per capita were not higher. For more fundamental explanations I emphasized the role of institutions and institutional change. Since the nineteenth century, Turkey’s formal economic institutions have been influenced by international rules which did not always support economic development. Turkey’s elites also made extensive changes in formal political and economic institutions. However, these institutions provide only part of the story: the direction of institutional change also depended on the political order and the degree of understanding between different groups and their elites. When political institutions could not manage the recurring tensions and cleavages between the different elites, economic outcomes suffered.

There are a number of ways in which my study reflects some of the key trends in the historiography of recent decades. For example, until fairly recently, economic historians focused almost exclusively on the developed economies of western Europe, North America, and Japan. Lately, however, economic historians have been shifting their focus to developing economies. Moreover, as part of this reorientation, considerable effort has been expended on constructing long-run economic series, especially GDP and GDP per capita, as well as series on health and education. In this context, I have constructed long-run series for the area within the present-day borders of Turkey. These series rely mostly on official estimates for the period after 1923 and make use of a variety of evidence for the Ottoman era, including wages, tax revenues, and foreign trade series. In common with the series for other developing countries, many of my calculations for Turkey are subject to larger margins of error than similar series for developed countries. Nonetheless, they provide insights into the developmental experience of Turkey and other developing countries that would not have been possible two or three decades ago. Finally, in recent years, economists and economic historians have made an important distinction between the proximate causes and the deeper determinants of economic development. While the literature on the proximate causes of development focuses on investment, accumulation of inputs, technology, and productivity, discussions of the deeper causes consider the broader social, political, and institutional environment. Both sets of arguments are utilized in my book.

I argue that an interest-based explanation can address both the causes of long-run economic growth and its limits. Turkey’s formal economic institutions and economic policies underwent extensive change during the last two centuries. In each of the four historical periods I define, Turkey’s economic institutions and policies were influenced by international or global rules which were enforced either by the leading global powers or, more recently, by international agencies. Additionally, since the nineteenth century, elites in Turkey made extensive changes to formal political institutions. In response to European military and economic advances, the Ottoman elites adopted a programme of institutional changes that mirrored European developments; this programme continued during the twentieth century. Such fundamental changes helped foster significant increases in per capita income as well as major improvements in health and education.

But it is also necessary to examine how these new formal institutions interacted with the process of economic change – for example, changing social structure and variations in the distribution of power and expectations — to understand the scale and characteristics of growth that the new institutional configurations generated.

These interactions were complex. It is not easy to ascribe the outcomes created in Turkey during these two centuries to a single cause. Nonetheless, it is safe to state that in each of the four periods, the successful development of new institutions depended on the state making use of the different powers and capacities of the various elites. More generally, economic outcomes depended closely on the nature of the political order and the degree of understanding between different groups in society and the elites that led them. However, one of the more important characteristics of Turkey’s social structure has been the recurrence of tensions and cleavages between its elites. While they often appeared to be based on culture, these tensions overlapped with competing economic interests which were, in turn, shaped by the economic institutions and policies generated by the global economic system. When political institutions could not manage these tensions well, Turkey’s economic outcomes remained close to the world average.

All quiet before the take-off? Pre-industrial regional inequality in Sweden (1571-1850)

by Anna Missiaia and Kersten Enflo (Lund University)

This research is due to be published in the Economic History Review and is currently available on Early View.


Missiaia Main.jpg
Södra Bancohuset (The Southern National Bank Building), Stockholm. Available here at Wikimedia Commons.

For a long time, scholars have thought about regional inequality merely as a by-product of modern economic growth: following a Kuznets-style interpretation, the front-running regions increase their income levels and regional inequality during industrialization; it is only when the other regions catch up that overall regional inequality decreases, completing the inverted-U-shaped pattern. But early empirical research on this theme was largely focused on the 20th century, ignoring the industrial take-off of many countries (Williamson, 1965). More recent empirical studies have pushed the temporal boundary back to the mid-19th century, finding that inequality in regional GDP was already high at the outset of modern industrialization (see for instance Rosés et al., 2010 on Spain and Felice, 2018 on Italy).

The main constraint for taking the estimations well into the pre-industrial period is the availability of suitable regional sources. The exceptional quality of Swedish sources allowed us for the first time to estimate a dataset of regional GDP for a European economy going back to the 16th century (Enflo and Missiaia, 2018). The estimates used here for 1571 are largely based on a one-off tax proportional to the yearly production: the Swedish Crown imposed this tax on all Swedish citizens in order to pay a ransom for the strategic Älvsborg castle that had just been conquered by Denmark. For the period 1750-1850, the estimates rely on standard population censuses. By connecting the new series to the existing ones from 1860 onwards by Enflo et al. (2014), we obtain the longest regional GDP series for any given country.

We find that inequality increased dramatically between 1571 and 1750 and remained high until the mid-19th century. Thereafter, it declined during the country’s modern industrialization (Figure 1). Our results challenge the traditional view that regional divergence can only originate during an industrial take-off.


Figure 1. Coefficient of variation of GDP per capita across Swedish counties, 1571-2010.

Missiaia 1
Sources: 1571-1850: Enflo and Missiaia, ‘Regional GDP estimates for Sweden, 1571-1850’; 1860-2010: Enflo et al., ‘Swedish regional GDP 1855-2000’ and Rosés and Wolf, ‘The Economic Development of Europe’s Regions’.
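The dispersion measure plotted in Figure 1 is the coefficient of variation (the standard deviation of county GDP per capita divided by its mean). A minimal sketch follows; the county values below are made up for illustration, not the Swedish data:

```python
import numpy as np

def coefficient_of_variation(values):
    """Unweighted coefficient of variation: standard deviation / mean."""
    x = np.asarray(values, dtype=float)
    return float(x.std(ddof=0) / x.mean())

# Hypothetical county GDP per capita (index numbers, not real data)
fairly_equal = [100, 105, 95, 102]   # low dispersion, as in 1571
unequal = [100, 250, 60, 300]        # high dispersion, as around 1850
```

A rising coefficient of variation over time, as in Figure 1 between 1571 and 1750, indicates growing dispersion of income across counties.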


Figure 2 shows the relative disparities in four benchmark years. While the country appeared relatively equal in 1571, between 1750 and 1850 both the mining districts of central and northern Sweden and the port cities of Stockholm and Gothenburg pulled ahead.


Figure 2. The relative evolution of GDP per capita, 1571-1850 (Sweden=100).

Missiaia 2
Sources: 1571-1850: Enflo and Missiaia, ‘Regional GDP estimates for Sweden, 1571-1850’; 2010: Rosés and Wolf, ‘The Economic Development of Europe’s Regions’.

The second part of the paper is devoted to the drivers of pre-industrial regional inequality. Decomposing the Theil index for GDP per worker, we show that regional inequality was driven by structural change: regions diverged because they specialized in different sectors. A handful of regions specialized in either early manufacturing or mining, both of which had much higher productivity per worker than agriculture.
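The kind of decomposition used here, the standard split of the Theil T index into a between-sector ("structural") component and a within-sector component, can be sketched as follows. This is an illustrative implementation of the textbook identity, not the authors' actual code; the matrices of regional GDP and workers by sector are hypothetical:

```python
import numpy as np

def theil(y, l):
    """Theil T index of GDP per worker across units.
    y, l: GDP and worker totals by unit (region or region-sector cell)."""
    y = np.asarray(y, dtype=float)
    l = np.asarray(l, dtype=float)
    ys, ls = y / y.sum(), l / l.sum()
    return float(np.sum(ys * np.log(ys / ls)))

def decompose_by_sector(Y, L):
    """Y, L: regions x sectors matrices of GDP and workers.
    Returns (between_sector, within_sector); the two components sum to
    the Theil index computed over all region-sector cells."""
    Y = np.asarray(Y, dtype=float)
    L = np.asarray(L, dtype=float)
    sy = Y.sum(axis=0) / Y.sum()   # sector income shares
    sl = L.sum(axis=0) / L.sum()   # sector labour shares
    between = float(np.sum(sy * np.log(sy / sl)))
    within = float(sum((Y[:, j].sum() / Y.sum()) * theil(Y[:, j], L[:, j])
                       for j in range(Y.shape[1])))
    return between, within
```

A large between-sector component means inequality comes from regions specializing in sectors with different productivity levels, which is the paper's structural-change finding.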

To explain this different trajectory, we use a theoretical framework introduced by Strulik and Weisdorf (2008) in the context of the British Industrial Revolution: in regions with a higher share of GDP in agriculture, technological advancements lead to productivity improvements but also to a proportional increase in population, impeding the growth in GDP per capita as in a classic Malthusian framework. Regions with a higher share of GDP in industry, on the other hand, experienced limited population growth due to the increasing relative price of children, leading to a higher level of GDP per capita. Regional inequality in this framework arises from a different role of the Malthusian mechanism in the two sectors.

Our work speaks to a growing literature on the origin of regional divergence and represents the first effort to perform this type of analysis before the 19th century.


To contact the authors:





Enflo, K. and Missiaia, A., ‘Regional GDP estimates for Sweden, 1571-1850’, Historical Methods, 51(2018), 115-137.

Enflo, K., Henning, M. and Schön, L., ‘Swedish regional GDP 1855-2000: estimations and general trends in the Swedish regional system’, Research in Economic History, 30 (2014), pp. 47-89.

Felice, E., ‘The roots of a dual equilibrium: GDP, productivity, and structural change in the Italian regions in the long run (1871-2011)’, European Review of Economic History, (2018), forthcoming.

Rosés, J., Martínez-Galarraga, J. and Tirado, D., ‘The upswing of regional income inequality in Spain (1860–1930)’, Explorations in Economic History, 47 (2010), pp. 244-257.

Strulik, H. and Weisdorf, J., ‘Population, food, and knowledge: a simple unified growth theory’, Journal of Economic Growth, 13 (2008), pp. 195-216.

Williamson, J., ‘Regional Inequality and the Process of National Development: A Description of the Patterns’, Economic Development and Cultural Change 13(1965), pp. 1-84.


A Policy of Credit Disruption: The Punjab Land Alienation Act of 1900

by Latika Chaudhary (Naval Postgraduate School) and Anand V. Swamy (Williams College)

This research is due to be published in the Economic History Review and is currently available on Early View.


Farming, farms, crops, agriculture in North India. Available at Wikimedia Commons.

In the late 19th century the British-Indian government (the Raj) became preoccupied with default on debt and the consequent transfer of land in rural India. In many regions Raj officials made the following argument: British rule had created or clarified individual property rights in land, which had for the first time made land available as collateral for debt. Consequently, peasants could now borrow up to the full value of their land. The Raj had also replaced informal village-based forms of dispute resolution with a formal legal system operating outside the village, which favored the lender over the borrower. Peasants were spendthrift and naïve, and unable to negotiate the new formal courts created by British rule, whereas lenders were predatory and legally savvy. Borrowers were frequently defaulting, and land was rapidly passing from long-standing resident peasants to professional moneylenders who were often either immigrant, of another religion, or sometimes both.  This would lead to social unrest and threaten British rule. To preserve British rule it was essential that one of the links in the chain be broken, even if this meant abandoning cherished notions of sanctity of property and contract.

The Punjab Land Alienation Act (PLAA) of 1900 was the most ambitious policy intervention motivated by this thinking. It sought to prevent professional moneylenders from acquiring the property of traditional landowners. To this end it banned, except under some conditions, the permanent transfer of land from an owner belonging to an ‘agricultural tribe’ to a buyer or creditor who was not from this tribe. Moreover, a lender who was not from an agricultural tribe could no longer seize the land of a defaulting debtor who was from an agricultural tribe.

The PLAA made direct restrictions on the transfer of land a respectable part of the policy apparatus of the Raj, and its influence persists to the present day. There is a substantial literature on the emergence of the PLAA, yet there is no econometric work on two basic questions regarding its impact. First, did the PLAA reduce the availability of mortgage-backed credit? Or were borrowers and lenders able to use various devices to evade the Act, thereby neutralizing it? Second, if less credit was available, what were the effects on agricultural outcomes and productivity? We use panel data methods to address these questions, for the first time so far as we know.

Our work provides evidence regarding an unusual policy experiment that is relevant to a hypothesis of broad interest. It is often argued that ‘clean titling’ of assets can facilitate their use as collateral, increasing access to credit and leading to more investment and faster growth. Informed by this hypothesis, many studies estimate the effects of titling on credit and other outcomes, but they usually pertain to making assets more usable as collateral. The PLAA went in the opposite direction: it reduced the ‘collateralizability’ of land, which, based on the argument we have described, should have reduced investment and growth. We investigate whether it did.

To identify the effects of the PLAA, we assembled a panel dataset on 25 districts in Punjab from 1890 to 1910. Our dataset contains information on mortgages and sales of land; economic outcomes, such as acreage and ownership of cattle; and control variables like rainfall and population. Because the PLAA targeted professional moneylenders, it should have reduced mortgage-backed credit more in places where they were bigger players in the credit market. Hence, we interact a measure of the importance of professional (that is, non-agricultural) moneylenders in the mortgage market with an indicator variable for the introduction of the PLAA, which takes the value of 1 from 1900 onward. As expected, we find that the PLAA contracted credit more in places where professional moneylenders played a larger role, compared to districts with no professional moneylenders. The PLAA reduced mortgage-backed credit by 48 percentage points more at the 25th percentile of our measure of moneylender importance and by 61 percentage points more at the 75th percentile.

However, this decrease of mortgage-backed credit in professional moneylender-dominated areas did not lead to lower acreage or less ownership of cattle. In short, the PLAA affected credit markets as we might expect, without undermining agricultural productivity. Because we have panel data, we are able to account for potential confounding factors such as time-invariant unobserved differences across districts (using district fixed effects), shocks common to all districts (using year effects), and the possibility that districts were trending differently independent of the PLAA (using district-specific time trends).
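The specification described above, an interaction of moneylender importance with a post-1900 indicator alongside district and year fixed effects, can be sketched as plain OLS with dummy variables. This is an illustrative reconstruction (district-specific trends omitted for brevity), not the authors' estimation code, and all variable names are hypothetical:

```python
import numpy as np

def twfe_interaction(y, share, post, district, year):
    """OLS of y on (share x post) plus district and year fixed effects.
    Returns the coefficient on the interaction term. Illustrative only."""
    d = np.asarray(district)
    t = np.asarray(year)
    cols = [np.asarray(share, dtype=float) * np.asarray(post, dtype=float)]
    for dv in np.unique(d)[1:]:          # district dummies (one dropped)
        cols.append((d == dv).astype(float))
    for tv in np.unique(t)[1:]:          # year dummies (one dropped)
        cols.append((t == tv).astype(float))
    cols.append(np.ones(len(np.asarray(y))))  # intercept
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return float(beta[0])
```

A negative interaction coefficient corresponds to the paper's finding: credit contracted more after 1900 where professional moneylenders mattered more.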

British officials provided a plausible explanation for the non-impact of the PLAA on agricultural production: lenders had merely become more judicious – they were still willing to lend for productive activity, but not for ‘extravagant’ expenditures such as social ceremonies. There may be a general lesson here: policies that make it harder for lenders to recover money may have the beneficial effect of encouraging due diligence.



To contact the authors:



Why did the industrial diet triumph?

by Fernando Collantes (University of Zaragoza and Instituto Agroalimentario de Aragón)

This blog is part of a larger research paper published in the Economic History Review.


Harvard food pyramid. Available at Wikimedia Commons.

Consumers in the Northern hemisphere are feeling increasingly uneasy about their industrial diet. Few question that during the twentieth century the industrial diet helped us solve the nutritional problems related to scarcity. But there is now growing recognition that the triumph of the industrial diet triggered new problems related to abundance, among them obesity, excessive consumerism, and environmental degradation. Currently, alternatives ranging from organic food to products bearing geographical-‘quality’ labels struggle to transcend the industrial diet. Frequently, these alternatives face a major obstacle: their relatively high price compared to mass-produced and mass-retailed food.

The research that I have conducted examines the literature on nutritional transitions, food regimes and food history, and positions it within present-day debates on diet change in affluent societies. I employ a case-study of the growth in mass consumption of dairy products in Spain between 1965 and 1990. In the mid-1960s, dairy consumption was very low in Spain and many suffered from calcium deficiency. Subsequently, there was a rapid growth in consumption. Milk, especially, became an integral part of the diet for the population. Alongside mass consumption there was also mass-production and complementary technical change. In the early 1960s, most consumers only drank raw milk, but by the 1990s milk was being sterilised and pasteurised to standard specifications by an emergent national dairy industry.

In the early 1960s, the regular purchase of milk was too expensive for most households. By the early 1990s, an increase in household incomes, complemented by (alleged) price reductions generated by dairy industrialization, facilitated rapid growth in milk consumption. A further factor aiding consumption was changing consumer preferences. Previously, consumers’ perceptions of milk were affected by recurrent episodes of poisoning and fraud. The process of dairy industrialization ensured a greater supply of ‘safe’ milk, and this encouraged consumers to use their increased real incomes to buy more milk. ‘Quality’ milk, meaning milk that was safe to consume, became the main theme of the advertising campaigns employed by milk processors (Figure 1).


Figure 1. Advertisement by La Lactaria Española in the early 1970s.

Collantes Pic
Source: Revista Española de Lechería, no. 90 (1973).


What are the implications of my research for contemporary debates on food quality? First, the transition toward a diet richer in organic foods and foods characterised by short supply chains, artisan-like production, and geographical-quality labels has more than niche relevance. There are historical precedents (such as the one studied in this article) of large sections of the populace willing to pay premium prices for food products that are in some sense perceived as qualitatively superior to other, more ‘conventional’ alternatives. If it happened in the past, it can happen again. Indeed, new qualitative substitutions are already taking place. The key issue is the direction of this substitution. Will consumers use their affluence to ‘green’ their diet? Or will they use higher incomes to purchase more highly processed foods, with possibly negative implications for public health and environmental sustainability? This juncture between food-system dynamics and public policy is crucial. As Fernand Braudel argued, it is the extraordinary capacity for adaptation that defines capitalism. My research suggests that we need public policies that reorient food capitalism towards socially progressive ends.


To contact the author:  collantf@unizar.es

Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69

by Jean-Pascal Bassino (ENS Lyon) and Pierre van der Eng (Australian National University)

This blog is part of a larger research paper published in the Economic History Review.


Vietnam, rice paddy. Available at Pixabay.

In the ‘great divergence’ debate, China, India, and Japan have been used to represent the Asian continent. However, their development experience is unlikely to be representative of Asia as a whole. The countries of Southeast Asia were relatively underpopulated for a considerable period, and their very different endowments of natural resources (particularly land) and labour were key parameters that determined economic development options.

Maddison’s series of per capita GDP in purchasing power parity (PPP) adjusted international dollars, based on a single 1990 benchmark and backward extrapolation, indicate that a divergence took place in 19th-century Asia: Japan was well above other Asian countries in 1913. In 2018 the Maddison Project Database released a new international series of GDP per capita that accommodates the available historical PPP-based converters. Due to the very limited availability of historical PPP-based converters for Asian countries, the 2018 database retains many of the shortcomings of the single-year extrapolation.

Maddison’s estimates indicate that Japan’s GDP per capita in 1913 was much higher than in other Asian countries, and that Asian countries started their development experiences from broadly comparable levels of GDP per capita in the early nineteenth century. This implies that an Asian divergence took place in the 19th century as a consequence of Japan’s economic transformation during the Meiji era (1868-1912). There is now growing recognition that the choice of a single benchmark year may influence estimated historical levels of GDP per capita across countries. Moreover, the relative levels of Asian countries based on Maddison’s estimates of per capita GDP are not confirmed by other indicators, such as real unskilled wages or the average height of adults.

Our study uses available estimates of GDP per capita in current prices from historical national accounting projects, and estimates PPP-based converters and PPP-adjusted GDP at multiple benchmark years (1913, 1922, 1938, 1952, 1958, and 1969) for India, Indonesia, Korea, Malaya, Myanmar (then Burma), the Philippines, Sri Lanka (then Ceylon), Taiwan, Thailand, and Vietnam, relative to Japan. China is added on the basis of other studies. The PPP-based converters are used to calculate GDP per capita in constant PPP yen. The resulting indices, expressed as a proportion of Japan’s GDP per capita during 1910–70 in 1934–6 yen, are then converted to 1990 international dollars by relying on a PPP-adjusted Japanese series comparable to US GDP series. Figure 1 presents the resulting series for Asian countries.


Figure 1. GDP per capita in selected Asian countries, 1910–1970 (1934–6 Japanese yen)

Sources: see original article.
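The chained conversion described above can be sketched as follows, assuming the PPP converter is expressed as local currency units per 1934–6 yen of equivalent purchasing power. The function and the numbers in the test are illustrative only, not the authors' actual converters or estimates:

```python
def to_1990_intl_dollars(gdp_pc_local, ppp_local_per_yen,
                         japan_gdp_pc_yen, japan_gdp_pc_1990usd):
    """Convert GDP per capita in local currency to 1990 international dollars.

    Step 1: deflate into constant 1934-6 yen using the PPP converter.
    Step 2: express the result relative to Japan, then scale by Japan's
    level in 1990 international dollars (itself benchmarked to the US).
    """
    gdp_pc_yen = gdp_pc_local / ppp_local_per_yen
    return (gdp_pc_yen / japan_gdp_pc_yen) * japan_gdp_pc_1990usd
```

Chaining through Japan in this way means each country's level depends on its own PPP benchmark rather than on a single backward extrapolation from 1990.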


The conventional view dates the start of the divergence to the nineteenth century. Our study identifies the First World War and the 1920s as the era during which the little divergence in Asia occurred. During the 1920s, most countries in Asia — except Japan — depended significantly on exports of primary commodities. The growth experience of Southeast Asia seems to have been largely characterised by market integration in national economies and by the mobilisation of hitherto underutilised resources (labour and land) for export production. Particularly in the land-abundant parts of Asia, the opening-up of land for agricultural production led to economic growth.

Commodity price changes may have become debilitating when their volatility increased after 1913. This was followed by episodes of import-substituting industrialisation, particularly after 1945. While Japan rapidly developed its export-oriented manufacturing industries from the First World War, other Asian countries increasingly had inward-looking economies. This pattern lasted until the 1970s, when some Asian countries followed Japan on a path of export-oriented industrialisation and economic growth. For some countries this was a staggered process that lasted well into the 1990s, when the World Bank labelled this development the ‘East Asian miracle’.


To contact the authors:





Bassino, J-P. and Van der Eng, P., ‘Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69’, Economic History Review (forthcoming).

Fouquet, R. and Broadberry, S., ‘Seven centuries of European economic growth and decline’, Journal of Economic Perspectives, 29 (2015), pp. 227–44.

Fukao, K., Ma, D., and Yuan, T., ‘Real GDP in pre-war Asia: a 1934–36 benchmark purchasing power parity comparison with the US’, Review of Income and Wealth, 53 (2007), pp. 503–37.

Inklaar, R., de Jong, H., Bolt, J., and van Zanden, J. L., ‘Rebasing “Maddison”: new income comparisons and the shape of long-run economic development’, Groningen Growth and Development Centre Research Memorandum no. 174 (2018).

Link to the website of the Southeast Asian Development in the Long Term (SEA-DELT) project:  https://seadelt.net

Institutional choice in the governance of the early Atlantic sugar trade: diasporas, markets and courts

by Daniel Strum (University of São Paulo)

This article is published by The Economic History Review, and it is available for EHS members here.


Strum Pic
Figure 1. Cartographic chart of the Atlantic Ocean (c. 1600). Source: Biblioteca Nazionale Centrale di Firenze, Florence, Italy. Port. 27.  By kind permission of the Ministero per i Beni e le Attivitá Culturali della Repubblica Italiana.
Reproduction of this image by any means is strictly prohibited.

In the age of sailboats, how could traders be confident that the parties with whom they were considering working on the other side of the ocean would not act opportunistically? Commercial agents overseas spared merchants time and the hazards of travel and allowed them to diversify their investments; but agents might also cheat or renege on or neglect their commitments.

My research about the merchants of Jewish origin plying the sugar trade linking Brazil, Portugal and the Netherlands demonstrates that the same merchants chose different feasible mechanisms (institutions) to curb opportunism in different types of transactions. Its main contribution is to establish a clear pattern linking the attributes of these transactions to those of the mechanisms chosen to enforce them. It also shows how these mechanisms interrelated.

Around 1600, Europe experienced rapidly growing urban populations and dependence on trade for supplies of basic products, while overseas possessions contributed to a surging output of marketable commodities, including sugar. Brazil was turned into the first large-scale plantation economy and became the world’s main sugar producer, with Amsterdam emerging as its main distribution and refining centre. Most of the Brazilian sugar trade was intermediated by merchants in Portugal, and traders of Jewish origin scattered along this trade route played a prominent role in the sugar trade.  The Brazilian sugar trade required institutions with low costs in agency services and contract enforcement because it was a significantly competitive market. Its political, legal, and administrative framework raised relatively few obstacles to market entrants, and trade in a semi-luxury commodity necessitated low start-up costs.

Sources reveal that merchants of Jewish origin mostly engaged individuals of other backgrounds for transactions in which agents had little latitude, performed simple tasks over short periods, and managed small sums (see table 1). Insiders were not excluded from these transactions, but the background of agents was not determinant. The research shows that these transactions were primarily enforced by an informal mechanism that linked one’s expected income to one’s professional reputation. Bad conduct led to marginalization, while good behaviour brought further opportunities from the same and other principals. This mechanism functioned among all traders active in these interconnected marketplaces, despite their differing backgrounds. The professional reputation mechanism worked because a standardization of basic mercantile practices produced a shared understanding of how trade should be conducted. At the same time, the marketplaces’ structure, together with patterns of transportation and correspondence, increased the speed, frequency, volume, and diversity of the information flow within and between these marketplaces. This information system facilitated the detection of both good and bad conduct and a relatively rapid response to news about it.


Strum Pic 2
Figure 2. A sugar crate being weighed at the Palace Square in Lisbon. Source: Dirk Stoop, Terreiro do Paço no século XVII, 1662. Painting. Museu da Cidade, Lisboa, Portugal. MC.PIN.261. © Museu da Cidade – Câmara Municipal de Lisboa.

The professional reputation mechanism worked better for transactions involving small sums and fewer, simpler, and shorter tasks. Misconduct in these tasks was easier to detect and expose amid an extensive and heterogeneous network; and if the agent cheated, the small sums assigned were not enough to live on after forsaking trade.


Table 1. Backgrounds of agents in complex and simple arrangements

Type of transaction | Outsiders | Probable outsiders | Insiders | Probable insiders | Relatives
Complex             |  2.6%     |  4.9%              | 69.9%    |  2.1%             | 20.6%
Simple              | 20.0%     | 70.0%              |  0%      | 10.0%             |  0%

Source: original article in the Economic History Review.


On the other hand, merchants of Jewish origin preferred to engage members of their own diaspora in complex, larger, and longer transactions (see table 1). A reputation mechanism within the diaspora was more effective in governing transactions that were difficult to follow. Although enforcement within the diaspora benefitted from the general information system, the diaspora’s social structure generated more information, more rapidly, about the conduct of its members. In each centre, insiders knew each other, and marriages and socialization within the group prevailed. Insiders usually had personal acquaintances, and often relatives, in other centres as well. They were conscious of their common history and fragile status. This social structure also provided greater economic and social incentives for honesty and diligence than the professional mechanism, making the internal mechanism preferable for transactions involving larger sums and wider latitude.

Finally, the research shows that the legal system was able to impose sanctions across wide distances and political units. Yet owing to courts’ slowness and costliness, merchants resorted to litigation only after nonjudicial mechanisms failed. Furthermore, courts could not punish inattention that did not breach legal, customary, or contractual specifications, nor could courts reward accomplishment.

Litigation supplemented the professional mechanism because the latter’s incentives were not homogeneous across all marketplaces and diasporas. Courts also reinforced the diaspora mechanism by limiting the future income an agent expected to gain from misappropriating large sums from one or many principals. Finally, the professional mechanism supplemented the diaspora mechanism by limiting alternative agency relations with outsiders for insiders who had engaged in misconduct.

Because merchants were capable of matching transactions with the most appropriate governing mechanisms, they were able to diversify their transactions, expand the market for agents, better allocate agents to tasks, and stimulate competition among them. The resulting decrease in agency costs was critical in a highly competitive market such as the sugar trade. Institutional choice thus supported and reinforced—rather than caused—the expansion of exchange.

Almshouses in early modern England: charitable housing in the mixed economy of welfare 1550-1725

review by David Hitchcock (Christ Church University)

book written by Angela Nicholls

‘Almshouses in early modern England: charitable housing in the mixed economy of welfare 1550-1725’ is published by Boydell and Brewer. SAVE 25% when you order direct from the publisher – offer ends on the 7th May 2019. See below for details.



Almshouses were ‘curious institutions’, ‘built by the rich to be lived in by the poor’ (p. 1). In the first monograph to focus exclusively on the role of early modern almshouses in welfare provision, Angela Nicholls traces not only the development of almshouse foundations and the motivations of their founders, but also, crucially, the lived experience and material benefits of an alms place as a respectable or ancient pauper in early modern English parishes. Until recently a ‘known unknown’ (p. 3) in early modern welfare history, charitable housing of any kind was of course far more than simply the provision of a roof and walls; it was also a guarantee of place, of belonging, and of social meaning within the context of parish and community. Nicholls examines the almshouse from many angles: first set within an overview of early modern housing policy, and subsequently in chapters dedicated to donors and founders, to residents and their experiences, and finally to a detailed case study of the parish of Leamington Hastings. Nicholls argues that early modern almshouses were distinct from their medieval predecessors and eighteenth-century descendants for a number of reasons, not least their prominent and sustained place in the mixed economy of parish welfare between monastic dissolution and Knatchbull’s Workhouse Test Act of 1723. The study focuses broadly on evidence from three dispersed counties: Durham, Warwickshire, and Kent, and importantly uses a generous definition of what constitutes an ‘almshouse’ in the first place, thus excavating many more humble institutions than previous historiography accounts for.

Chapter one on housing policy opens with a strong statement about the quintessential purpose of Tudor and Stuart poor relief, and particularly of welfare legislation: the prevention of vagrancy and of idleness. Nicholls’ reading of the roles housing provision played within the poor laws chimes generally with the historiographical consensus, though she makes some important new suggestions. For instance, the 1547 act actually enjoined parishes to provide ‘cotages’ to vagrants once they had been forcibly returned to their places of origin (p. 22), and Nicholls makes a strong case that the language of ‘Abiding Places’ in the ’47 and indeed 1572 laws might well refer to the English equivalent of hôpital général places for former vagrants and not strictly to their commitment to houses of correction. The effective result of these sorts of injunctions was the accumulation of a robust stock of pauper housing in parishes across the kingdom, housing which remained reserved to the poor well into the eighteenth century, until attitudes towards personal subsistence and idleness hardened still further. Chapter two charts the surge in almshouse provision and endowment across the period and visualizes this provision brilliantly across several figures and maps (Figure 2.2, p. 45, graphing almshouse foundations by decade is particularly revealing). Nicholls concludes here that endowing an almshouse was often a response to generalised, national anxieties or prompts rather than just to local concerns, in effect demonstrating another way that the ‘integration’ thesis of Keith Wrightson was borne out by the bequests of local propertied elites.

The second set of chapters focuses on founders and inhabitants. Nicholls unpacks the manifold motivations of almshouse founders such as Rev. Nicholas Chamberlaine with dexterity, going well beyond the traditional ‘purchase of prayer’ model (p. 62). She disagrees with W.K. Jordan’s account of a secular shift in the rationales behind charitable giving, and outlines a suite of additional motives which prominently included local memorialisation and social status and the buttressing of confessional Protestant identities. I found it interesting that Nicholls actually explores ‘order and good governance’ (pp. 86-88) of the parish in subsequent chapters as a desired outcome of endowment, and broadly from the historical perspective of almshouse inhabitants, rather than in the same chapter as other founder motivations. In the section on inhabitants and the material benefits of alms places Nicholls questions how ‘fastidious’ early modern almshouse foundations actually were with respect to inhabitants (p. 90). Some criteria such as geographical proximity were consistent across most almshouses; others such as old age, gender, infirmity, or fraternal or confessional membership were endowment specific. Nicholls also notes that the historiographical interest in ‘rules of behaviour’ for almshouses is out of proportion with the actual number of houses (very few) that had rules at all (p. 126). She also debunks the contention that the material benefits of an alms place created a ‘pauper elite’ (p. 184) and demonstrates wide variation across hundreds of endowed places.

The final chapter brings together the rich records of county Warwickshire to produce a parish history of a ‘seventeenth-century Welfare Republic’ in Leamington Hastings (p. 188). Nicholls traces the origins of the Hastings house to Humphrey Davis and his will of 1607, which subsequently fell into ‘legal limbo’ (p. 195) until its revival under Thomas Trevor as lord of Hastings manor estate in the 1630s. Nicholls situates the almshouse within the private charitable economy of Leamington Hastings, which also included the ‘Poors Plot’ charity subsidising access to land and schemes for parish stock and further cottage housing (p. 221). Nicholls concludes that we cannot view almshouses—however privately endowed and idiosyncratically managed—as hermetically sealed off from state welfare provision: it was, after all, often the same people managing both. Almshouses in early modern England is a definitive monograph, cogently assembled and clearly written, with the histories of alms-people and charity at its heart. It is also filled with evidence of the care and nuance with which Nicholls approaches her subject, visible not least in the author’s photography, detailed online appendices and databases, and encyclopaedic knowledge of the associated archives. If you want to learn about the history of early modern charitable housing, you should read this book.


SAVE 25% when you order direct from the publisher using the offer code B125 online here. Offer ends 7th May 2019. Discount applies to print and eBook editions. Alternatively call Boydell’s distributor, Wiley, on 01243 843 291, and quote the same code. Any queries please email marketing@boydell.co.uk


Note: this post appeared as a book review article in the Review. We have obtained the necessary permissions.

The age of mass migration in Latin America

by Blanca Sánchez-Alonso (Universidad San Pablo-CEU, Madrid)

This article is published by The Economic History Review, and it is available on the EHS website.


General Carneiro station which belonged to Minas and Rio railway. Minas Gerais province, Brazil, c.1884. Available at Wikimedia Commons.

Latin America was considered a ‘land of opportunity’ between 1870 and 1930.  During that period 13 million Europeans migrated to this region.  However, the experiences of Latin American countries are not fully incorporated into current debates concerning the age of mass migration.

The main objective of my article, ‘The age of mass migration in Latin America’,  is to rethink the role of European migration to the region in the light of new research. It addresses several major questions suggested by the economic literature on migration: whether immigrants were positively selected from their sending countries, how immigrants assimilated into host economies, the role of immigration policies, and the long-run effects of European immigration on Latin America.

Immigrants overwhelmingly originated from the economically backward areas of southern Europe. Traditional interpretations have tended to extrapolate the economic backwardness of Italy, Spain, and Portugal (measured in terms of per capita GDP relative to advanced European countries) to emigration flows. Yet, judging by literacy levels, migrants to Latin America from southern European countries were positively selected. Immigrants to Latin America from Spain, Italy, and Portugal were drawn from the northern regions of these countries, which had higher levels of literacy; very few came from the southern regions. When immigrant literacy is compared with that of potential emigrants from regions of high emigration, positive selection appears quite clear.

One proxy often used to signal positive self-selection is upward mobility within and across generations. Recent empirical research shows that it was the possibility of rapid social upgrading that made Argentina attractive to immigrants. First-generation immigrants experienced faster occupational upgrading than natives; upward occupational mobility occurred for a large proportion of those who declared unskilled occupations on arrival. Immigrants to Argentina experienced very fast growth in occupational earnings (6 per cent faster than natives) between 1869 and 1895. For the city of Buenos Aires in 1895, new evidence shows that Italian and Spanish males received, on average, 80 per cent of average native-born earnings. In some categories, such as crafts and services, immigrants obtained higher wages than natives. These findings provide an economic rationale for why some Europeans chose Argentina over the US, despite a smaller wage differential between the originating country and the destination.

Immigrants appear to have adjusted successfully to Latin American labour markets.  This is evidenced by access to property and in the large ownership of businesses.  Almost all European communities experienced strong and fast upward social mobility in the destination countries. Whether this was because of positive selection at home or because of the relatively low skill levels in the host societies is still an open question.

European immigrants to Latin America had higher levels of literacy than the native population. Despite non-selective immigration policies, Latin American countries received immigrants with higher levels of human capital compared to natives. Linking immigrants’ human capital to long run economic and educational outcomes has been the focus of recent research for Brazil and Argentina. The impact of immigration in those areas with higher shares of Europeans appears to be important since immigrants demanded and created schools (public or private). New research presents evidence of path dependency linking past immigrants’ human capital with present outcomes in economic development in the region.

Immigration policies in Latin America raised few barriers to European immigration. However, the political economy of immigration policy of Argentina shows a more complicated story than the classic representation of landowners constantly supporting an open-door policy.

Brazil developed a long-lasting programme of subsidized immigration. The expected income of immigrants to São Paulo was augmented by prospective savings, a guaranteed job on arrival, and the subsidized transportation cost. Going to Brazil was perceived as a good investment in southern Europe. Transport subsidies and the peculiarities of the colono contract in the coffee areas seem more important explanations than real wage differentials for understanding how Brazil competed for workers in the international labour market. The Lewis model merits further investigation for two main reasons. First, labour supply increased faster than the number of workers needed for the coffee expansion because of subsidies and, second, labour markets in São Paulo were segmented. European immigrants supplied only a fraction (though a substantial one) of the total labour force needed for the coffee plantations. The internal supply of workers became increasingly important and must be included in the total labour supply.

Recent literature shows that researchers are either identifying new quantitative evidence or exploiting existing data in new ways. Consequently, new research is providing answers and posing questions to show that Latin America has much to add to debates on the economic and social impact of historical immigration.


To contact Blanca Sánchez-Alonso: blanca@ceu.es

The gender division of labour in early modern England: why study women’s work?

by Jane Whittle (University of Exeter) and Mark Hailwood (University of Bristol)

This article is published by The Economic History Review, and it is available on the EHS website.


Interior with an Old Woman at the Spinning Wheel. Available at Wikimedia Commons.

Here are ten reasons to know more about women’s work and read our article on ‘The gender division of labour in early modern England’. We have collected evidence about work tasks in order to quantify the differences between women’s and men’s work in the period from 1500-1700. This research allows us to dispel some common misconceptions.


  1. Men did most of the work didn’t they? This is unlikely: when both paid and unpaid work are counted, modern time-use studies show that women do the majority of work – 55% in rural areas of developing countries and 51% in modern industrial countries (UN Human Development Report 1995). There is no reason why the pattern would have been markedly different in preindustrial England.
  2. But we know about occupational structure in the past don’t we? Documents from the medieval period onwards describe men by their occupations, but women by their marital status. As a result we know quite a lot about male occupations but very little about women’s.
  3. But women worked in households headed by their father, husband or employer. Surely, if we know what these men did, then we know what women were doing too? Recent research undertaken by Amy Erickson, Alex Shepard and Jane Whittle shows that married women often had different occupations from their husbands. If we do not know what women did, we are missing an important part of the economy.
  4. But we have evidence of women working for wages. It shows that around 20% of agricultural workers were women, surely this demonstrates that women’s work wasn’t as important as men’s in the wider economy? This evidence only relates to labourers paid by the day, and before 1700 most agricultural labour was not carried out by day labourers, so this isn’t a very good measure. Our article shows that women carried out a third of agricultural work tasks, not 20%.
  5. But women mostly did domestic stuff – cooking, housework and childcare – didn’t they, and that type of work doesn’t change much across history? Women did do most cooking, housework and childcare, but our research suggests it did not take up the majority of their working time. These forms of work did change markedly over time. A third of early modern housework took place outside, and our data suggests the majority was done for other households, not as unpaid work for one’s own family.
  6. But women only worked in a narrow range of occupations, didn’t they? Our research shows that women worked in all the major sectors of the economy, but often doing slightly different tasks from men. They undertook a third of work tasks in agriculture, around half of the work in everyday commerce and almost two thirds of work tasks in textile production. But women also did forms of work we might not expect, such as shearing sheep, dealing in second-hand iron, and droving cattle.
  7. Women’s work was all low skilled wasn’t it? Women very rarely benefitted from formal apprenticeship in the way that men did, but that does not mean the tasks they undertook were unskilled. Women undertook many tasks, such as making lace and providing medical care, which required a great deal of skill.
  8. But this was all in the past, what relevance does it have now? Many gendered patterns of work are remarkably persistent over time. Analysis by the Office for National Statistics states that one third of the gender pay gap in modern Britain can be explained by men and women working in different occupations, and by the lower rates of pay for part-time work, which is more commonly undertaken by women than men.
  9. So nothing ever changes …? Well, not necessarily. In fact looking carefully at patterns of women’s work in the past shows some noticeable shifts over time. For instance, women worked as tailors and weavers in the medieval period and in the eighteenth century, but not in the sixteenth century.
  10. But we know why women work differently from men, particularly in preindustrial societies – isn’t it because they are less physically strong and all the child-bearing stuff? Physical strength does not explain why women did some physically taxing forms of work and not others (why they walked for miles carrying heavy loads on their heads rather than driving carts). And not all women were married or had children. Neither physical strength nor child-bearing can explain why women were excluded from tailoring between 1500 and 1650, but worked successfully and skilfully in this and other closely related crafts in other periods.

We now have data which allows us to look more carefully at these issues, but there is still much more to uncover.


To contact Jane Whittle: j.c.whittle@ex.ac.uk, Twitter: @jcwhittle1

To contact Mark Hailwood: m.hailwood@bristol.ac.uk, Twitter: @mark_hailwood

‘Stop-go’ policy and the restriction of post-war British house-building

by Peter Scott (Henley Business School, University of Reading) and James T. Walker (Henley Business School, University of Reading)

This article is published by The Economic History Review, and it is available on the EHS website.


A member of the Pioneer Corps assists a civilian building labourer in tiling a roof. Available at Wikimedia Commons.

Britain’s unusually high house price to income ratio plays an important role in reducing living standards and increasing “housing poverty”. This article shows that Britain’s housing shortage partly stems from deliberate long-term government policies aimed at restricting both public and private sector house-building. From the 1950s to the early 1980s, successive governments reduced housing starts as part of `stop-go’ macroeconomic policy, with major cumulative impacts.

This policy had its roots in the Second World War, when an influential coalition of Bank of England and Treasury officials pressed for a post-war policy of savage deflation, to restore sterling’s credibility and re-establish London as a major financial centre. John Maynard Keynes warned that prioritising international ‘obligations’ over the war-time commitment to build a fairer society would be repeating the 1920s gold standard error – though his direct influence ended with his untimely death. Deflationary policy proved politically impracticable in the short-term, as evidenced by Labour’s 1945 landslide election victory, though its supporters bided their time and were able to implement much of their agenda in the changed political climate of the 1950s.

The Conservatives’ 1951 election victory was based on a pledge to build 300,000 new homes per year. This was achieved in 1953 and building peaked at 340,000 completions in 1954. However, officials took advantage of the 1955-57 credit squeeze to press for severe cuts in housing investment. Municipal house-building was cut, while private house-building was depressed largely through restricting the growth of building society funds (by pressurising the building societies’ cartel to keep interest rates at such low levels that they were starved of mortgage funds). While the severity of policy varied over time, these restrictions were maintained almost continually until the early 1980s.

These restrictions were never formally announced and were hidden from Cabinet for much of this period. Meanwhile, given the political importance of housing, the Conservative government simultaneously proposed ever-larger housing targets (culminating in a 1964 election pledge to build 400,000 per annum). This created a perverse situation, whereby the government was spending substantial sums on highly publicised policies to increase demand for private housing (such as the 1959 House Purchase and Housing Act and the 1963 abolition of Schedule A income tax), while covertly reducing housing supply through restricting mortgage funding, limiting building firms’ access to credit, and reducing municipal housing investment. The following Labour government found itself drawn into a similarly restrictive housing policy, as part of its ill-fated commitment to avoid sterling devaluation (arguably based on misleading Treasury advice), while housing restrictions were also used as an instrument of macroeconomic stabilisation in the 1970s.

A 1974 Bank of England analysis found that this policy had created both an exaggerated housing cycle and a structural deficit (with house-building being held below market-clearing levels at all points in the cycle). This had in turn reduced the capacity of the housing market to respond to rising demand, by reducing builders’ land banks, building materials capacity, and building labour, which raised house prices while lowering productivity and technical progress. There is also evidence of “learning effects” by house-builders, who avoided expanding their activities during cyclical upturns, as they correctly perceived that tighter government restrictions might be imposed before their houses were ready to sell. These pressures fuelled house price inflation, both directly, and because housing became increasingly regarded as a hedge against inflation.


Figure 1: Capital formation in dwellings, as percentage of total capital formation, and housing completions per thousand families, private houses and all houses, 1924-38 and 1954-79



British house-building during this era compared unfavourably to inter-war levels, as shown in Figure 1. Moreover, private house-building was even more depressed than total housing – as the Treasury found it easier to covertly restrict private housing than to reduce municipal building starts, where policy was more open to Cabinet and public scrutiny. British gross domestic fixed capital investment in housing was also very low relative to other European nations. Our time-series econometric analysis for 1955-1979 corroborates the ‘success’ of the restrictions and also shows the predicted asymmetric impact in ‘stop’ and ‘go’ phases of policy. This is an important finding – as stop-go policy is often examined in terms of the volatility of the variable under examination – based on the unrealistic assumption that industry would fail to realise that demand upturns might be rapidly terminated by the re-imposition of controls.

Housing restriction policy has persisting consequences. Additions to the housing stock were depressed for several decades, while the inflationary-hedge benefits of house purchase became a self-fulfilling prophecy. Meanwhile restrictive planning policy (which was substantially intensified in the 1950s, as a further measure of housing restriction) has proved difficult to reverse. Average house-price to income ratios have thus continued the upward trend established in this era, currently excluding a substantial and growing proportion of the population from owner-occupation.


To contact Peter Scott: p.m.scott@henley.ac.uk

To contact James T. Walker: j.t.walker@henley.ac.uk