Over the last century, European historiography has debated whether industrialisation brought about an improvement in working class living standards. Multiple demographic, economic, anthropometric and wellbeing indicators have been examined in this regard, but it was Eric Hobsbawm (1957) who, in the late 1950s, incorporated food consumption patterns into the analysis.
Between the mid-19th century and the first half of the 20th, the diet of European populations underwent radical changes. Caloric intake increased significantly, and cereals were to a large extent replaced by animal proteins and fats, as consumption of meat, milk, eggs and fish rose substantially. Popkin (1993) referred to this transformation as the ‘nutritional transition’.
These dietary changes were driven, inter alia, by the evolution of income levels, which raises the possibility that significant inequalities between different social groups ensued. Dietary inequalities between different social groups are a key component in the analysis of inequality and living standards; they directly affect mortality, life expectancy, and morbidity. However, this hypothesis remains unproven, as historians are still searching for adequate sources and methods with which to measure the effects of dietary changes on living standards.
This study contributes to the debate by analysing a relatively untapped source: hospital diets. We have analysed the diet of psychiatric patients and members of staff in the main hospital of the city of Valencia (Spain) between 1852 and 1923. The diet of patients depended on their social status and the amounts they paid for their upkeep. ‘Poor psychiatric patients’ and abandoned children, who paid no fee, were fed according to hospital regulations, whereas ‘well-off psychiatric patients’ paid a daily fee in exchange for a richer and more varied diet. There were also differences among members of staff, with nuns receiving a richer diet than other personnel (launderers, nurses and wet-nurses). We think that our source broadly reflects dietary patterns of the Spanish population and the effect of income levels thereon.
Figure 2 illustrates some of these differences in terms of animal-based caloric intake in each of the groups under study. Three population groups can be clearly distinguished: ‘well-off psychiatric patients’ and nuns, whose diet already presented some of the features of the nutritional transition by the mid-19th century, including fewer cereals and a meat-rich diet, as well as the inclusion of new products, such as olive oil, milk, eggs and fish; hospital staff, whose diet was rich in calories, to compensate for their demanding jobs, but still traditional in structure, being largely based on cereals, legumes, meat and wine; and, finally, ‘poor psychiatric patients’ and abandoned children, whose diet was poorer and which, by the 1920s, had barely joined the trends that characterised the nutritional transition.
In conclusion, the nutritional transition was not a homogeneous process affecting all diets at the same time or at the same pace. On the contrary, it was a process marked by social difference, and the progress of dietary changes was largely determined by social factors. By the mid-19th century, the diet structure of well-to-do social groups resembled diets that were more characteristic of the 1930s, while less favoured and intermediate social groups had to wait until the early 20th century before they could incorporate new foodstuffs into their diet. As this sequence clearly indicates, less favoured social groups always lagged behind.
Medina-Albaladejo, F. J. and Calatayud, S., “Unequal access to food during the nutritional transition: evidence from Mediterranean Spain”, Economic History Review, (forthcoming).
Hobsbawm, E. J., “The British Standard of Living, 1790-1850”, Economic History Review, 2nd ser., X (1957), pp. 46-68.
Popkin, B. M., “Nutritional Patterns and Transitions”, Population and Development Review, 19, 1 (1993), pp. 138-157.
This article, written during the COVID-19 pandemic, provides a general introduction to the long-term history of infectious diseases, epidemics and the early phases of the spectacular long-term improvements in life expectancy since 1750, primarily with reference to English history. The story is a fundamentally optimistic one. In 2019 global life expectancy was approaching 73 years. In 1800 it was probably about 30. To understand the origins of this transition, we have to look at the historical sequence by which so many causes of premature death have been vanquished over time. In England that story begins much earlier than often supposed, in the years around 1600. The first two ‘victories’ were over famine and plague. However, economic changes with negative influences on mortality meant that, despite this, life expectancies were either falling or stable between the late sixteenth and mid-eighteenth centuries. The late eighteenth and early nineteenth centuries saw major declines in deaths from smallpox, malaria and typhus and the beginnings of the long-run increases in life expectancy. The period also saw urban areas become capable of demographic growth without a constant stream of migrants from the countryside: a necessary precondition for the global urbanization of the last two centuries and for modern economic growth. Since 1840 the highest national life expectancy globally has increased by three years in every decade.
In a recent article[i] we reviewed research on preindustrial epidemics. We focused on large-scale, lethal events: those that have a deeper and more long-lasting impact on economy and society, thereby producing the historical documentation that allows for systematic study. Almost all these lethal pandemics have been caused by plague: from the “Justinian’s plague” (540-41) and the Black Death (1347-52) to the last great European plagues of the seventeenth century (1623-32 and 1647-57). These epidemics were devastating. The Black Death killed between 35 and 60 per cent of the population of Europe and the Mediterranean (approximately 50 million victims).
These epidemics also had large-scale and persistent consequences. The Black Death might have positively influenced the development of Europe, even playing a role in the Great Divergence.[ii] Conversely, it is arguable that seventeenth-century plagues in Southern Europe (especially Italy) precipitated the Little Divergence.[iii] Clearly, epidemics can have asymmetric economic effects. The Black Death, for example, had negative long-term consequences for relatively under-populated areas of Europe, such as Spain or Ireland.[iv] More generally, the effects of an epidemic depend upon the context in which it happens. Below we focus on how institutions shaped the spread and the consequences of plagues.
Preindustrial epidemics and institutions
In preindustrial times, as today, institutions played a crucial role in determining the final intensity of epidemics. When the Black Death appeared, European societies were unprepared for the threat. But, when it became apparent that plague was a recurrent scourge, institutional adaptation commenced, a typical human reaction to a changing biological environment. From the late fourteenth century permanent health boards were established, able to take quicker action than the ad-hoc commissions created during the emergency of 1348. These boards constantly monitored the international situation and provided the early warning necessary for implementing measures to contain epidemics.[v] From the late fourteenth century, quarantine procedures for suspected cases were developed, and in 1423 Venice built the first permanent lazzaretto (isolation hospital) on a lagoon island. By the early sixteenth century, at least in Italy, central and local government had implemented a broad range of anti-plague policies, including health controls at river and sea harbours, mountain passes, and political boundaries. Within each Italian state, infected communities or territories were isolated, and human contact was limited by quarantines.[vi] These, and other instruments developed against the plague, are the direct ancestors of those currently employed to contain COVID-19. However, such policies were not always successful: in 1629, for example, plague entered Northern Italy as infected armies from France and Germany arrived to fight in the War of the Mantuan Succession. Nobody has ever been able to quarantine an enemy army.
It is no accident that these policies were first developed in Italian trading cities which, because of their commercial networks, had good reason to fear infection. Such policies were quickly imitated in Spain and France.[vii] England, in particular, “was unlike many other European countries in having no public precautions against plague at all before 1518”.[viii] Even in the seventeenth century, England was still trying to introduce institutions that had long since been consolidated in Mediterranean Europe.
The development of institutions and procedures to fight plague has been extensively researched. Nonetheless, other aspects of preindustrial epidemics are less well known: for example, how institutions tended to shift mortality towards specific socio-economic groups, especially the poor. Once doctors and health officials noticed that plague mortality was higher in the poorest parts of the city, they began to see the poor themselves as being responsible for the spread of the infection. As a result, during the early modern period their presence in cities was increasingly resented,[ix] and as a precautionary measure, vagrants and beggars were expelled. The death of many poor people was even regarded by some as one of the few positive consequences of plague. The friar Antero Maria di San Bonaventura wrote immediately after the 1656-57 plague in Genoa:
“What would the world be, if God did not sometimes touch it with the plague? How could he feed so many people? God would have to create new worlds, merely destined to provision this one […]. Genoa had grown so much that it no longer seemed a big city, but an anthill. You could neither take a walk without knocking into one another, nor was it possible to pray in church on account of the multitude of the poor […]. Thus it is necessary to confess that the contagion is the effect of divine providence, for the good governance of the universe”.[x]
While it seems certain that the marked socio-economic gradient of plague mortality was partly due to the action of health institutions, there is no clear evidence that officials were actively trying to kill the poor by infection. Sometimes, the anti-poor behaviour of the elites might have backfired. Our initial research on the 1630 epidemic in the Italian city of Carmagnola suggests that while poor households were more likely to be interned, whole households at a time, in the lazzaretto at the mere suspicion of plague, this might have reduced, not increased, their individual risk of death compared to richer strata. Possibly, this was the combined result of effective isolation of the diseased, assured provisioning of victuals, basic care, and forced within-household distancing.[xi]
Different health treatment for rich and poor, and economic elites making wrong and self-harming decisions: it would be nice if, occasionally, we learned something from history!
[i] Alfani, G. and T. Murphy. “Plague and Lethal Epidemics in the Pre-Industrial World.” Journal of Economic History 77 (1), 2017, 314–343.
[ii] Clark, G. A Farewell to Alms: A Brief Economic History of the World. Princeton: Princeton University Press, 2007; Broadberry, S. Accounting for the Great Divergence, LSE Economic History Working Papers No. 184, 2013.
[iii] Alfani, G. “Plague in Seventeenth Century Europe and the Decline of Italy: An Epidemiological Hypothesis.” European Review of Economic History 17 (3), 2013, 408–430; Alfani, G. and M. Percoco. “Plague and Long-Term Development: the Lasting Effects of the 1629-30 Epidemic on the Italian Cities.” Economic History Review 72 (4), 2019, 1175–1201.
[v] Cipolla, C.M. Public Health and the Medical Profession in the Renaissance. Cambridge: CUP, 1976; Cohn, S.K. Cultures of Plague: Medical Thought at the End of the Renaissance. Oxford: OUP, 2009; Alfani, G. Calamities and the Economy in Renaissance Italy: The Grand Tour of the Horsemen of the Apocalypse. Basingstoke: Palgrave, 2013.
[vi] Alfani, G. Calamities and the Economy, cit.; Cipolla, C.M, Public Health and the Medical Profession, cit.; Henderson, J., Florence Under Siege: Surviving Plague in an Early Modern City, Yale University Press, 2019.
[vii] Cipolla, C.M. Public Health and the Medical Profession, cit.
[viii] Slack, P. The Impact of Plague in Tudor and Stuart England. London: Routledge, 1985, 201–26.
[ix] Pullan, B. “Plague and Perceptions of the Poor in Early Modern Italy.” In T. Ranger and P. Slack (eds.), Epidemics and Ideas. Essays on the Historical Perception of Pestilence. Cambridge: CUP, 1992, 101-23; Alfani, G., Calamities and the Economy.
This piece is the result of a collaboration between the Economic History Review, the Journal of Economic History, Explorations in Economic History and the European Review of Economic History. More details and special thanks below. Part B can be found here
As the world grapples with a pandemic, informed views based on facts and evidence have become all the more important. Economic history is a uniquely well-suited discipline to provide insights into the costs and consequences of rare events, such as pandemics, as it combines the tools of an economist with the long perspective and attention to context of historians. The editors of the main journals in economic history have thus gathered a selection of recently published articles on epidemics, disease and public health, generously made available by publishers to the public free of charge, so that we may continue to learn from the decisions of humans and policy makers confronting earlier episodes of widespread disease and pandemics.
Generations of economic historians have studied disease and its impact on societies across history. However, as the discipline has continued to evolve with improvements in both data and methods, researchers have uncovered new evidence about episodes from the distant past, such as the Black Death, as well as more recent global pandemics, such as the Spanish Influenza of 1918. We begin with a recent overview of scholarship on the history of premodern epidemics, and group the remaining articles thematically, into two short reading lists. The first consists of research exploring the impact of diseases in the most direct sense: the patterns of mortality they produce. The second group of articles explores the longer-term consequences of diseases for people’s health later in life.
Patterns of Mortality
The rich and complex body of historical work on epidemics is carefully surveyed by Guido Alfani and Tommy Murphy, who provide an excellent guide to the economic, social, and demographic impact of plagues in human history: ‘Plague and Lethal Epidemics in the Pre-Industrial World’. The Journal of Economic History 77, no. 1 (2017): 314–43. https://doi.org/10.1017/S0022050717000092. The impact of epidemics varies over time, and few studies have shown this so clearly as the penetrating article by Neil Cummins, Morgan Kelly and Cormac Ó Gráda, who provide a finely-detailed map of how the plague evolved in 16th- and 17th-century London to reveal who was most heavily burdened by this contagion: ‘Living Standards and Plague in London, 1560–1665’. Economic History Review 69, no. 1 (2016): 3-34. https://dx.doi.org/10.1111/ehr.12098. Plagues shaped the history of nations and, indeed, global history, but we must not assume that their impact was always as devastating as we might expect: in a classic piece of historical detective work, Ann Carlos and Frank Lewis show that mortality among native Americans in the Hudson Bay area was much lower than historians had suggested: ‘Smallpox and Native American Mortality: The 1780s Epidemic in the Hudson Bay Region’. Explorations in Economic History 49, no. 3 (2012): 277-90. https://doi.org/10.1016/j.eeh.2012.04.003
The effects of disease reflect a complex interaction of individual and social factors. A paper by Karen Clay, Joshua Lewis and Edson Severnini explains how the combination of air pollution and influenza was particularly deadly in the 1918 epidemic, and that cities in the US which were heavy users of coal had all-age mortality rates that were approximately 10 per cent higher than those with lower rates of coal use: ‘Pollution, Infectious Disease, and Mortality: Evidence from the 1918 Spanish Influenza Pandemic’. The Journal of Economic History 78, no. 4 (2018): 1179–1209. https://doi.org/10.1017/S002205071800058X. A remarkable analysis of how one of the great killers, smallpox, evolved during the 18th century is provided by Romola Davenport, Leonard Schwarz and Jeremy Boulton, who concluded that it was a change in the transmissibility of the disease itself that mattered most for its impact: ‘The Decline of Adult Smallpox in Eighteenth-century London’. Economic History Review 64, no. 4 (2011): 1289-314. https://dx.doi.org/10.1111/j.1468-0289.2011.00599.x. The question of which sections of society experienced the heaviest burden of sickness during disease outbreaks has long troubled historians and epidemiologists. Outsiders and immigrants have often been blamed for disease outbreaks. Jonathan Pritchett and Insan Tunali show that poverty and immunity, not immigration, explain who was infected during the Yellow Fever epidemic in 1853 New Orleans: ‘Strangers’ Disease: Determinants of Yellow Fever Mortality during the New Orleans Epidemic of 1853’. Explorations in Economic History 32, no. 4 (1995): 517. https://doi.org/10.1006/exeh.1995.1022
The Long Run Consequences of Disease
The way epidemics affect families is complex. John Parman wrestles with one of the most difficult issues – how parents respond to the harms caused by exposure to an epidemic. Parman shows that parents chose to concentrate resources on the children who were not affected by exposure to influenza in 1918, which reinforced the differences between their children: ‘Childhood Health and Sibling Outcomes: Nurture Reinforcing Nature during the 1918 Influenza Pandemic’, Explorations in Economic History 58 (2015): 22-43. https://doi.org/10.1016/j.eeh.2015.07.002. Martin Saavedra addresses a related question: how did exposure to disease in early childhood affect life in the long run? Using late 19th-century census data from the US, Saavedra shows that children of immigrants who were exposed to yellow fever in the womb or early infancy did less well in later life than their peers, because they were only able to secure lower-paid employment: ‘Early-life Disease Exposure and Occupational Status: The Impact of Yellow Fever during the 19th Century’. Explorations in Economic History 64, no. C (2017): 62-81. https://doi.org/10.1016/j.eeh.2017.01.003. One of the great advantages of historical research is its ability to reveal how the experience of disease over a lifetime generates cumulative harms. Javier Birchenall’s extraordinary paper shows how soldiers’ exposure to disease during the American Civil War increased the probability they would contract tuberculosis later in life: ‘Airborne Diseases: Tuberculosis in the Union Army’. Explorations in Economic History 48, no. 2 (2011): 325-42. https://doi.org/10.1016/j.eeh.2011.01.004
A key challenge in economic history is to understand the macroeconomics of interwar Britain. This was a time of high unemployment, depressed economic activity and significant macroeconomic volatility. Economic historians have identified a number of causes, from the demise of the old staple industries to the shortening of the working week (Richardson, 1965; Broadberry, 1986).
Yet the historiography has not explored an important source of modern economic fluctuations: uncertainty (Bloom, 2009; Jurado et al., 2015; Baker et al., 2016). A fog of uncertainty may well have hung over Britain between the wars given the volume of extraordinary events. Economically, there was the return to and break from the gold standard, the fiscal aftermath of the First World War and the slide to protection. Politically, there were snap general elections in 1923 and 1931, hung parliaments following the elections of 1923 and 1929 and national governments during the 1930s.
In new research (Lennard, forthcoming EcHR), I investigated how economic policy uncertainty affected the British economy in the interwar period. Because uncertainty is a nebulous concept, the first step was to measure it. To do so, I constructed an index based on the frequency of articles reporting uncertainty over economic policy in the Daily Mail, the Guardian and The Times (Figure 1).
Figure 1. New economic policy uncertainty index for the United Kingdom, 1920-38 (average 1920-38 = 100)
The new index is plotted in Figure 1, which shows that there was significant variation in uncertainty in the United Kingdom between 1920 and 1938. Uncertainty spiked around recurring events in the calendar such as budget announcements and general elections but also in response to more specific factors such as the General Strike, looming inter-allied debt payments and changes in the likelihood of war.
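For readers curious about the mechanics, the sketch below shows how a newspaper-based index of this kind can be assembled in Python, in the spirit of Baker et al. (2016). The file name, the column names (`n_epu`, `n_total`) and the scaling choices are illustrative assumptions, not the exact procedure used in the paper.

```python
import pandas as pd

# Hypothetical monthly counts per newspaper with columns:
#   paper, month, n_epu (articles discussing economic policy uncertainty),
#   n_total (all articles published that month)
counts = pd.read_csv("newspaper_counts.csv", parse_dates=["month"])

# Use the share of qualifying articles so that changes in the size of
# each newspaper do not drive the index
counts["share"] = counts["n_epu"] / counts["n_total"]

# Standardise each paper's series by its standard deviation before
# averaging, so that no single newspaper dominates the composite
counts["scaled"] = counts.groupby("paper")["share"].transform(lambda s: s / s.std())

# Average across papers, then normalise to a mean of 100 over 1920-38
index = counts.groupby("month")["scaled"].mean()
index = 100 * index / index.loc["1920":"1938"].mean()
print(index.head())
```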
The second step was to account for the macroeconomic effects of uncertainty. Using a vector autoregression, I found that uncertainty was associated with reduced credit (a ‘financial frictions effect’), fewer imports (a ‘magnification effect’), lost jobs, and lower economic activity. Overall, economic policy uncertainty accounted for more than 20 per cent of macroeconomic volatility.
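To illustrate the second step, here is a minimal sketch of a small VAR estimated with `statsmodels`. The variable list, the Cholesky ordering (uncertainty first) and the lag selection are assumptions made for the example, not the specification estimated in the paper.

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.api import VAR

# Hypothetical monthly dataset: the uncertainty index plus macro series
# (credit, imports, employment, output), already transformed (e.g. logs)
data = pd.read_csv("interwar_macro.csv", index_col="month", parse_dates=True)
data = data[["epu_index", "credit", "imports", "employment", "output"]]

# Ordering the uncertainty index first is one common identification choice
model = VAR(data)
results = model.fit(maxlags=12, ic="aic")

# Impulse responses of each variable to an uncertainty shock over 24 months
irf = results.irf(24)
irf.plot(impulse="epu_index")
plt.show()

# Share of forecast-error variance attributable to each shock
fevd = results.fevd(24)
fevd.summary()
```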
A wealth of narrative evidence backs up the significance of uncertainty shocks. At the microeconomic level, there were regular reports of disruption to a number of industries, from pianos to textiles. In the car industry, for example, Sir William Letts, chairman and managing director of Willys Overland Crossley, told shareholders at the annual general meeting that uncertainty was ‘crippling business and holding back activity and energy in our great industry’ (Guardian, 25 Feb. 1930, p. 6).
At the macroeconomic level, there were frequent descriptions of depressed employment and output. In 1920, for example, the Daily Mail (30 Dec. 1920, p. 4) reported that, ‘among the main causes of unemployment at the present moment […] is uncertainty in the business world’. In 1930 Sir William Morris wrote that Britain was ‘floundering in a sea of uncertainty […] the result being colossal unemployment’ (Daily Mail, 29 Aug. 1930, p. 8). In 1932 the Economist (30 Jan. 1932, p. 1) summarised that ‘business this year has been overshadowed by the economic and political uncertainty at home and abroad’.
In summary, uncertainty has been a forgotten, albeit incomplete, explanation for macroeconomic instability in interwar Britain. How uncertainty affected economies in other historical contexts is an important and open question for future research.
My Tawney lecture reassessed the relationship between slavery and industrial capitalism in both Britain and the United States. The thesis expounded by Eric Williams held that slavery and the slave trade were vital for the expansion of British industry and commerce during the 18th century but were no longer needed by the 19th. My lecture confirmed both parts of the Williams thesis: the 18th-century Atlantic economy was dominated by sugar, which required slave labor; but after 1815, British manufactured goods found diverse new international markets that did not need captive colonial buyers, naval protection, or slavery. Long-distance trade became safer and cheaper, as freight rates fell, and international financial infrastructure developed. Figure 1 (below) shows that the slave economies absorbed the majority of British cotton goods during the 18th century, but lost their centrality during the 19th, supplanted by a diverse array of global destinations.
I argued that this formulation applies with equal force to the upstart economy across the Atlantic. The mainland North American colonies were intimately connected to the larger slave-based imperial economy. The northern colonies, holding relatively few slaves themselves, were nonetheless beneficiaries of the trading regime, protected against outsiders by British naval superiority. Between 1768 and 1772, the British West Indies were the largest single market for commodity exports from New England and the Middle Atlantic, dominating sales of wood products, fish and meat, and accounting for significant shares of whale products, grains and grain products. The prominence of slave-based commerce explains the arresting connections reported by C. S. Wilder, associating early American universities with slavery. Thus, part one of the Williams thesis also holds for 18th-century colonial America.
Insurgent scholars known as New Historians of Capitalism argue that slavery, specifically slave-grown cotton, was critical for the rise of the U.S. economy in the 19th century. In contrast, I argued that although industrial capitalism needed cheap cotton, cheap cotton did not need slavery. Unlike sugar, cotton required no large investments of fixed capital and could be cultivated efficiently at any scale, in locations that would have been settled by free farmers in the absence of slavery. Early mainland cotton growers deployed slave labour not because of its productivity or aptness for the new crop, but because they were already slave owners, searching for profitable alternatives to tobacco, indigo, and other declining crops. Slavery was, in effect, a ‘pre-existing condition’ for the 19th-century American South.
To be sure, U.S. cotton did indeed rise ‘on the backs of slaves’, and no cliometric counterfactual can gainsay this brute fact of history. But it is doubtful that this brutal system served the long-run interests of textile producers in Lancashire and New England, as many of them recognized at the time. As argued here, the slave South underperformed as a world cotton supplier, for three distinct though related reasons: in 1807 the region closed the African slave trade, yet failed to recruit free migrants, making labour supply inelastic; slave owners neglected transportation infrastructure, leaving large sections of potential cotton land on the margins of commercial agriculture; and because of the fixed-cost character of slavery, even large plantations aimed at self-sufficiency in foodstuffs, limiting the region’s overall degree of market specialization. The best evidence that slavery was not essential for cotton supply is demonstrated by what happened when slavery ended. After war and emancipation, merchants and railroads flooded into the southeast, enticing previously isolated farm areas into the cotton economy. Production in plantation areas gradually recovered, but the biggest source of new cotton came from white farmers in the Piedmont. When the dust settled in the 1880s, India, Egypt, and slave-using Brazil had retreated from world markets, and the price of cotton in Liverpool returned to its antebellum level. See Figure 2.
The New Historians of Capitalism also exaggerate the importance of the slave South for accelerated U.S. growth. The Cotton Staple Growth hypothesis advanced by Douglass North was decisively refuted by economic historians a generation ago. The South was not a major market for western foodstuffs and consumed only a small and declining share of northern manufactures. International and interregional financial connections were undeniably important, but thriving capital markets in northeastern cities clearly predated the rise of cotton, and connections to slavery were remote at best. Investments in western canals and railroads were in fact larger, accentuating the expansion of commerce along East-West lines.
It would be excessive to claim that Anglo-American industrial and financial interests recognized the growing dysfunction of the slave South, and in response fostered or encouraged the antislavery campaigns that culminated in the Civil War. A more appropriate conclusion is that because of profound changes in technologies and global economic structures, slavery — though still highly profitable to its practitioners — no longer seemed essential for the capitalist economies of the 19th-century world.
A podcast of Şevket Pamuk’s Tawney lecture can be found here.
New Map of Turkey in Europe, Divided into its Provinces, 1801. Available at Wikimedia Commons.
The Tawney lecture, based on my recent book, Uneven Centuries: Economic Development of Turkey since 1820 (Princeton University Press, 2018), examined the economic development of Turkey from a comparative global perspective. Using GDP per capita and other data, the book showed that Turkey’s record in economic growth and human development since 1820 has been close to the world average and a little above the average for developing countries. The early focus of the lecture was on the proximate causes (average rates of investment, below-average rates of schooling, low rates of total productivity growth, and the low technology content of production), which provide important insights into why improvements in GDP per capita were not higher. For more fundamental explanations I emphasized the role of institutions and institutional change. Since the nineteenth century, Turkey’s formal economic institutions were influenced by international rules which did not always support economic development. Turkey’s elites also made extensive changes to formal political and economic institutions. However, these institutions provide only part of the story: the direction of institutional change also depended on the political order and the degree of understanding between different groups and their elites. When political institutions could not manage the recurring tensions and cleavages between the different elites, economic outcomes suffered.
There are a number of ways in which my study reflects some of the key trends in the historiography in recent decades. For example, until fairly recently, economic historians focused almost exclusively on the developed economies of western Europe, North America, and Japan. Lately, however, economic historians have been changing their focus to developing economies. Moreover, as part of this reorientation, considerable effort has been expended on constructing long-run economic series, especially GDP and GDP per capita, as well as series on health and education. In this context, I have constructed long-run series for the area within the present-day borders of Turkey. These series rely mostly on official estimates for the period after 1923 and make use of a variety of evidence for the Ottoman era, including wages, tax revenues and foreign trade series. In common with the series for other developing countries, many of my calculations involving Turkey are subject to larger margins of error than similar series for developed countries. Nonetheless, they provide insights into the developmental experience of Turkey and other developing countries that would not have been possible two or three decades ago. Finally, in recent years, economists and economic historians have made an important distinction between the proximate causes and the deeper determinants of economic development. While literature on the proximate causes of development focuses on investment, accumulation of inputs, technology, and productivity, discussions of the deeper causes consider the broader social, political, and institutional environment. Both sets of arguments are utilized in my book.
I argue that an interest-based explanation can address both the causes of long-run economic growth and its limits. Turkey’s formal economic institutions and economic policies underwent extensive change during the last two centuries. In each of the four historical periods I define, Turkey’s economic institutions and policies were influenced by international or global rules which were enforced either by the leading global powers or, more recently, by international agencies. Additionally, since the nineteenth century, elites in Turkey made extensive changes to formal political institutions. In response to European military and economic advances, the Ottoman elites adopted a programme of institutional changes that mirrored European developments; this programme continued during the twentieth century. Such fundamental changes helped foster significant increases in per capita income as well as major improvements in health and education.
But it is also necessary to examine how these new formal institutions interacted with the process of economic change (for example, changing social structure and variations in the distribution of power and expectations) in order to understand the scale and characteristics of growth that the new institutional configurations generated.
These interactions were complex. It is not easy to ascribe the outcomes created in Turkey during these two centuries to a single cause. Nonetheless, it is safe to state that in each of the four periods, the successful development of new institutions depended on the state making use of the different powers and capacities of the various elites. More generally, economic outcomes depended closely on the nature of the political order and the degree of understanding between different groups in society and the elites that led them. However, one of the more important characteristics of Turkey’s social structure has been the recurrence of tensions and cleavages between its elites. While they often appeared to be based on culture, these tensions overlapped with competing economic interests which were, in turn, shaped by the economic institutions and policies generated by the global economic system. When political institutions could not manage these tensions well, Turkey’s economic outcomes remained close to the world average.
by Anna Missiaia and Kersten Enflo (Lund University)
This research is due to be published in the Economic History Review and is currently available on Early View.
For a long time, scholars have thought about regional inequality merely as a by-product of modern economic growth: following a Kuznets-style interpretation, the front-running regions increase their income levels and regional inequality during industrialization, and it is only when the other regions catch up that overall regional inequality decreases and completes the inverted-U shaped pattern. But early empirical research on this theme was largely focused on the 20th century, ignoring the industrial take-off of many countries (Williamson, 1965). More recent empirical studies have pushed the temporal boundary back to the mid-19th century, finding that inequality in regional GDP was already high at the outset of modern industrialization (see for instance Rosés et al., 2010 on Spain and Felice, 2018 on Italy).
The main constraint on taking the estimates well into the pre-industrial period is the availability of suitable regional sources. The exceptional quality of Swedish sources allowed us, for the first time, to estimate a dataset of regional GDP for a European economy going back to the 16th century (Enflo and Missiaia, 2018). The estimates used here for 1571 are largely based on a one-off tax proportional to yearly production: the Swedish Crown imposed this tax on all Swedish citizens in order to pay a ransom for the strategic Älvsborg castle, which had just been conquered by Denmark. For the period 1750-1850, the estimates rely on standard population censuses. By connecting the new series to the existing series from 1860 onwards by Enflo et al. (2014), we obtain the longest regional GDP series available for any country.
We find that inequality increased dramatically between 1571 and 1750 and remained high until the mid-19th century. Thereafter, it declined during the modern industrialization of the country (Figure 1). Our results challenge the traditional view that regional divergence can only originate during an industrial take-off.
Figure 1. Coefficient of variation of GDP per capita across Swedish counties, 1571-2010.
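As a minimal illustration of the dispersion measure plotted in Figure 1, the following sketch computes an (unweighted) coefficient of variation of county GDP per capita from a long-format panel; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical long-format panel with columns: county, year, gdp_pc
panel = pd.read_csv("county_gdp.csv")

# Coefficient of variation (std / mean) of GDP per capita across counties,
# computed separately for each benchmark year
cv = panel.groupby("year")["gdp_pc"].apply(lambda x: x.std() / x.mean())
print(cv)
```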
Figure 2 shows the relative disparities in four benchmark years. While the country appeared relatively equal in 1571, between 1750 and 1850 both the mining districts in central and northern Sweden and the port cities of Stockholm and Gothenburg pulled ahead of the rest.
Figure 2. The relative evolution of GDP per capita, 1571-1850 (Sweden=100).
The second part of the paper is devoted to the study of the drivers of pre-industrial regional inequality. Decomposing the Theil index for GDP per worker, we show that regional inequality was driven by structural change, meaning that regions diverged because they specialized in different sectors. A handful of regions specialized in either early manufacturing or in mining, both with a much higher productivity per worker compared to agriculture.
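The sketch below shows one standard way to carry out such a decomposition: a worker-weighted Theil T index over region-sector cells, split into between-sector and within-sector components. The input file and column names are hypothetical, and the exact variant used in the paper may differ.

```python
import numpy as np
import pandas as pd

# Hypothetical region-by-sector cells with columns: region, sector, gdp, workers
# (all cells assumed to have strictly positive GDP and employment)
cells = pd.read_csv("region_sector.csv")

def theil_T(gdp, workers):
    """Worker-weighted Theil T index of GDP per worker over a set of cells."""
    y = gdp / gdp.sum()          # GDP shares
    n = workers / workers.sum()  # labour shares
    return float(np.sum(y * np.log(y / n)))

# Overall inequality across region-sector cells
total = theil_T(cells["gdp"], cells["workers"])

# Between-sector component: inequality among sector aggregates
by_sector = cells.groupby("sector")[["gdp", "workers"]].sum()
between = theil_T(by_sector["gdp"], by_sector["workers"])

# Within-sector component: GDP-share-weighted Theil within each sector
within = sum(
    (grp["gdp"].sum() / cells["gdp"].sum()) * theil_T(grp["gdp"], grp["workers"])
    for _, grp in cells.groupby("sector")
)

# By construction, total = between + within
print(total, between, within)
```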
To explain this different trajectory, we use a theoretical framework introduced by Strulik and Weisdorf (2008) in the context of the British Industrial Revolution: in regions with a higher share of GDP in agriculture, technological advancements lead to productivity improvements but also to a proportional increase in population, impeding the growth in GDP per capita as in a classic Malthusian framework. Regions with a higher share of GDP in industry, on the other hand, experienced limited population growth due to the increasing relative price of children, leading to a higher level of GDP per capita. Regional inequality in this framework arises from a different role of the Malthusian mechanism in the two sectors.
Our work speaks to a growing literature on the origin of regional divergence and represents the first effort to perform this type of analysis before the 19th century.
by Latika Chaudhary (Naval Postgraduate School) and Anand V. Swamy (Williams College)
This research is due to be published in the Economic History Review and is currently available on Early View.
In the late 19th century the British-Indian government (the Raj) became preoccupied with default on debt and the consequent transfer of land in rural India. In many regions Raj officials made the following argument: British rule had created or clarified individual property rights in land, which had for the first time made land available as collateral for debt. Consequently, peasants could now borrow up to the full value of their land. The Raj had also replaced informal village-based forms of dispute resolution with a formal legal system operating outside the village, which favored the lender over the borrower. Peasants were spendthrift and naïve, and unable to negotiate the new formal courts created by British rule, whereas lenders were predatory and legally savvy. Borrowers were frequently defaulting, and land was rapidly passing from long-standing resident peasants to professional moneylenders who were often either immigrant, of another religion, or sometimes both. This would lead to social unrest and threaten British rule. To preserve British rule it was essential that one of the links in the chain be broken, even if this meant abandoning cherished notions of sanctity of property and contract.
The Punjab Land Alienation Act (PLAA) of 1900 was the most ambitious policy intervention motivated by this thinking. It sought to prevent professional moneylenders from acquiring the property of traditional landowners. To this end it banned, except under some conditions, the permanent transfer of land from an owner belonging to an ‘agricultural tribe’ to a buyer or creditor who was not from this tribe. Moreover, a lender who was not from an agricultural tribe could no longer seize the land of a defaulting debtor who was from an agricultural tribe.
The PLAA made direct restrictions on the transfer of land a respectable part of the policy apparatus of the Raj, and its influence persists to the present day. There is a substantial literature on the emergence of the PLAA, yet there is no econometric work on two basic questions regarding its impact. First, did the PLAA reduce the availability of mortgage-backed credit? Or were borrowers and lenders able to use various devices to evade the Act, thereby neutralizing it? Second, if less credit was available, what were the effects on agricultural outcomes and productivity? We use panel data methods to address these questions for the first time, so far as we know.
Our work provides evidence regarding an unusual policy experiment that is relevant to a hypothesis of broad interest. It is often argued that ‘clean titling’ of assets can facilitate their use as collateral, increasing access to credit, leading to more investment and faster growth. Informed by this hypothesis, many studies estimate the effects of titling on credit and other outcomes, but they usually pertain to making assets more usable as collateral. The PLAA went in the opposite direction – it reduced the “collateralizability” of land which should have reduced investment and growth, based on the argument we have described. We investigate whether it did.
To identify the effects of the PLAA, we assembled a panel dataset on 25 districts in Punjab from 1890 to 1910. Our dataset contains information on mortgages and sales of land, economic outcomes, such as acreage and ownership of cattle, and control variables like rainfall and population. Because the PLAA targeted professional moneylenders, it should have reduced mortgage-backed credit more in places where they were bigger players in the credit market. Hence, we interact a measure of the importance of professional (that is, non-agricultural) moneylenders in the mortgage market with an indicator variable for the introduction of the PLAA, which takes the value of 1 from 1900 onward. As expected, we find that the PLAA contracted credit more in places where professional moneylenders played a larger role, compared to districts with no professional moneylenders. The PLAA reduced mortgage-backed credit by 48 percentage points more at the 25th percentile of our measure of moneylender importance and by 61 percentage points more at the 75th percentile.
However, this decrease of mortgage-backed credit in professional moneylender-dominated areas did not lead to lower acreage or less ownership of cattle. In short, the PLAA affected credit markets as we might expect without undermining agricultural productivity. Because we have panel data, we are able to account for potential confounding factors such as time-invariant unobserved differences across districts (using district fixed effects), common district-specific shocks (using year effects) and the possibility that districts were trending differently independent of the PLAA (using district-specific time trends).
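A minimal sketch of a two-way fixed-effects regression along these lines, with district-specific trends and an interaction between a post-1900 indicator and moneylender importance, is shown below. The variable names, the log specification and the clustering choice are illustrative assumptions rather than the paper's exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical district-year panel, 1890-1910, with columns: district, year,
# log_mortgage_credit, ml_share (pre-PLAA importance of professional
# moneylenders in the mortgage market), rainfall, log_population
df = pd.read_csv("punjab_panel.csv")
df["post"] = (df["year"] >= 1900).astype(int)   # PLAA in force from 1900
df["post_x_ml"] = df["post"] * df["ml_share"]   # difference-in-differences term

# District fixed effects, year effects, and district-specific linear trends;
# the main effects of ml_share and post are absorbed by the fixed effects
model = smf.ols(
    "log_mortgage_credit ~ post_x_ml + rainfall + log_population"
    " + C(district) + C(year) + C(district):year",
    data=df,
)
results = model.fit(cov_type="cluster", cov_kwds={"groups": df["district"]})
print(results.params["post_x_ml"])  # effect where moneylenders mattered more
```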
British officials provided a plausible explanation for the non-impact of PLAA on agricultural production: lenders had merely become more judicious – they were still willing to lend for productive activity, but not for ‘extravagant’ expenditures, such as social ceremonies. There may be a general lesson here: policies that make it harder for lenders to recover money may have the beneficial effect of encouraging due diligence.
Consumers in the Northern hemisphere are feeling increasingly uneasy about their industrial diet. Few question that during the twentieth century the industrial diet helped us solve the nutritional problems related to scarcity. But there is now growing recognition that the triumph of the industrial diet triggered new problems related to abundance, among them obesity, excessive consumerism and environmental degradation. Currently, alternatives, ranging from organic food to products bearing geographical-‘quality’ labels, struggle to transcend the industrial diet. Frequently, these alternatives face a major obstacle: their relatively high price compared to mass-produced and mass-retailed food.
The research that I have conducted examines the literature on nutritional transitions, food regimes and food history, and positions it within present-day debates on diet change in affluent societies. I employ a case-study of the growth in mass consumption of dairy products in Spain between 1965 and 1990. In the mid-1960s, dairy consumption was very low in Spain and many suffered from calcium deficiency. Subsequently, there was a rapid growth in consumption. Milk, especially, became an integral part of the diet for the population. Alongside mass consumption there was also mass-production and complementary technical change. In the early 1960s, most consumers only drank raw milk, but by the 1990s milk was being sterilised and pasteurised to standard specifications by an emergent national dairy industry.
In the early 1960s, the regular purchase of milk was too expensive for most households. By the early 1990s, an increase in household incomes, complemented by (alleged) price reductions generated by dairy industrialization, facilitated rapid growth in milk consumption. A further factor aiding consumption was changing consumer preferences. Previously, consumers’ perceptions of milk had been affected by recurrent episodes of poisoning and fraud. The process of dairy industrialization ensured a greater supply of ‘safe’ milk, and this encouraged consumers to use their increased real incomes to buy more milk. ‘Quality’ milk, meaning milk that was safe to consume, became the main theme of the advertising campaigns employed by milk processors (Figure 1).
Figure 1. Advertisement by La Lactaria Española in the early 1970s.
What are the implications of my research for contemporary debates on food quality? First, the transition toward a diet richer in organic foods and in foods characterised by short supply chains and artisan-like production, backed by geographical-quality labels, has more than niche relevance. There are historical precedents (such as the one studied in this article) of large sections of the populace being willing to pay premium prices for food products that are in some senses perceived as qualitatively superior to other, more ‘conventional’ alternatives. If it happened in the past, it can happen again. Indeed, new qualitative substitutions are already taking place. The key issue is the direction of this substitution. Will consumers use their affluence to ‘green’ their diet? Or will they use higher incomes to purchase more highly processed foods, with possibly negative implications for public health and environmental sustainability? This juncture between food-system dynamics and public policy is crucial. As Fernand Braudel argued, it is the extraordinary capacity for adaptation that defines capitalism. My research suggests that we need public policies that reorient food capitalism towards socially progressive ends.