Italy and the Little Divergence in Wages and Prices: Evidence from Stable Employment in Rural Areas, 1500-1850

by Mauro Rota (Sapienza University of Rome) and Jacob Weisdorf (Sapienza University of Rome)

The full article from this post is now published in The Economic History Review and is available on Early View at this link

The Medieval Plow (Moldboard Plow). Farming in the Middle Ages. Available at Wikimedia Commons

More than half a century ago, Carlo Cipolla argued that early-modern Italy suffered a prolonged economic downturn. Cipolla’s view was subsequently challenged by Domenico Sella, who contended that Italy’s downturn was mainly an urban experience, with the countryside witnessing both rising agricultural productivity and growing proto-industry at the time. If Sella’s view is correct, it is no longer certain that rural Italy performed differently from its rural counterparts in North-Western Europe. This has implications for how we think about long-run trends in historical workers’ living standards and how these varied across Europe.

The common narrative – that early-modern Europe witnessed a little divergence in living standards – is underpinned by daily wages paid to urban labour. These show that London workers earned considerably more than those in other leading European cities. There are two important reasons, however, why casual urban wages might overstate the living standards of most early-modern workers. First, urban workers made up only a modest fraction of the total workforce. They also received an urban wage premium to cover their urban living expenses – a premium that most workers therefore did not enjoy. Second, many workers were employed on casual terms and had to piece their annual earnings together from daily engagements. This entailed a risk of involuntary underemployment for which workers without stable engagements were compensated. Unless this compensation is accounted for, day wages, on the usual but potentially ahistorical assumption of full-year casual employment, will overstate historical workers’ annual earnings and thus their living standards.

We present an early-modern wage index for ‘stable’ rural workers in the Grand Duchy of Tuscany, an important region in pre-industrial Italy (Figure 1). Since these wages avoid the premiums described above, we argue that they offer a more suitable estimate of most historical workers’ living standards, and that the little divergence should therefore be reassessed using such wages. We draw a number of important conclusions on the basis of the new data and their comparison with pre-existing urban casual wages for Italy and rural stable wages for England.

Figure 1: Implied daily real wages of unskilled urban and rural workers in Italy, 1500-1850. Source: as per article.

First, we observe that early-modern Italy’s downturn was effectively an urban one, with stable rural workers largely able to maintain their real annual income across the entire early-modern period (Figure 1). Indeed, the urban decline began from a pedestal of unprecedentedly high wages, possibly the highest in early-modern Europe, and certainly at heights suggesting that urban casual workers were paid considerable wage premiums to cover urban penalties alongside the risk of underemployment.

Our ‘apples-to-apples’ wage comparison within the Grand Duchy of Tuscany gives a precise indication of the size of the wage premiums discussed above. Figure 2 suggests that casual workers received a premium for job insecurity, and that urban workers, unlike rural workers, also received an urban wage premium. Further, when we compare the premium-free wages in the Grand Duchy of Tuscany with similar ones for England, we find that annual English earnings rose from 10 per cent above those in Italy in 1650 to 150 per cent above by 1800 (Figure 3). If wages reflected labour productivity, then unskilled English workers – but not their Italian equals – grew increasingly more productive in the period preceding the Industrial Revolution.

Figure 2: The implied daily real wages of unskilled casual and stable workers in Tuscany, 1500-1850. Source: As per article.
Figure 3: Real annual income of unskilled workers in Italy and England, 1500-1850. Source: As per article.

We draw three main conclusions from our findings. First, our data support the hypothesis that early-modern Italy’s downturn was mainly an urban experience: real rural earnings in Tuscany stayed flat between 1500 and 1850. Second, we find that rural England pulled away from Italy (Tuscany) after c. 1650. This divergence happened not because our sample of Italian workers lagged behind their North-Western European counterparts, as earlier studies based on urban casual wages have suggested, but because English workers were paid increasingly more than their Southern European peers. This brings us to our final conclusion: to the extent that annual labour productivity in England was reflected in the development of annual earnings, it increasingly outgrew Italian achievements.

To contact the authors:

Mauro Rota, mauro.rota@uniroma1.it

Jacob Weisdorf, jacob.weisdorf@uniroma1.it

Baumol, Engel, and Beyond: Accounting for a century of structural transformation in Japan, 1885-1985

by Kyoji Fukao (Hitotsubashi University) and Saumik Paul (Newcastle University and IZA)

The full article from this blog post was published in The Economic History Review, and it is now available on Early View at this link

Bank of Japan, silver convertible yen. Available on Wiki Commons

Over the past two centuries, many industrialized countries have experienced dramatic changes in the sectoral composition of output and employment. The pattern of structural transformation observed in most developed countries entails a steady fall in the primary sector’s share, a steady increase in the tertiary sector’s share, and a hump shape in the secondary sector’s share. In the literature, the process of structural transformation is explained through two broad channels: the income effect, driven by the generalization of Engel’s law, and the substitution effect, following differences in the rate of productivity growth across sectors, also known as “Baumol’s cost disease”.

At the same time, an input-output (I-O) model provides a comprehensive way to study the process of structural transformation. Input-output analysis accounts for intermediate input production, as many sectors predominantly produce intermediate inputs whose outputs rarely enter directly into consumer preferences. Moreover, it relies on observed data and a national income identity to handle imports and exports. These advantages matter in the Japanese context, where structural transformation ran first from agriculture to manufactured final consumption goods and then to services, alongside exports and imports that changed radically over time.

We examine the drivers of the long-run structural transformation in Japan over a period of 100 years, from 1885 to 1985. During this period, the value-added share of the primary sector dropped from 60 per cent to less than 1 per cent, whereas that of the tertiary sector rose from 27 to nearly 60 per cent (Figure 1). We apply the Chenery, Shishido, and Watanabe framework to examine changes in the composition of sectoral output shares. Chenery, Shishido, and Watanabe used an inter-industry model to explain deviations from proportional growth in output in each sector and decomposed the deviation in sectoral output into two factors: a demand-side effect, a combination of the Engel and Baumol effects (discussed above), and a supply-side effect, a change in the technique of production. However, the current input-output framework cannot uniquely separate the demand-side effect into the forces labelled the Engel and Baumol effects.
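To make the logic of the decomposition concrete, the following sketch works through a two-sector structural decomposition in the spirit of Chenery, Shishido, and Watanabe. All coefficients and final-demand figures are hypothetical (the article’s actual tables have seven sectors), and the split into a demand effect and a technique effect is one standard formulation, not necessarily the exact weighting used in the article:

```python
import numpy as np

# Hypothetical two-sector economy: input coefficient matrices and
# final demand vectors for an initial and a final benchmark year.
A0 = np.array([[0.20, 0.10],
               [0.15, 0.25]])
A1 = np.array([[0.15, 0.12],
               [0.20, 0.30]])
f0 = np.array([100.0, 80.0])
f1 = np.array([150.0, 180.0])

L0 = np.linalg.inv(np.eye(2) - A0)  # Leontief inverse, initial year
L1 = np.linalg.inv(np.eye(2) - A1)  # Leontief inverse, final year

x0 = L0 @ f0                        # gross output, initial year
x1 = L1 @ f1                        # gross output, final year

lam = x1.sum() / x0.sum()           # economy-wide proportional growth factor
deviation = x1 - lam * x0           # deviation from proportional growth

# Demand-side effect: final demand shifts, evaluated at initial technique.
demand_effect = L0 @ (f1 - lam * f0)
# Supply-side effect: change in the technique of production.
technique_effect = (L1 - L0) @ f1

# The two effects sum exactly to the total deviation.
assert np.allclose(deviation, demand_effect + technique_effect)
```

The identity holds by construction: adding the two terms telescopes back to x1 − λx0, so every sector’s deviation from proportional growth is fully attributed to demand or technique.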

Figure 1. Structural transformation in Japan, 1874-2008. Source: Fukao and Paul (2017). 
Note: Sectoral shares in GDP are calculated using real GDP in constant 1934-36 prices for 1874-1940 and constant 2000 prices for 1955-2008. In the current study, the pre-WWII era is from 1885 to 1935, and the post-WWII era is from 1955 to 1985.

To conduct the decomposition analysis, we use seven I-O tables (every 10 years) for the prewar era, 1885-1935, and six I-O tables (every 5 years) for the postwar era, 1955-1985. The tables distinguish seven sectors: agriculture, forestry, and fishery; commerce and services; construction; food; mining and manufacturing (excluding food and textiles); textiles; and transport, communication, and utilities.

The results show that the annual growth rate of GDP more than doubled in the post-WWII era compared with the pre-WWII era. Real output growth was highest in the commerce and services sector throughout the period under study, but there was also rapid output growth in mining and manufacturing, especially in the second half of the 20th century. Sectoral output growth in mining and manufacturing (textiles, food, and other manufacturing), commerce and services, and transport, communication, and utilities outpaced GDP growth in most periods. Detailed decomposition results show that in most sectors (agriculture, commerce and services, food, textiles, and transport, communication, and utilities), changes in private consumption were the dominant force behind the demand-side explanations. The demand-side effect was strongest in the commerce and services sector.

Overall, demand-side factors – a combination of the Baumol and Engel effects – were the main explanatory factors in the pre-WWII period, whereas supply-side factors were the key driver of structural transformation in the post-WWII period.

To contact the authors:

Kyoji Fukao, k.fukao@r.hit-u.ac.jp

Saumik Paul, paulsaumik@gmail.com, @saumik78267353

Notes

Baumol, William J., “Macroeconomics of unbalanced growth: the anatomy of urban crisis”. American Economic Review 57, (1967) 415–426.

Chenery, Hollis B., Shuntaro Shishido and Tsunehiko Watanabe. “The pattern of Japanese growth, 1914−1954”, Econometrica 30 (1962), no. 1, 98−139.

Fukao, Kyoji and Saumik Paul “The Role of Structural Transformation in Regional Convergence in Japan: 1874-2008.” Institute of Economic Research Discussion Paper No. 665. Tokyo: Institute of Economic Research (2017).

Colonialism, institutional quality and the resource curse

by Jubril Animashaun (University of Manchester)

This blog is part of a series of New Researcher blogs.

Why are so many oil-rich countries characterised by slow economic growth and corruption? Are they cursed by the resource endowment per se or is it the mismanagement of oil wealth? We used to think that it is mostly the latter. These days, however, we know that it is far more complicated than that: institutional reform is challenging because institutions are multifaceted and path-dependent.

A primary objective of European colonialism was to expand the economic base of the home country through the imposition of institutions that favoured rent-seeking in the colony. If inherited, such structures can constitute a significant reason for the resource curse and why post-colonial institutional reform is hard. Following this argument, post-colonial groups that benefitted from the institutional system may be able to reproduce this system after independence.

Our study finds support for this argument in oil-rich countries. This suggests that European colonial practices of the sixteenth to nineteenth centuries remain an enduring obstacle to institutional reform in oil-rich countries today.

We come to this conclusion by investigating the changes in economic development over the period 1960-2015 in 69 countries. Our results show that the variation in economic development over these 55 years can be explained to a large extent by institutional quality and oil abundance and their interaction. Our findings are unchanged after controlling for countries that became independent after 1960 (many former Portuguese colonies are in this category).

In our study, we classify an oil-rich country as having colonial experience if it had European colonial settlement (proxied, for example, by settler mortality records) and/or if a colonial European language (English, French, Spanish, etc.) persists as the official post-independence language. Persistence of the colonial language helps to distinguish colonies by the depth of colonial economic engagement.

We further capture colonialism with a dummy variable to reduce the measurement error in both the settler mortality and language estimates. Institutions are measured as the unweighted average of executive constraints, expropriation risk, and government effectiveness (the institutional quality index).

Figure 1: Log of settler mortality on institutional quality in oil and gas-rich countries that were former European colonies

To validate our results, it is important to distinguish the impact of colonial legacy from pre-colonial conditions in the colonised states. Places with sophisticated technologies could have resisted colonial occupation, and such historical technologies may also have persistent long-term effects. Because our sample comprises countries with giant oil discoveries, and because oil discoveries did not drive sixteenth- to nineteenth-century European colonialism, our findings rule out such backdoor effects of colonial and pre-colonial conditions on current performance.

Figure 2: Log illiteracy and experience of colonialism in oil-rich countries with control for log GDP and population

We find a significant gap in illiteracy levels between colonised and non-colonised countries. We also find that countries with a colonial heritage have less trust. We suggest that, to reverse the resource curse, higher priority should be placed on investment in human capital and education. These will boost citizens’ ability to demand accountability and good governance from elected officials and improve the quality of civic engagement with institutional reform.

Figure 3: Social trust index and the experience of colonialism

Coordinating Decline: Governmental Regulation of Disappearing Horse Markets in Britain, 1873-1957 (NR Online Session 5)

By Luise Elsaesser (European University Institute)

This research is due to be presented in the fifth New Researcher Online Session: ‘Government & Colonization’.


Milkman and horse-drawn cart – Alfred Denny, Victoria Dairy, Kew Gardens, Est 1900. Available at Wikimedia Commons.

The enormous horse-drawn society of 1900 was new. Unprecedented quantities of goods and people could be moved by trains and ships only between terminal points, so horses were needed by everybody, and for everything, to reach a final destination. Yet at the very moment the need for horsepower peaked, new technologies had already begun to make the working horse redundant in everyday economic life. The horse disappeared rapidly from urban areas, whereas it remained an economic necessity much longer elsewhere, notably in agriculture. The horse’s decline left deep traces, fundamentally changing the soundscapes, landscapes, and smells of human environments and economic life.


Against prevailing narratives of laissez-faire, the British government actively monitored and shaped this major shift in energy use. Exploring the political economy of a disappearing commercial good reveals the regulatory practices and the ways in which the British government interacted with the producers and consumers in these markets. It demonstrates that governmental regulation is inseparable from the modern British economy, and that government intervention followed careful assessment of costs and benefits, as well as self-interest, over the long run.

Public pressure groups such as the RSPCA, as well as social and business elites, were often strongly connected to government circles and embraced the opportunity to influence policy outcomes. The Royal Commission on Horse Breeding, formed in December 1887, is telling because it shows where policy-making power that passed through Westminster originated. The commissioners were without exception holders of hereditary titles, members of the gentry, politicians, or businessmen, and all were avid horsemen and breeders. To name but two: Henry Chaplin, the President of the Board of Agriculture, came from a family of Tory country gentlemen and was a dedicated rider, and Mr. John Gilmour, whose merchant father grew rich in the Empire, owned a Clydesdale stud of national reputation. Their self-interest and devotion to horse breeding seem obvious, especially in the context of the agricultural depression, when livestock proved more profitable than the cultivation of grain.

Although economic agents in the horse markets often moved within government circles, they still faced regulation. For example, a legal framework was developed that defined the scope for manoeuvre in the import and export markets for horses. The most prominent case during the transition from horse- to motor-power was the emergence of an export market in horses for slaughter. From the 1930s, British charitable organisations such as the RSPCA, the Women’s Guild for Empire, and the National Federation of Women’s Institutes pressured the government to prevent the export of horses for slaughter on grounds of “national honour”. However, though the government never publicly admitted it, the meat market was tacitly endorsed as a way to manage the declining utility of horsepower. As motor technologies became cheaper, horsemeat markets were welcomed by large businesses such as railway companies as a way to dispose of their working horses without making a financial loss. Hence, the markets for working horses were shaped not merely by the economic demand for their muscle power but also by government regulation.

Ultimately, an analysis of governmental coordination connects to wider socio-cultural and economic systems of consumption: policy outcomes influenced the use of the horse, while that coordination was in turn monitored by the agents of the working-horse markets.


Luise Elsaesser

luise.elsaesser@eui.eu

Twitter: @Luise_Elsaesser

How to Keep Society Equal: The Case of Pre-industrial East Asia (NR Online Session 4)

By Yuzuru Kumon (Bocconi University)

This research is due to be presented in the fourth New Researcher Online Session: ‘Equality & Wages’.


Theatrum orbis terrarum: Map Tartaria, by Abraham Ortelius. Available at State Library Victoria.


Is high inequality destiny? The established view is that societies naturally converge towards high inequality in the absence of catastrophes (world wars or revolutions) or progressive taxation of the rich. Yet I show that rural Japan, 1700-1870, is an unexpected historical case in which stable equality was sustained without such aids. Most peasants owned land, the most valuable asset in an agricultural economy, and Japan remained a society of land-owning peasants. This contrasts with the landless-labourer societies of contemporary Western Europe, which were highly unequal. Why were the outcomes so different?

My research shows that the relative equality of pre-industrial Japan can partly be explained by the widespread use of adoption, which served as a means of securing a male heir. The reasoning becomes clear if we first consider the case of the Earls Cowper in eighteenth-century England, where adoption was not practised. The first Earl Cowper was a modest landowner who married Mary Clavering in 1706. When Mary’s brother subsequently died, she became the heiress and the couple inherited the Clavering estate. Similar (mis)fortunes among their heirs led the Cowpers to become one of the greatest landed families of England. The Cowpers were not particularly lucky: one quarter of families were heirless during this era of high child mortality. The outcome of this death lottery was inequality.

Had the Cowpers lived in the Japan of the same period, they would have remained modest landowners. An heirless household in Japan would adopt a son. Hence, the Claverings would have had an adopted son and the family estate would have remained in the family. To keep the blood in the family, the adopted son might marry a daughter if one was available. If not, the next generation could be formed by total strangers, but they would continue the family line. Amassing a fortune in Japan was unrelated to demographic luck.

Widespread adoption was not a peculiarity of Japan, and this mechanism can also explain why East Asian societies were landowning peasant societies. China also had high rates of adoption, in addition to equal distributions of land according to surveys from the 1930s. Perhaps more surprisingly, adoption was common in ancient Europe, where the Greeks and Romans practised it to secure heirs. For example, Augustus, the first emperor of the Roman Empire, was adopted. Adoption was a natural means of keeping wealth under the control of the family.

Europe changed when the church began discouraging adoption in the early middle ages, and adoptions had become rarities by the eleventh century. The church was partially motivated by theology, but also by the possibility that heirless wealth would be willed to the church. Church authorities almost certainly did not foresee that their policies would lead to greater wealth inequality in subsequent eras.


Figure 1. Land Distribution under Differing Adoption Regimes and Impartible Inheritance


My study shows by simulation that a large portion of the difference in wealth-inequality outcomes between east and west can be explained by adoption (see Figure 1). Societies without adoption have wealth distributions that are heavily skewed, with many landless households, unlike societies with adoption. Family institutions therefore played a key role in determining inequality, with huge implications for the way society was organized in these two regions.
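The mechanism can be illustrated with a deliberately simple toy model (not the article’s actual simulation). All parameters are assumptions for exposition: one land unit per family, impartible inheritance, a one-quarter chance of heirlessness per generation (the rate cited above for the English era of high child mortality), and, absent adoption, heirless estates passing to a random surviving landed family, as in the Cowper case:

```python
import random

def simulate(adoption, families=1000, generations=10, p_heirless=0.25, seed=42):
    """Toy model of land concentration under impartible inheritance.

    Each family starts with one unit of land. With probability p_heirless
    a family line has no heir: under adoption the estate stays with the
    family; without adoption it passes to another landed family.
    Returns the share of landless families after the final generation.
    """
    rng = random.Random(seed)
    land = [1.0] * families
    for _ in range(generations):
        for i in range(families):
            if land[i] > 0 and rng.random() < p_heirless and not adoption:
                # Estate merges into a randomly chosen surviving family.
                holders = [j for j in range(families) if land[j] > 0 and j != i]
                land[rng.choice(holders)] += land[i]
                land[i] = 0.0
    return sum(1 for v in land if v == 0) / families

# Without adoption, heirless lines lose their land and holdings concentrate;
# with adoption, every family keeps its land and no one becomes landless.
print(simulate(adoption=False))
print(simulate(adoption=True))
```

With adoption the landless share stays at zero by construction; without it, roughly a quarter of surviving landed lines drop out each generation, so most families end up landless and land piles up in a few hands, mirroring the skew in Figure 1.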

Interestingly, East Asian societies still have greater equality in wealth distributions today. Moreover, adoptions still amount to 10 per cent of marriages in Japan, a remarkably large share. Adoption may have continued to foster a relatively equal society in Japan up to the present day.

Poverty or Prosperity in Northern India? New Evidence on Real Wages, 1590s-1870s

by Pim de Zwart (Wageningen University) and Jan Lucassen (International Institute of Social History, Amsterdam)

The full article from this blog was published in The Economic History Review and is now available open access on Early View at this link


At the end of the sixteenth century, the Indian subcontinent, largely unified under the Mughals, was one of the most developed parts of the global economy, with relatively high incomes and a thriving manufacturing sector. Over the centuries that followed, however, incomes declined and India deindustrialized. The precise timing and causes of this decline remain the subject of academic debate about the Great Divergence between Europe and Asia. Whereas some scholars have depicted the eighteenth century in India as a period of economic growth and comparatively high living standards, others have suggested it was an era of decline and relatively low incomes. The evidence on which these contributions are based is rather thin, however. In our paper, we add quantitative and qualitative data from numerous British and Dutch archival sources on the development of real wages and the functioning of the northern Indian labour market between the late sixteenth and late nineteenth centuries.

In particular, we introduce a new dataset with over 7,500 observations on wages across various towns in northern India (Figure 1). The data pertain to the income earned in a wide range of occupations, from unskilled urban workers and farm servants to skilled craftsmen and bookkeepers, and cover adult men, women, and children. All these wage observations were coded following the HISCLASS scheme, which allows us to compare trends in wages between groups of workers. The wage database provides information about the incomes of an important body of workers in northern India. There was little slavery and serfdom in India, and wage labour was relatively widespread. There was a functioning free labour market in which European companies enjoyed no clearly privileged position. The data obtained for India can therefore be viewed as comparable to those gathered for many European cities, where the wages of construction workers were often paid by large institutions.

Figure 1 – Map of India and regional distribution of the wage data. Source: as per article

We calculated the value of the wage relative to a subsistence basket of goods, refining the real-wage methodology by incorporating information on climate, regional consumption patterns, average heights, and BMI to calculate the subsistence cost of living more accurately. Comparing the computed real wage ratios for northern India with those prevailing in other parts of Eurasia leads to a number of important insights (Figure 2). Our data suggest that the Great Divergence between Europe and India happened relatively early, from the late seventeenth century. The slight downward trend that began in the late seventeenth century persisted, and wage labourers saw their purchasing power diminish until the devastating Bengal famine of 1769-1770. Given this evidence, it is difficult to view the eighteenth century as a period of generally rising prosperity across northern India. While British colonialism may have reduced growth in the nineteenth century — pretensions about the superiority of European administration and the virtues of the free market may have had long-lasting negative consequences — it is nonetheless clear that most of the decline in living standards preceded colonialism. Real wages in India stagnated in the nineteenth century, while Europe experienced significant growth; consequently, India lagged further behind.

Figure 2 – Real wages in India in comparison with Europe and Asia. Source: as per article

With real wages below subsistence level, it is likely that Indian wage labourers worked more than the 250 days per year often assumed in the literature. This is confirmed by our sources, which suggest 30 days of labour per month. To accommodate this observation, we added a real wage series based on the assumption of 360 days of labour per year (Figure 2). Yet even with 360 working days per year, male wages were at various moments in the eighteenth and nineteenth centuries insufficient to sustain a family at subsistence level. This evidence indicates the limits of what can be said about living standards on the basis of the male wage alone. In many societies and in most periods, women and children made significant contributions to household income, and this also seems to have been the case in northern India. Over much of the eighteenth and nineteenth centuries, the gap between male and female wages was smaller in India than in England. The contribution of women and children to household incomes may have allowed Indian families to survive despite low male wages.
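The sensitivity of such conclusions to the working-year assumption is simple arithmetic, which the sketch below makes explicit. The daily wage, basket cost, and household size are hypothetical round numbers, not the article’s figures (its baskets vary by region and climate); the point is only that moving from 250 to 360 working days raises the implied welfare ratio by 44 per cent:

```python
def welfare_ratio(daily_wage, working_days, basket_cost_per_person,
                  household_size=4):
    """Annual male earnings relative to the household's subsistence cost.

    All arguments are illustrative; the article derives subsistence costs
    from regional diets, climate, average heights, and BMI.
    """
    annual_income = daily_wage * working_days
    annual_cost = basket_cost_per_person * household_size
    return annual_income / annual_cost

# Hypothetical wage of 1 unit/day and basket cost of 90 units/person/year.
r250 = welfare_ratio(daily_wage=1.0, working_days=250, basket_cost_per_person=90)
r360 = welfare_ratio(daily_wage=1.0, working_days=360, basket_cost_per_person=90)
print(round(r250, 2), round(r360, 2))
```

With these made-up numbers the household is below subsistence on the conventional 250-day assumption but exactly at subsistence on 360 days, which is why the choice of working year, and any earnings by women and children, matters so much for the verdict on living standards.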


To contact the authors: 

Pim de Zwart (pim.dezwart@wur.nl)

Jan Lucassen (lucasjan@xs4all.nl)

Give Me Liberty Or Give Me Death

by Richard A. Easterlin (University of Southern California)

This blog is part G of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. The full article from this blog is “How Beneficent Is the Market? A Look at the Modern History of Mortality”, European Review of Economic History 3, no. 3 (1999): 257-94. https://doi.org/10.1017/S1361491699000131

A child is vaccinated, Brazil, 1970.

Patrick Henry’s memorable plea for independence unintentionally also captured the long history of conflict between the free market and public health, evidenced in the current struggle of the United States with the coronavirus. Efforts to contain the virus have centered on measures to forestall transmission of the disease, such as stay-at-home orders, social distancing, and avoiding large gatherings, each of which infringes on individual liberty. These measures have given birth to a resistance movement objecting to violations of one’s freedom.

My 1999 article posed the question “How Beneficent is the Market?” The answer, based on “A Look at the Modern History of Mortality”, was straightforward: because of the ubiquity of market failure, public intervention was essential to achieve control of major infectious disease. This intervention centered on the creation of a public health system. “The functions of this system have included, in varying degrees, health education, regulation, compulsion, and the financing or direct provision of services.”

Regulation and compulsion, and the consequent infringement of individual liberties, have always been critical building blocks of the public health system. Even before the formal establishment of public health agencies, regulation and compulsion were features of measures aimed at controlling the spread of infectious disease in mid-19th-century Britain. The “sanitation revolution” led to the regulation of water supply and sewage disposal and, in time, to the regulation of slum building conditions. As my article notes, there was fierce opposition to these measures:

“The backbone of the opposition was made up of those whose vested interests were threatened: landlords, builders, water companies, proprietors of refuse heaps and dung hills, burial concerns, slaughterhouses, and the like … The opposition appealed to the preservation of civil liberties and sought to debunk the new knowledge cited by the public health advocates …”

The greatest achievement of public health was the eradication of smallpox, the one disease that has been eliminated from the face of the earth. Smallpox was the scourge of humankind until Edward Jenner’s discovery of a vaccine in 1798. Throughout the 19th and 20th centuries, requirements for smallpox vaccination were fiercely opposed by anti-vaccinationists. In 1959 the World Health Organization embarked on a program to eradicate the disease. Over the ensuing two decades its efforts to persuade governments worldwide to require vaccination of infants eventually succeeded, and in 1980 WHO officially declared the disease eradicated. Public health had triumphed over liberty, but it took almost two centuries to realize Jenner’s hope that vaccination would annihilate smallpox.

In the face of the coronavirus pandemic, the U.S. market-based health care system has demonstrated once again the inability of the market to deal with infectious disease, and the need for forceful public intervention. The current health care system requires that:

 “every player, from insurers to hospitals to the pharmaceutical industry to doctors, be financially self-sustaining, to have a profitable business model. It excels in expensive specialty care. But there’s no return on investment in being positioned for the possibility of a pandemic” (Rosenthal 2020).

Commercial and hospital labs were slow to respond to the need to develop a test for the virus. Once tests became available, conducting them was handicapped by insufficient testing supplies — kits, chemical reagents, swabs, masks, and other personal protective equipment. In hospitals, ventilators were also in short supply. These deficiencies reflected the lack of profitability in responding to such needs, and a government reluctant to compensate for market failure.

At the current time, the halting efforts of federal public health authorities and state and local public officials to impose quarantine and “shelter at home” measures have been seriously handicapped by public protests over infringement of civil liberties, reminiscent of the dissidents of the 19th and 20th centuries and their current-day heirs. States are opening for business well in advance of the guidelines of the Centers for Disease Control and Prevention. The lesson of history regarding such actions is clear: the cost of liberty is sickness and death. But do we learn from history? Sadly, one is put in mind of Warren Buffett’s aphorism: “What we learn from history is that people don’t learn from history.”


Reference

Rosenthal, Elisabeth, “A Health System Set Up to Fail”, New York Times, May 8, 2020, p. A29.


To contact the author: easterl@usc.edu

Unequal access to food during the nutritional transition: evidence from Mediterranean Spain

by Francisco J. Medina-Albaladejo & Salvador Calatayud (Universitat de València).

This article is forthcoming in the Economic History Review.


Figure 1 – General pathology ward, Hospital General de Valencia (Spain), 1949. Source: Consejo General de Colegios Médicos de España. Banco de imágenes de la medicina española. Real Academia Nacional de Medicina de España. Available here.

Over the last century, European historiography has debated whether industrialisation brought about an improvement in working-class living standards. Multiple demographic, economic, anthropometric and wellbeing indicators have been examined in this regard, but it was Eric Hobsbawm (1957) who, in the late 1950s, incorporated food consumption patterns into the analysis.

Between the mid-19th century and the first half of the 20th century, the diet of European populations underwent radical changes. Caloric intake increased significantly, and cereals were to a large extent replaced by animal proteins and fats, owing to a substantial increase in the consumption of meat, milk, eggs and fish. Popkin (1993) referred to this transformation as the ‘nutritional transition’.

These dietary changes were driven, inter alia, by the evolution of income levels, which raises the possibility that significant inequalities between different social groups ensued. Dietary inequalities between social groups are a key component in the analysis of inequality and living standards; they directly affect mortality, life expectancy and morbidity. However, this hypothesis remains unproven, as historians are still searching for adequate sources and methods with which to measure the effects of dietary changes on living standards.

This study contributes to the debate by analysing a relatively untapped source: hospital diets. We have examined the diet of psychiatric patients and members of staff in the main hospital of the city of Valencia (Spain) between 1852 and 1923. The diet of patients depended on their social status and the amounts they paid for their upkeep. ‘Poor psychiatric patients’ and abandoned children, who paid no fee, were fed according to hospital regulations, whereas ‘well-off psychiatric patients’ paid a daily fee in exchange for a richer and more varied diet. There were also differences among members of staff, with nuns receiving a richer diet than other personnel (launderers, nurses and wet-nurses). We think that our source broadly reflects the dietary patterns of the Spanish population and the effect of income levels thereon.

Figure 2 illustrates some of these differences in terms of animal-based caloric intake in each of the groups under study. Three population groups can be clearly distinguished: ‘well-off psychiatric patients’ and nuns, whose diet already presented some features of the nutritional transition by the mid-19th century, including fewer cereals and a meat-rich diet, as well as the inclusion of new products such as olive oil, milk, eggs and fish; hospital staff, whose diet was rich in calories, to compensate for their demanding jobs, but still traditional in structure, being largely based on cereals, legumes, meat and wine; and, finally, ‘poor psychiatric patients’ and abandoned children, whose diet was poorer and which, by the 1920s, had barely joined the trends that characterised the nutritional transition.


Figure 2. Percentage of animal calories in the daily average diet by population groups in the Hospital General de Valencia, 1852-1923 (%). Source: as per original article.


In conclusion, the nutritional transition was not a homogeneous process that affected all diets at the same time or at the same pace. On the contrary, it was a process marked by social difference, and the progress of dietary change was largely determined by social factors. By the mid-19th century, the diet structure of well-to-do social groups resembled diets more characteristic of the 1930s, while less favoured and intermediate social groups had to wait until the early 20th century before they could incorporate new foodstuffs into their diet. As this sequence clearly indicates, less favoured social groups always lagged behind.


References

Medina-Albaladejo, F. J. and Calatayud, S., “Unequal access to food during the nutritional transition: evidence from Mediterranean Spain”, Economic History Review, (forthcoming).

Hobsbawm, E. J., “The British Standard of Living, 1790-1850”, Economic History Review, 2nd ser., X (1957), pp. 46-68.

Popkin B. M., “Nutritional Patterns and Transitions”, Population and Development Review, 19, 1 (1993), pp. 138-157.

Fascistville: Mussolini’s new towns and the persistence of neo-fascism

by Mario F. Carillo (CSEF and University of Naples Federico II)

This blog is part of our EHS 2020 Annual Conference Blog Series.



March on Rome, 1922. Available at Wikimedia Commons.

Differences in political attitudes are prevalent in our society. People with the same occupation, age, gender, marital status, city of residence and similar backgrounds may have very different, and sometimes even opposite, political views. At a time when the electorate is called on to make important decisions with long-term consequences, understanding the origins of political attitudes, and thus of voting choices, is key.

My research documents that current differences in political attitudes have historical roots. Public expenditure allocations made almost a century ago help to explain differences in political attitudes today.

During the Italian fascist regime (1922-43), Mussolini undertook enormous investments in infrastructure by building cities from scratch. Fascistville (Littoria) and Mussolinia are two of the 147 new towns (Città di Fondazione) built by the regime on the Italian peninsula.


Towers shaped like the emblem of fascism (Torri Littorie) and majestic buildings as headquarters of the fascist party (Case del Fascio) dominated the centres of the new towns. While they were modern centres, their layout was inspired by the cities of the Roman Empire.

Intended to stimulate a process of mass identification based on the collective historical memory of the Roman Empire, the new towns were designed to instil the idea that fascism was building on, and improving, the imperial Roman past.

My study presents three main findings. First, the foundation of the new towns enhanced local electoral support for the fascist party, facilitating the emergence of the fascist regime.

Second, such an effect persisted through democratisation, favouring the emergence and persistence of the strongest neo-fascist party in the advanced industrial countries — the Movimento Sociale Italiano (MSI).

Finally, survey respondents near the fascist new towns are more likely today to have nationalistic views, prefer a stronger leader in politics and exhibit sympathy for the fascists. Direct experience of life under the regime strengthens this link, which appears to be transmitted across generations inside the family.


Thus, the fascist new towns explain differences in current political and cultural attitudes that can be traced back to the fascist ideology.

These findings suggest that public spending may have long-lasting effects on political and cultural attitudes, which persist across major institutional changes and affect the functioning of future institutions. This result may inspire future research into whether policy interventions can be effective in promoting the adoption of growth-enhancing cultural traits.

Turkey’s Experience with Economic Development since 1820

by Şevket Pamuk, Boğaziçi (Bosphorus) University

This research is part of a broader article published in the Economic History Review.

A podcast of Sevket’s Tawney lecture can be found here.



New Map of Turkey in Europe, Divided into its Provinces, 1801. Available at Wikimedia Commons.

The Tawney lecture, based on my recent book – Uneven centuries: economic development of Turkey since 1820, Princeton University Press, 2018 – examined the economic development of Turkey from a comparative global perspective. Using GDP per capita and other data, the book showed that Turkey’s record in economic growth and human development since 1820 has been close to the world average and a little above the average for developing countries. The early focus of the lecture was on the proximate causes — average rates of investment, below-average rates of schooling, low rates of total productivity growth, and the low technology content of production — which provide important insights into why improvements in GDP per capita were not higher. For more fundamental explanations I emphasized the role of institutions and institutional change. Since the nineteenth century, Turkey’s formal economic institutions have been influenced by international rules which did not always support economic development. Turkey’s elites also made extensive changes to formal political and economic institutions. However, these institutions provide only part of the story: the direction of institutional change also depended on the political order and the degree of understanding between different groups and their elites. When political institutions could not manage the recurring tensions and cleavages between the different elites, economic outcomes suffered.

There are a number of ways in which my study reflects some of the key trends in the historiography of recent decades. For example, until fairly recently, economic historians focused almost exclusively on the developed economies of western Europe, North America, and Japan. Lately, however, economic historians have been shifting their focus to developing economies. Moreover, as part of this reorientation, considerable effort has been expended on constructing long-run economic series, especially GDP and GDP per capita, as well as series on health and education. In this context, I have constructed long-run series for the area within the present-day borders of Turkey. These series rely mostly on official estimates for the period after 1923 and make use of a variety of evidence for the Ottoman era, including wages, tax revenues and foreign trade series. In common with the series for other developing countries, many of my calculations involving Turkey are subject to larger margins of error than similar series for developed countries. Nonetheless, they provide insights into the developmental experience of Turkey and other developing countries that would not have been possible two or three decades ago. Finally, in recent years, economists and economic historians have made an important distinction between the proximate causes and the deeper determinants of economic development. While the literature on the proximate causes of development focuses on investment, accumulation of inputs, technology, and productivity, discussions of the deeper causes consider the broader social, political, and institutional environment. Both sets of arguments are utilized in my book.

I argue that an interest-based explanation can address both the causes of long-run economic growth and its limits. Turkey’s formal economic institutions and economic policies underwent extensive change during the last two centuries. In each of the four historical periods I define, Turkey’s economic institutions and policies were influenced by international or global rules which were enforced either by the leading global powers or, more recently, by international agencies. Additionally, since the nineteenth century, elites in Turkey made extensive changes to formal political institutions. In response to European military and economic advances, the Ottoman elites adopted a programme of institutional changes that mirrored European developments; this programme continued during the twentieth century. Such fundamental changes helped foster significant increases in per capita income as well as major improvements in health and education.

But it is also necessary to examine how these new formal institutions interacted with the process of economic change – for example, changing social structure and variations in the distribution of power and expectations – to understand the scale and characteristics of growth that the new institutional configurations generated.

These interactions were complex. It is not easy to ascribe the outcomes created in Turkey during these two centuries to a single cause. Nonetheless, it is safe to state that in each of the four periods, the successful development of new institutions depended on the state making use of the different powers and capacities of the various elites. More generally, economic outcomes depended closely on the nature of the political order and the degree of understanding between different groups in society and the elites that led them. However, one of the more important characteristics of Turkey’s social structure has been the recurrence of tensions and cleavages between its elites. While they often appeared to be based on culture, these tensions overlapped with competing economic interests which were, in turn, shaped by the economic institutions and policies generated by the global economic system. When political institutions could not manage these tensions well, Turkey’s economic outcomes remained close to the world average.