Corporate Social Responsibility for workers: Pirelli (1950-1980)

by Ilaria Suffia (Università Cattolica, Milan)

This blog is part of our EHS Annual Conference 2020 Blog Series.



Pirelli headquarters in Milan’s Bicocca district. Available at Wikimedia Commons.

Corporate social responsibility (CSR) in relation to the workforce has generated extensive academic and public debate. In this paper I evaluate Pirelli’s approach to CSR, by exploring its archives over the period 1950 to 1967.

Pirelli, founded in Milan by Giovanni Battista Pirelli in 1872, introduced industrial welfare for its employees and their families from its inception. In 1950, it deepened its relationship with them by publishing ‘Fatti e Notizie’ [Events and News], the company’s in-house newspaper. The journal was intended to share information with workers at all levels and, above all, to strengthen relationships within the ‘Pirelli family’.

Pirelli’s industrial welfare began in the 1870s and, by the end of the decade, a mutual aid fund and institutions for its employees’ families (a kindergarten and a school) had been established. Over the next 20 years, the company laid the basis of its welfare policy, which encompassed three main features: a series of ‘workplace’ protections, including accident and maternity assistance; ‘family assistance’, including (in addition to the kindergarten and school) seasonal care for children; and, finally, a commitment to the professional training of its workers.

In the 1920s, the company’s welfare provision expanded. In 1926, Pirelli created a health care service for the whole family and, in the same period, sport, culture and ‘free time’ activities became the main pillars of its CSR. Pirelli also provided houses for its workers, best exemplified in 1921 by the ‘Pirelli Village’. After 1945, Pirelli continued its welfare policy. The company started a new programme of workers’ housing construction (based on national provision), expanding its Village, and founded a professional training institute dedicated to Piero Pirelli. The establishment in 1950 of the company journal, ‘Fatti e Notizie’, can be considered part of Pirelli’s welfare activities.

‘Fatti e Notizie’ was designed to improve internal communication about the company, especially among Pirelli’s workers. Subsequently, Pirelli also introduced in-house articles on current news and special pieces on economics, law and politics. My analysis of ‘Fatti e Notizie’ demonstrates that welfare news initially occupied about 80 per cent of coverage, but that this share declined after the mid-1950s, falling to 50 per cent by the late 1960s.

The welfare articles indicate that the type of communication depended on subject matter. Thus, health care, news about colleagues, sport and culture were mainly ‘instructive’, reporting information and keeping readers up to date with events. ‘Official’ communications, on subjects such as CEO reports and financial statements, utilised ‘top to bottom’ articles. Cooperation, often reinforced with propaganda language, was promoted for accident prevention and workplace safety. This kind of communication was also applied to ‘bottom to top’ messages, such as an ‘ideas box’ in which workers presented their suggestions for improving production processes or safety.

My analysis shows that the communication model implemented by Pirelli in the 1950s and 1960s moved from one of capitulation (where the managerial view prevails) in the 1950s, to one of trivialisation (dealing only with ‘neutral’ topics) from the 1960s.




Land, Ladies, and the Law: A Case Study on Women’s Land Rights and Welfare in Southeast Asia in the Nineteenth Century

by Thanyaporn Chankrajang and Jessica Vechbanyongratana (Chulalongkorn University)

The full article from this blog is forthcoming in The Economic History Review.


Security of land rights empowers women with greater decision-making power (Doss, 2013), potentially impacting both land-related investment decisions and the allocation of goods within households (Allendorf, 2007; Goldstein et al., 2008; Menon et al., 2017). In historical contexts where land was the main factor of production for most economic activities, little is known about women’s land entitlements. Historical gender-disaggregated land ownership data are scarce, making quantitative investigations of the past challenging. In new research we overcome this problem by analyzing rare, gender-disaggregated, historical land rights records to determine the extent of women’s land rights, and their implications, in nineteenth-century Bangkok.

First, we utilized orchard land deeds issued in Bangkok during the 1880s (Figure 1). These deeds were both landownership and tax documents. Land tax was assessed based on the enumeration of mature orchard trees producing high-value fruits, such as areca nuts, mangoes and durian. From 9,018 surviving orchard deeds, we find that 82 per cent of Bangkok orchards listed at least one woman as an owner, indicating that women did possess de jure usufruct land rights under the traditional land rights system. By analyzing the number of trees cultivated on each property (proxied by tax per hectare), we find these rights were upheld in practice and incentivized agricultural productivity. Controlling for owner and plot characteristics, plots with only female owners on average cultivated 6.7 per cent more trees per hectare than plots with mixed-gender ownership, while male-owned plots cultivated 6.7 per cent fewer trees per hectare than mixed-gender plots. The evidence indicates higher levels of investment in cash crop cultivation among female landowners.

Figure 1. An 1880s Government Copy of an Orchard Land Deed Issued to Two Women. Source: Department of Lands Museum, Ministry of Interior.


The second part of our analysis assesses 217 land-related court cases to determine whether women’s land rights in Bangkok were protected from the late nineteenth century when land disputes increased. We find that ‘commoner’ women acted as both plaintiffs and defendants, and were able to win cases even against politically powerful men. Such secure land rights helped preserve women’s livelihoods.

Finally, based on an internationally comparable welfare estimation (Allen et al., 2011; Cha, 2015), we calculate an equivalent measure of a ‘bare bones’ consumption basket. We find that the median woman-owned orchard could annually support up to 10 adults. By recognizing women’s contributions to family income (Table 1), Bangkok’s welfare ratio was as high as 1.66 for the median household, demonstrating a larger household surplus than that found in Japan, and comparable to those of Beijing and Milan during the same period (Allen et al., 2011).
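The arithmetic behind such ‘bare bones’ welfare comparisons can be sketched in a few lines. All numbers below are hypothetical placeholders chosen only to illustrate the two quantities discussed above (adults supportable, and the welfare ratio); they are not figures from the article:

```python
# Illustrative sketch of an Allen-style 'bare bones' welfare calculation.
# All input values are hypothetical placeholders, not figures from the study.

def adults_supported(annual_income: float, basket_cost_per_adult: float) -> float:
    """Number of adults whose annual 'bare bones' basket the income could cover."""
    return annual_income / basket_cost_per_adult

def welfare_ratio(annual_income: float, basket_cost_per_adult: float,
                  household_adult_equivalents: float) -> float:
    """Household income relative to the household's annual subsistence needs."""
    return annual_income / (basket_cost_per_adult * household_adult_equivalents)

# Hypothetical example: income of 500 units/year, a basket costing
# 50 units/adult/year, and a household equivalent to 6 adults.
income, basket, household = 500.0, 50.0, 6.0
print(adults_supported(income, basket))                    # 10.0
print(round(welfare_ratio(income, basket, household), 2))  # 1.67
```

A ratio above 1.0 indicates a surplus over bare subsistence, which is the sense in which Bangkok’s 1.66 is compared with Japan, Beijing and Milan.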

Superficially, our findings seem to contradict historical and contemporary observations that land rights structures favor men (Doepke et al., 2012). However, our study typifies women’s economic empowerment in Thailand and Southeast Asia more generally. Since at least the early modern period, women in Southeast Asia possessed relatively high social status and autonomy in marriage and family, literacy and literature, diplomacy and politics, and economic activities (Hirschman, 2017; Adulyapichet, 2001; Baker et al., 2017). The evidence we provide supports this interpretation, and is consonant with other Southeast Asian land-related features, such as matrilocality and matrilineage (Huntrakul, 2003).


Table 1. Women’s contributions to family income.




Adulyapichet, A., ‘Status and roles of Siamese women and men in the past: a case study from Khun Chang Khun Phan’ (thesis, Silpakorn Univ., 2001).

Allen, R. C., Bassino, J. P., Ma, D., Moll‐Murata, C., and Van Zanden, J. L. ‘Wages, prices, and living standards in China, 1738–1925: in comparison with Europe, Japan, and India’, Economic History Review, 64 (2011), pp. 8-38.

Allendorf, K., ‘Do women’s land rights promote empowerment and child health in Nepal?’, World Development, 35 (2007), pp. 1975-88.

Baker, C., and Phongpaichit, P., A history of Ayutthaya: Siam in the early modern world (Cambridge, 2017).

Cha, M. S. ‘Unskilled wage gaps within the Japanese Empire’, Economic History Review, 68 (2015), pp. 23-47.

Chankrajang, T. and Vechbanyongratana, J. ‘Canals and orchards: the impact of transport network access on agricultural productivity in nineteenth-century Bangkok’, Journal of Economic History, forthcoming.

Chankrajang, T. and Vechbanyongratana, J. ‘Land, ladies, and the law: a case study on women’s land rights and welfare in Southeast Asia in the nineteenth century’, Economic History Review, forthcoming.

Doepke, M., Tertilt, M., and Voena, A., ‘The economics and politics of women’s rights’, Annual Review of Economics, 4 (2012), pp. 339-72.

Doss, C., ‘Intrahousehold bargaining and resource allocation in developing countries’, World Bank Research Observer, 28 (2013), pp. 52-78.

Goldstein, M., and Udry, C., ‘The profits of power: land rights and agricultural investment in Ghana’, Journal of Political Economy, 116 (2008), pp. 981-1022.

Hirschman, C., ‘Gender, the status of women, and family structure in Malaysia’, Malaysian Journal of Economic Studies, 53 (2017), pp. 33-50.

Huntrakul, P., ‘Thai women in the three seals code: from matriarchy to subordination’, Journal of Letters, 32 (2003), pp. 246-99.

Menon, N., van der Meulen Rodgers, Y., and Kennedy, A. R., ‘Land reform and welfare in Vietnam: why gender of the land‐rights holder matters’, Journal of International Development, 29 (2017), pp. 454-72.


The Great Indian Earthquake: colonialism, politics and nationalism in 1934

by Tirthankar Ghosh (Department of History, Kazi Nazrul University, Asansol, India)

This blog is part of our EHS Annual Conference 2020 Blog Series.


Gandhi in Bihar after the 1934 Nepal–Bihar earthquake. Available at Wikipedia.

The Great Indian earthquake of 1934 gave new life to nationalist politics in India. The colonial state too had to devise a new tool to deal with the devastation caused by the disaster. But the post-disaster settlements became a site of contestation between government and non-governmental agencies.

In this earthquake, thousands of lives were lost, houses were destroyed, crops and agricultural fields were devastated, towns and villages were ruined, bridges and railway tracks were warped, and drainage and water sources were disrupted across a vast area of Bihar.

The multi-layered relief works, which included official and governmental measures, the involvement of the organised party leadership and political workers, and voluntary private donations and contributions from several non-political and charitable organisations, had to accommodate several contradictory forces and elements.

Although it is sometimes argued that the main objective of these relief works was to gain ‘political capital’ and ‘goodwill’, the mobilisation of funds, sympathy and fellow feeling should not be underestimated. Thus, a whole range of new nationalist politics emerged from the ruins of the disaster, mobilising a great amount of popular engagement, political energy and public subscriptions. The colonial state had to release prominent political leaders who could contribute substantially to the relief operations.

Now the question is: was there any contestation or competition between the government and non-governmental agencies in the sphere of relief and reconstruction? Or did the disaster temporarily redefine the relationship between the state and subjects during the period of anti-colonial movement?

While the government had to embark on relief operations without a proper idea of the depth of the people’s suffering, the political organisations, charged with sympathy and nationalism, performed the great task with more efficient organisational skills and dedication.

This time, India witnessed its largest political involvement in a non-political agenda to date, in which public involvement and support not only compensated for the administrative deficit, but also shared an equal sense of victimhood. Non-political and non-governmental organisations, such as the Ramakrishna Mission and the Marwari Relief Society, also played a leading role in the relief operations.

The 1934 earthquake drew on massive popular sentiment, much as the Bhuj earthquake of 2001 would later do in India. In the long run, the disaster prompted the state to introduce the concept of public safety, hitherto unknown in India, along with a whole new set of earthquake-resistant building codes and modern urban planning using the latest technologies.

Real urban wage in an agricultural economy without landless farmers: Serbia, 1862-1910

by Branko Milanović (City University New York and LSE)

This blog is based on a forthcoming article in The Economic History Review.

Railway construction workers, ca.1900.

Calculations of historical welfare ratios (wages expressed in relation to the subsistence needs of a wage-earner’s family) exist for many countries and time periods. The original methodology was developed by Robert Allen (2001). The objective of real wage studies is not only to estimate real wages but to assess living standards before the advent of national accounts. This methodology has been employed to address key questions in economic history: income divergence between Northern Europe and China (Li and van Zanden, 2012; Allen, Bassino, Ma, Moll-Murata, and van Zanden, 2011); the “Little Divergence” (Pamuk 2007); development of North v. South America (Allen, Murphy and Schneider, 2012); and even the causes of the Industrial Revolution (Allen 2009; Humphries 2011; Stephenson 2018, 2019).

We apply this methodology to Serbia between 1862 and 1910, to consider the extent to which small, peasant-owned farms and backward agricultural technology can be used to approximate real income. Further, we develop debates on North v. South European divergence by focusing on Serbia (a South-East European country), in contrast to previous studies which focus on Mediterranean countries (Pamuk 2007; Losa and Zarauz, forthcoming). This approach allows us to formulate a hypothesis regarding the social determination of wages.

Using Serbian wage and price data from 1862 to 1910, we calculate welfare ratios for unskilled (ordinary) and skilled (construction) urban workers. We use two different baskets of goods for wage comparison: a ‘subsistence’ basket that includes a very austere diet, clothing and housing needs, but no alcohol, and a ‘respectability’ basket, composed of a greater quantity and variety of goods, including alcohol. We modify some of the usual assumptions found in the literature to better reflect the economic and demographic conditions of Serbia in the second half of the 19th century. Based on contemporary sources, we assume that the ‘work year’ was 200 days, not 250, and that the average family size was six, not four. Both assumptions reduce the level of the welfare ratio, but do not affect its evolution.
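Under these assumptions, the welfare-ratio computation reduces to a simple formula: annual earnings (daily wage × working days) divided by the family's annual cost of the chosen basket. A minimal sketch, with hypothetical wage and basket prices rather than the article's data:

```python
# Sketch of the welfare-ratio calculation under the article's assumptions
# (200 working days per year, family of six). The wage and basket cost
# below are hypothetical placeholders, not the article's data.

WORK_DAYS_PER_YEAR = 200   # the article's assumption, vs. the usual 250
FAMILY_SIZE = 6            # the article's assumption, vs. the usual 4

def welfare_ratio(daily_wage: float, basket_cost_per_person: float) -> float:
    """Annual earnings divided by the family's annual cost of the basket."""
    annual_earnings = daily_wage * WORK_DAYS_PER_YEAR
    family_subsistence_cost = basket_cost_per_person * FAMILY_SIZE
    return annual_earnings / family_subsistence_cost

# A hypothetical wage of 1.5 units/day against a basket of 33.3 units per
# person per year yields a ratio of about 1.5, i.e. earnings roughly
# 50 per cent above family subsistence.
print(round(welfare_ratio(1.5, 33.3), 2))
```

Raising the assumed work year back to 250 days, or cutting the family to four, mechanically raises the ratio, which is why the authors stress that their choices lower the level but not the trend.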

We find that the urban wage of unskilled workers was, on average, about 50 per cent higher than the subsistence basket for the family (Figure 1), and remained broadly constant throughout the period. This result confirms the absence of modern economic growth in Serbia (at least as far as the low-income population is concerned), and indicates economic divergence between South-East and Western Europe. Serbia diverged from Western Europe’s standard of living during the second half of the 19th century: in 1860 the welfare ratio in London was about three times higher than in urban Serbia, but by 1907 this gap had widened to more than five to one (Figure 1).

Figure 1. Welfare ratio (using subsistence basket), urban Serbia 1862-1910. Note: Under the assumptions of 200 working days per year, household size of 6, and inclusive of the daily food and wine allowance provided by the employer. Source: as per article.


In contrast, the welfare ratio of skilled construction workers was 20 to 30 per cent higher in the 1900s than in the 1860s (Figure 1). This trend reflects modest economic progress as well as an increase in the skill premium, as has also been observed for Ottoman Turkey (Pamuk 2016).

The wages of ordinary workers appear to move more closely with the ‘subsistence’ basket, whereas the wages of construction (skilled) workers seem to vary with the cost of the ‘respectability’ basket. This leads us to hypothesize that the wages of both groups were implicitly “indexed” to different baskets, reflecting the different value of the work done by each group.

Our results provide further insights on economic conditions in the 19th-century Balkans, and generate searching questions about the assumptions used in Allen-inspired work on real wages. The standard assumptions of 250 days’ work per annum and a ‘typical’ family size of four may be undesirable for comparative purposes. The ultimate objective of real wage/welfare ratio studies is to provide more accurate assessments of real incomes between countries. Consequently, the assumptions underlying welfare ratios need to be country-specific.




Allen, Robert C. (2001), “The Great Divergence in European Wages and Prices from the Middle Ages to the First World War”, Explorations in Economic History, October.

Allen, Robert C. (2009), The British Industrial Revolution in Global Perspective, New Approaches to Economic and Social History, Cambridge.

Allen Robert C., Jean-Pascal Bassino, Debin Ma, Christine Moll-Murata and Jan Luiten van Zanden (2011), “Wages, prices, and living standards in China, 1738-1925: in comparison with Europe, Japan, and India”.  Economic History Review, vol. 64, pp. 8-36.

Allen, Robert C., Tommy E. Murphy and Eric B. Schneider (2012), “The colonial origins of the divergence in the Americas: A labor market approach”, Journal of Economic History, vol. 72, no. 4, December.

Humphries, Jane (2011), “The Lure of Aggregates and the Pitfalls of the Patriarchal Perspective: A Critique of the High-Wage Economy Interpretation of the British Industrial Revolution”, Discussion Papers in Economic and Social History, University of Oxford, No. 91.

Li, Bozhong and Jan Luiten van Zanden (2012), “Before the Great Divergence: Comparing the Yangzi delta and the Netherlands at the beginning of the nineteenth century”, Journal of Economic History, vol. 72, No. 4, pp. 956-989.

Losa, Ernesto Lopez and Santiago Piquero Zarauz, “Spanish Subsistence Wages and the Little Divergence in Europe, 1500-1800”, European Review of Economic History, forthcoming.

Pamuk, Şevket (2007), “The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600”, European Review of Economic History, vol. 11, 2007, pp. 280-317.

Pamuk, Şevket (2016),  “Economic Growth in Southeastern Europe and Eastern Mediterranean, 1820-1914”, Economic Alternatives, No. 3.

Stephenson, Judy Z. (2018), “‘Real’ wages? Contractors, workers, and pay in London building trades, 1650–1800”, Economic History Review, vol. 71 (1), pp. 106-132.

Stephenson, Judy Z. (2019), “Working days in a London construction team in the eighteenth century: evidence from St Paul’s Cathedral”, The Economic History Review, published 18 September 2019.



How JP Morgan Picked Winners and Losers in the Panic of 1907: The Importance of Individuals over Institutions

by Jon Moen (University of Mississippi) & Mary Rodgers (SUNY, Oswego).

This blog is part of our EHS 2020 Annual Conference Blog Series.


Moen 1
A cartoon on the cover of Puck Magazine, from 1910, titled: ‘The Central Bank – Why should Uncle Sam establish one, when Uncle Pierpont is already on the job?’. Available at Wikimedia Commons.


We study J. P. Morgan’s decision making during the Panic of 1907 and find insights for understanding the outcomes of current financial crises. Morgan relied as much on his personal experience as on formal institutions like the New York Clearing House when deciding how to combat the Panic. Our main conclusion is that lenders may rely on their past experience during a crisis, rather than on institutional and legal arrangements, in formulating a response. The existence of sophisticated and powerful institutions like the Bank of England or the Federal Reserve System may not guarantee optimal policy responses if leaders make their decisions on the basis of personal experience rather than well-established guidelines. This will result in decisions yielding sub-par outcomes for society compared with those that would have been made had formal procedures and data-based decisions been followed.

Morgan’s influence in arresting the Panic of 1907 is widely acknowledged. In the absence of a formal lender of last resort in the United States, he personally determined which financial institutions to save and which to let fail in New York. Morgan had two sources of information about the distressed firms: (1) analysis done by six committees of financial experts he assigned to estimate firms’ solvency and (2) decades of personal experience working with those same institutions and their leaders in his investment banking underwriting syndicates. Morgan’s decisions to provide or withhold aid to the teetering institutions appear to track more closely with his prior syndicate experience with each banker than with the recommendations made by the committees’ analysis of available data. Crucially, he chose to let the Knickerbocker Trust fail despite one committee’s estimate that it was solvent and another’s that it had too little time to make a strong recommendation. Morgan had had a very bad business experience with the Knickerbocker and its president, Charles Barney, but he had had positive experiences with all the other firms requesting aid. Had the Knickerbocker been aided, the panic might have been avoided altogether.

The lesson we draw for present-day policy is that the individuals responsible for crisis resolution bring to the table policies based on personal experience, and these will influence the resolution in ways that may not have been expected a priori. Their policies might not be consistent with the general well-being of the financial markets involved, as may have been the case with Morgan letting the Knickerbocker fail. A recent example that echoes Morgan’s experience in 1907 can be seen in the leadership of Ben Bernanke, Timothy Geithner and Henry Paulson during the financial crisis of 2008. They had a formal lender of last resort, the Federal Reserve System, to guide their response to the crisis. While they may have had the well-being of financial markets more in the forefront of their decision making from the start, controversy still surrounds the failure of Lehman Brothers and the Federal Reserve’s refusal to provide it with a lifeline. The Federal Reserve could have provided aid, and this reveals that the individuals making the decisions, not the mere existence of a lender-of-last-resort institution and the analysis such an institution can muster, can greatly affect the course of a financial crisis. Reliance on personal experience at the expense of institutional arrangements is clearly not limited to responses to financial crises. The coronavirus epidemic is one example worth examining within this framework.



Give Me Liberty Or Give Me Death

by Richard A. Easterlin (University of Southern California)

This blog is part G of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. The full article from this blog is “How Beneficent Is the Market? A Look at the Modern History of Mortality.” European Review of Economic History 3, no. 3 (1999): 257-94.


A child is vaccinated, Brazil, 1970.

Patrick Henry’s memorable plea for independence unintentionally also captured the long history of conflict between the free market and public health, evidenced in the current struggle of the United States with the coronavirus.  Efforts to contain the virus have centered on  measures to forestall transmission of the disease such as stay-at-home orders, social distancing, and avoiding large gatherings, each of which infringes on individual liberty.  These measures have given birth to a resistance movement objecting to violations of one’s freedom.

My 1999 article posed the question “How Beneficent is the Market?” The answer, based on “A Look at the Modern History of Mortality”, was straightforward: because of the ubiquity of market failure, public intervention was essential to achieve control of major infectious disease. This intervention centered on the creation of a public health system. “The functions of this system have included, in varying degrees, health education, regulation, compulsion, and the financing or direct provision of services.”

Regulation and compulsion, and the consequent infringement of individual liberties, have always been critical building blocks of the public health system. Even before the formal establishment of public health agencies, regulation and compulsion were features of measures aimed at controlling the spread of infectious disease in mid-19th century Britain. The “sanitation revolution” led to the regulation of water supply and sewage disposal and, in time, to the regulation of slum building conditions. As my article notes, there was fierce opposition to these measures:

“The backbone of the opposition was made up of those whose vested interests were threatened: landlords, builders, water companies, proprietors of refuse heaps and dung hills, burial concerns, slaughterhouses, and the like … The opposition appealed to the preservation of civil liberties and sought to debunk the new knowledge cited by the public health advocates …”

The greatest achievement of public health was the eradication of smallpox, the only disease to have been eliminated from the face of the earth. Smallpox was the scourge of humankind until Edward Jenner’s discovery of a vaccine in 1798. Throughout the 19th and 20th centuries, requirements for smallpox vaccination were fiercely opposed by anti-vaccinationists. In 1959 the World Health Organization embarked on a program to eradicate the disease. Over the ensuing two decades its efforts to persuade governments worldwide to require vaccination of infants were eventually successful, and in 1980 WHO officially declared the disease eradicated. Eventually public health triumphed over liberty. But it took almost two centuries to realize Jenner’s hope that vaccination would annihilate smallpox.

In the face of the coronavirus pandemic, the U.S. market-based health care system has demonstrated once again the inability of the market to deal with infectious disease, and the need for forceful public intervention. The current health care system requires that:

 “every player, from insurers to hospitals to the pharmaceutical industry to doctors, be financially self-sustaining, to have a profitable business model. It excels in expensive specialty care. But there’s no return on investment in being positioned for the possibility of a pandemic” (Rosenthal 2020).

Commercial and hospital labs were slow to respond to the need to develop a test for the virus. Once tests became available, conducting them was handicapped by insufficient supplies of testing materials: kits, chemical reagents, swabs, masks and other personal protective equipment. In hospitals, ventilators were also in short supply. These deficiencies reflected the lack of profitability in responding to these needs, and a government reluctant to compensate for market failure.

At the current time, the halting efforts of federal public health authorities and state and local public officials to impose quarantine and “shelter at home” measures have been seriously handicapped by public protests over infringement of civil liberties, reminiscent of the dissidents of the 19th and 20th centuries and their current-day heirs. States are opening for business well in advance of the guidelines of the Centers for Disease Control and Prevention. The lesson of history regarding such actions is clear: the cost of liberty is sickness and death. But do we learn from history? Sadly, one is put in mind of Warren Buffett’s aphorism: “What we learn from history is that people don’t learn from history.”



Rosenthal, Elisabeth, “A Health System Set up to Fail”, New York Times, May 8, 2020, p. A29.



Police as ploughmen in the First World War

by Mary Fraser (Associate, The Scottish Centre for Crime & Justice Research, University of Glasgow)

This blog is part of our EHS 2020 Annual Conference Blog Series.


Police group portrait Bury St Edmunds Suffolk. Available at Wikimedia Commons.

That policemen across Britain were released to plough the fields during the food shortages of 1917 is currently unrecognised, although soldiers, prisoners of war, women and school children have been widely acknowledged as helping agriculture. A national project seeks to redress this imbalance in our understanding.

In March 1917, Britain faced starvation. The ships which brought in around 80 per cent of the population’s grain requirements, mainly from America and Canada, were being sunk in massive numbers by enemy U-boats. Added to this, the harsh and lengthy winter rotted the potato crop in the ground. These factors largely removed two staple items from the diet: bread and potatoes. With food prices also soaring, the poor faced starvation.

To overcome this threat, a campaign to change the balance from pasture to arable began in December 1916 (Ernle, 1961). The government took control of farming and demanded a huge increase in home-grown grain and potatoes, so that Britain could become self-sufficient in food.

But the land had been stripped of much of its skilled labour by the attraction of joining the army or navy, so that farmers felt helpless to respond. Equipment also lay idle for lack of maintenance, as mechanics had similarly enlisted or had left for better-paid work in the munitions factories. The need to help farmers produce home-grown food was so great that every avenue was explored.

When the severe winter broke around mid-March, not only were many hundreds of soldiers deployed to farms, but also local authorities were asked to help. One of the first groups to come forward was the police. Many had been skilled farm workers in their previous employment and so were ideal to operate the manual ploughs, which needed skill and strength to turn over heavy soil, some of which had not been ploughed for many years.

A popular police journal of the time reported on ‘Police as Ploughmen’ and gave many of the 18 locations across Britain (Fraser, 2019). Estimates are that between 500 and 600 policemen were released, some for around two months.

For example, Glasgow agreed to the release of 90 policemen while Berwick, Roxburgh and Selkirk agreed to release 40. These two areas were often held up as examples of how other police forces across Britain could help farmers: Glasgow being an urban police force while Berwick, Roxburgh and Selkirk was rural.

Releasing this number was a considerable contribution by police forces, as many of their young, fit policemen had also been recruited into the army, to be partially replaced by part-time, older special constables.

This help to farmers paid huge dividends. It prevented the food riots seen in other combatant nations, such as Austria-Hungary, Germany, Russia and France (Ziemann, 2014). By the harvest of 1917, the substitution of ploughmen allowed Britain to claim an increase of 1,000,000 acres of arable land, producing over 4,000,000 more tons of wheat, barley, oats and potatoes (Ernle, 1961). Britain was also able to send food to troops in France and Italy, supplementing their local failed harvests.

It is now time that policemen were recognised for their social conscience in helping their local populations. This example of ‘Police as Ploughmen’ shows that, as well as conducting investigations, cautions and arrests, the police in Britain also have a remit to help local people, particularly in times of dire need such as the food crisis of the First World War.



Ernle, Lord (R. E. Prothero) (1961) English Farming, Past and Present, 6th edition, Heinemann Educational Books Ltd.

Fraser, M (2019) Policing the Home Front, 1914-1918: The control of the British population at war, Routledge.

Ziemann, B (2014) in The Cambridge History of the First World War. Volume 2: The State, Cambridge University Press.


Mary Fraser 


Unequal access to food during the nutritional transition: evidence from Mediterranean Spain

by Francisco J. Medina-Albaladejo & Salvador Calatayud (Universitat de València).

This article is forthcoming in the Economic History Review.


Figure 1 – General pathology ward, Hospital General de Valencia (Spain), 1949. Source: Consejo General de Colegios Médicos de España. Banco de imágenes de la medicina española. Real Academia Nacional de Medicina de España. Available here.

Over the last century, European historiography has debated whether industrialisation brought about an improvement in working-class living standards. Multiple demographic, economic, anthropometric and wellbeing indicators have been examined in this regard, but it was Eric Hobsbawm (1957) who, in the late 1950s, incorporated food consumption patterns into the analysis.

Between the mid-nineteenth century and the first half of the twentieth, the diet of European populations underwent radical changes. Caloric intake increased significantly, and cereals were to a large extent replaced by animal proteins and fats, reflecting a substantial increase in the consumption of meat, milk, eggs and fish. Popkin (1993) termed this transformation the ‘nutritional transition’.

These dietary changes were driven, inter alia, by the evolution of income levels, which raises the possibility that significant inequalities between different social groups ensued. Dietary inequalities between social groups are a key component in the analysis of inequality and living standards: they directly affect mortality, life expectancy and morbidity. However, this hypothesis remains unproven, as historians are still searching for adequate sources and methods with which to measure the effects of dietary changes on living standards.

This study contributes to the debate by analysing a relatively untapped source: hospital diets. We have analysed the diet of psychiatric patients and members of staff in the main hospital of the city of Valencia (Spain) between 1852 and 1923. The diet of patients depended on their social status and the amount they paid for their upkeep. ‘Poor psychiatric patients’ and abandoned children, who paid no fee, were fed according to hospital regulations, whereas ‘well-off psychiatric patients’ paid a daily fee in exchange for a richer and more varied diet. There were also differences among members of staff, with nuns receiving a richer diet than other personnel (launderers, nurses and wet-nurses). We think that our source broadly reflects the dietary patterns of the Spanish population and the effect of income levels thereon.

Figure 2 illustrates some of these differences in terms of animal-based caloric intake in each of the groups under study. Three population groups can be clearly distinguished: ‘well-off psychiatric patients’ and nuns, whose diet already presented some of the features of the nutritional transition by the mid-nineteenth century, including fewer cereals and a meat-rich diet, as well as the inclusion of new products such as olive oil, milk, eggs and fish; hospital staff, whose diet was rich in calories, to compensate for their demanding jobs, but still traditional in structure, being largely based on cereals, legumes, meat and wine; and, finally, ‘poor psychiatric patients’ and abandoned children, whose diet was poorer and which, by the 1920s, had barely joined the trends that characterised the nutritional transition.


Figure 2. Percentage of animal calories in the daily average diet by population groups in the Hospital General de Valencia, 1852-1923 (%). Source: as per original article.


In conclusion, the nutritional transition was not a homogeneous process affecting all diets at the same time or at the same pace. On the contrary, it was a process marked by social difference, and the progress of dietary changes was largely determined by social factors. By the mid-nineteenth century, the diet structure of well-to-do social groups resembled diets that were more characteristic of the 1930s, while less favoured and intermediate social groups had to wait until the early twentieth century before they could incorporate new foodstuffs into their diet. As this sequence clearly indicates, less favoured social groups always lagged behind.



Medina-Albaladejo, F. J. and Calatayud, S., “Unequal access to food during the nutritional transition: evidence from Mediterranean Spain”, Economic History Review, (forthcoming).

Hobsbawm, E. J., “The British Standard of Living, 1790-1850”, Economic History Review, 2nd ser., X (1957), pp. 46-68.

Popkin B. M., “Nutritional Patterns and Transitions”, Population and Development Review, 19, 1 (1993), pp. 138-157.

Fascistville: Mussolini’s new towns and the persistence of neo-fascism

by Mario F. Carillo (CSEF and University of Naples Federico II)

This blog is part of our EHS 2020 Annual Conference Blog Series.


March on Rome, 1922. Available at Wikimedia Commons.

Differences in political attitudes are prevalent in our society. People with the same occupation, age, gender, marital status, city of residence and similar background may have very different, and sometimes even opposite, political views. At a time when the electorate is called on to make important decisions with long-term consequences, understanding the origins of political attitudes, and hence of voting choices, is key.

My research documents that current differences in political attitudes have historical roots. Public expenditure allocations made almost a century ago help to explain differences in political attitudes today.

During the Italian fascist regime (1922-43), Mussolini undertook enormous investments in infrastructure by building cities from scratch. Fascistville (Littoria) and Mussolinia are two of the 147 new towns (Città di Fondazione) built by the regime on the Italian peninsula.


Towers shaped like the emblem of fascism (Torri Littorie) and majestic buildings as headquarters of the fascist party (Case del Fascio) dominated the centres of the new towns. While they were modern centres, their layout was inspired by the cities of the Roman Empire.

Intended to stimulate a process of identification of the masses based on the collective historical memory of the Roman Empire, the new towns were designed to instil the idea that fascism was building on, and improving, the imperial Roman past.

My study presents three main findings. First, the foundation of the new towns enhanced local electoral support for the fascist party, facilitating the emergence of the fascist regime.

Second, such an effect persisted through democratisation, favouring the emergence and persistence of the strongest neo-fascist party in the advanced industrial countries — the Movimento Sociale Italiano (MSI).

Finally, survey respondents living near the fascist new towns are more likely today to hold nationalistic views, to prefer a stronger leader in politics and to express sympathy for the fascists. Direct experience of life under the regime strengthens this link, which appears to be transmitted across generations within the family.


The fascist new towns thus explain differences in current political and cultural attitudes that can be traced back to fascist ideology.

These findings suggest that public spending may have long-lasting effects on political and cultural attitudes, which persist across major institutional changes and affect the functioning of future institutions. This result may inspire future research into whether policy interventions can be effective in promoting the adoption of growth-enhancing cultural traits.

The Great Depression as a saving glut

by Victor Degorce (EHESS & European Business School) & Eric Monnet (EHESS, Paris School of Economics & CEPR).

This blog is part of our EHS 2020 Annual Conference Blog Series.


Crowd at New York’s American Union Bank during a bank run early in the Great Depression. Available at Wikimedia Commons.

Ben Bernanke, former Chair of the Federal Reserve, the central bank of the United States, once said ‘Understanding the Great Depression is the Holy Grail of macroeconomics’. Although much has been written on this topic, giving rise to much of modern macroeconomics and monetary theory, there remain several areas of unresolved controversy. In particular, the mechanisms by which banking distress led to a fall in economic activity are still disputed.

Our work provides a new explanation based on a comparison of the financial systems of 20 countries in the 1930s: banking panics led to a transfer of bank deposits to non-bank institutions that collected savings but did not lend (or lent less) to the economy. As a result, intermediation between savings and investment was disrupted, and the economy suffered from an excess of unproductive savings, despite a negative wealth effect caused by creditor losses and falling real wages.

This conclusion speaks directly to the current debate on excess savings after the Great Recession (from 2008 to today), the rise in the price of certain assets (housing, public debt) and the lack of investment.

An essential – but often overlooked – feature of the banking systems before the Second World War was the competition between unregulated commercial banks and savings institutions. The latter took very different forms in different countries, but in most cases they were backed by governments and subject to regulation that limited the composition of their assets.

Although the United States is the country where banking panics have been most studied, it was an exception: US banks had been regulated since the nineteenth century, and alternative forms of savings (postal savings in this case) were limited in scope.

By contrast, in Japan and most European countries, a large proportion of total savings was deposited in regulated specialised institutions. Outside the United States, central banks also accepted private deposits and competed with commercial banks in this area. There were therefore many alternatives for depositors.

Banks were generally preferred because they could offer additional payment services and loans. But in times of crisis, regulated savings institutions were a safe haven. The downside of this security was that they were obliged – often by law – to take little risk, investing in cash or government securities. As a result, they could replace banks as deposit-taking institutions, but not as lending institutions.

We substantiate our claim with a new dataset on deposits in commercial banks, different types of savings institutions and central banks in 20 countries. We also study how the macroeconomic effect of excess savings depended on the safety of government debt (since savings institutions mainly bought government securities) and on the exchange-rate regime (since gold-standard countries were much less likely to mobilise excess savings to finance countercyclical policies).

Our argument is not inconsistent with earlier mechanisms, such as the monetary and non-monetary effects of bank failures documented, respectively, by Milton Friedman and Anna Schwartz and by Ben Bernanke, or the paradox of thrift explained by John Maynard Keynes.

But our argument is based on a separate mechanism that can only be taken into account when the dual nature of the financial system (unregulated deposit-taking institutions versus regulated institutions) is recognised. It raises important concerns for today about the danger of competition between a highly regulated banking system and a growing shadow banking system.