Overcoming the Egyptian cotton crisis in the interwar period: the role of irrigation, drainage, new seeds and access to credit

By Ulas Karakoc (TOBB ETU, Ankara & Humboldt University Berlin) & Laura Panza (University of Melbourne)

The full article on which this blog is based is forthcoming in the Economic History Review.

 

A study of diversity in Egyptian cotton, 1909. Available at Wikimedia Commons.

By 1914, Egypt’s large agricultural sector was suffering from declining yields in cotton production. Egypt at the time was a textbook case of export-led development. The decline in cotton yields — the ‘cotton crisis’ — was compounded by two other constraints: land scarcity and high population density. Nonetheless, Egyptian agriculture was able to overcome this crisis in the interwar period, despite unfavourable price shocks. The output stagnation between 1900 and the 1920s clearly contrasts with the subsequent recovery (Figure 1). In our paper, we examine empirically how this happened, focusing on the role of government investment in irrigation infrastructure, farmers’ crop choices (intra-cotton shifts), and access to credit.

 

Figure 1: Cotton output, acreage and yields, 1895-1940

Source: Annuaire Statistique (various issues)

 

The decline in yields was caused by expanded irrigation without sufficient drainage, which raised the water table, increased salination, and increased pest attacks on cotton (Radwan, 1974; Owen, 1968; Richards, 1982). The government introduced an extensive public works programme to reverse soil degradation and restore production. Simultaneously, Egypt’s farmers changed the type of cotton they were cultivating, shifting from the long-staple, low-yielding Sakellaridis to the medium-short-staple, high-yielding Achmouni, a shift which reflected income-maximizing preferences (Goldberg 2004 and 2006). Another important feature of the Egyptian economy between the 1920s and 1940s was the expansion of credit facilities and the connected increase in farmers’ access to agricultural loans. The interwar years witnessed the establishment of cooperatives to facilitate small landowners’ access to inputs (Issawi, 1954), and the foundation of the Crédit Agricole in 1931, offering small loans (Eshag and Kamal, 1967). These credit institutions coexisted with a number of mortgage banks, among which the Crédit Foncier was the largest, servicing predominantly large owners. Figure 2 illustrates the average annual real value of Crédit Foncier land mortgages in 1,000 Egyptian pounds (1926-1939).

 

Figure 2: Average annual real value of Crédit Foncier land mortgages in 1,000 Egyptian pounds (1926-1939)

Source: Annuaire Statistique (various issues)

 

Our work investigates the extent to which these factors contributed to the recovery of the raw cotton industry. Specifically: to what extent can intra-cotton shifts explain changes in total output? How did the increase in public works, mainly investment in the canal and drainage network, help boost production? And what role did differential access to credit play? To answer these questions, we construct a new dataset from official statistics (Annuaire Statistique de l’Egypte) covering 11 provinces over 17 years between 1923 and 1939. These data allow us to provide the first empirical estimates of Egyptian cotton output at the province level.
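To give a sense of the kind of estimation such a province-year panel permits, the sketch below sets up a simple fixed-effects regression of cotton output on credit, drainage and seed-variety variables. It is only an illustration: the file name, variable names and specification are hypothetical and are not the model estimated in the article.

```python
# Minimal sketch of a province-year fixed-effects regression (hypothetical file and variable names).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per province and year, 1923-1939.
df = pd.read_csv("egypt_cotton_panel.csv")
# Assumed columns: province, year, cotton_output, credit, drainage, achmouni_share.

# Province and year fixed effects absorb time-invariant provincial characteristics
# and shocks common to all provinces (e.g. world cotton prices).
model = smf.ols(
    "cotton_output ~ credit + drainage + achmouni_share + C(province) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["province"]})

print(model.summary())
```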

Access to finance and improved seeds significantly increased cotton output. The declining price premium of Sakellaridis led to a large-scale switch to Achmouni, which indicates that farmers responded to market incentives in their cultivation choices. Our study shows that cultivators’ response to market changes was fundamental to the recovery of the cotton sector. Access to credit was also a strong determinant of cotton output, especially to the benefit of large landowners. That access to credit plays a vital role in enabling the adoption of productivity-enhancing innovations is consonant with the literature on the Green Revolution (Glaeser, 2010).

Our results show that the expansion of irrigation and drainage did not have a direct effect on output. However, we cannot completely rule out the role played by improved irrigation infrastructure, because we do not observe investment in private drains and so cannot assess complementarities between private and public drainage. Further, we find some evidence of a cumulative effect of drainage pipes two to three years after installation.

The structure of land ownership, specifically the presence of large landowners, contributed to output recovery. Thus, despite institutional innovations designed to give small farmers better access to credit, large landowners benefited disproportionately from credit availability. This is not a surprising finding: extreme inequality of land holdings had been a central feature of the country’s agricultural system for centuries.

 

References

Eshag, Eprime, and M. A. Kamal. “A Note on the Reform of the Rural Credit System in U.A.R (Egypt).” Bulletin of the Oxford University Institute of Economics & Statistics 29, no. 2 (1967): 95–107. https://doi.org/10.1111/j.1468-0084.1967.mp29002001.x.

Glaeser, Bernhard. The Green Revolution Revisited: Critique and Alternatives. Taylor & Francis, 2010.

Goldberg, Ellis. “Historiography of Crisis in the Egyptian Political Economy.” In Middle Eastern Historiographies: Narrating the Twentieth Century, edited by I. Gershoni, Amy Singer, and Hakan Erdem, 183–207. University of Washington Press, 2006.

———. Trade, Reputation and Child Labour in Twentieth-Century Egypt. Palgrave Macmillan, 2004.

Issawi, Charles. Egypt at Mid-Century. Oxford University Press, 1954.

Owen, Roger. “Agricultural Production in Historical Perspective: A Case Study of the Period 1890-1939.” In Egypt Since the Revolution, edited by P. Vatikiotis, 40–65, 1968.

Radwan, Samir. Capital Formation in Egyptian Industry and Agriculture, 1882-1967. Ithaca Press, 1974.

Richards, Alan. Egypt’s Agricultural Development, 1800-1980: Technical and Social Change. Westview Press, 1982.

 


Ulas Karakoc

ulaslar@gmail.com

 

Laura Panza

lpanza@unimelb.edu.au

 


Patents and Invention in Jamaica and the British Atlantic before 1857

By Aaron Graham (Oxford University)

This article will be published in the Economic History Review and is currently available on Early View.

 

Cardiff Hall, St. Ann's.
A Picturesque Tour of the Island of Jamaica, by James Hakewill (1875). Available at Wikimedia Commons.

For a long time the plantation colonies of the Americas were seen as backward and undeveloped, dependent for their wealth on the grinding enslavement of hundreds of thousands of people.  This was only part of the story, albeit a major one. Sugar, coffee, cotton, tobacco and indigo plantations were also some of the largest and most complex economic enterprises of the early industrial revolution, exceeding many textile factories in size and relying upon sophisticated technologies for the processing of raw materials.  My article looks at the patent system of Jamaica and the British Atlantic which supported these enterprises, arguing that it facilitated a process of transatlantic invention, innovation and technological diffusion.

The first key finding concerns the nature of the patent system in Jamaica.  As in British America, patents were granted by colonial legislatures rather than by the Crown, and besides merely registering the proprietary right to an invention they often included further powers, to facilitate the process of licensing and diffusion.  They were therefore more akin to industrial subsidies than modern patents.  The corollary was that inventors had to demonstrate not just novelty but practicality and utility; in 1786, when two inventors competed to patent the same invention, the prize went to the one who provided a successful demonstration (Figure 1).   As a result, the bar was higher, and only about sixty patents were passed in Jamaica between 1664 and 1857, compared to the many thousands in Britain and the United States.

 

Figure 1. ‘Elevation & Plan of an Improved SUGAR MILL by Edward Woollery Esq of Jamaica’

Source: Bryan Edwards, The History, Civil and Commercial, of the British Colonies of the West Indies (London, 1794).

 

However, the second key finding is that this ‘bar’ was enough to make Jamaica one of the centres of colonial technological innovation before 1770, along with Barbados and South Carolina, which accounted for about two-thirds of the patents passed in that period.  All three were successful plantation colonies, where planters earned large amounts of money and had both the incentive and the means to invest heavily in technological innovations intended to improve efficiency and profits.  Patenting peaked in Jamaica between the 1760s and 1780s, as the island adapted to sudden economic change, as part of a package of measures that included opening up new lands, experimenting with new cane varieties, engaging in closer accounting, importing more slaves and developing new ways of working them harder.

A further finding of the article is that the English and Jamaican patent systems were complementary until 1852.  Inventors in Britain could purchase an English patent with a ‘colonial clause’ extending it to colonial territories, but a Jamaican patent offered them additional powers and flexibility as they brought their inventions to Jamaica and adapted them to local conditions.  Inventors in Jamaica could obtain a local patent to protect their invention while they perfected it and prepared to market it in Britain.  The article shows how inventors used various strategies within the two systems to help support the process of turning their inventions into viable technologies.

Finally, the colonial patents operated alongside a system of grants, premiums and prizes operated by the Jamaican Assembly, which helped to support innovation by plugging the gaps left by the patent system.  Inventors who felt that their designs were too easily pirated, or that they themselves lacked the capacity to develop them properly, could ask for a grant instead that recompensed them for the costs of invention and made the new technology widely available.  Like the imperial and colonial patents, the grants were part of the strategies used to promote invention.

Indeed, sometimes the Assembly stepped in directly.  In 1799, Jean Baptiste Brouet asked the House for a patent for a machine for curing coffee.  The committee agreed that the invention was novel, useful and practical, ‘but as the petitioner has not been naturalised and is totally unable to pay the fees for a private bill’, they suggested granting him £350 instead, ‘as a full reward for his invention; [and] the machines constructed according to the model whereof may then be used by any person desirous of the same, without any license from or fee paid to the petitioner’.

The article therefore argues that Jamaican patents were part of a wider transatlantic system that acted to facilitate invention, innovation and technological diffusion in support of the plantation economy and slave society.

 


 

Aaron Graham

aaron.graham@history.ox.ac.uk

Taxation and Wealth Inequality in the German Territories of the Holy Roman Empire 1350-1800

by Victoria Gierok (Nuffield College, Oxford)

This blog is part of our EHS Annual Conference 2020 Blog Series.

 

Nuremberg chronicles – Kingdoms of the Holy Roman Empire of the German Nation. Available at Wikimedia Commons.

Since the French economist Thomas Piketty published Capital in the 21st Century in 2014, it has become clear that we need to understand the development of wealth and income inequality in the long run. While Piketty traces inequality over the last 200 years, other economic historians have recently begun to explore inequality in the more distant past,[1] and they report strikingly similar patterns of rising economic inequality from as early as 1450.

However, one major European region has been largely absent from the debate: Central Europe — the German cities and territories of the Holy Roman Empire. How did wealth inequality develop there? And what role did taxation play?

The Holy Roman Empire was vast, but its borders fluctuated greatly over time. As a first step to facilitate analysis, I focus on cities in the German-speaking regions. Urban wealth taxation developed early in many of the great cities, such as Cologne and Lübeck. By the fourteenth century, wealth taxes were common in many cities. They are an excellent source for getting a glimpse of wealth inequality (Caption 1).

 

Caption 1. Excerpt from the wealth tax registers of Lübeck (1774-84).

Source: Archiv der Hansestadt Lübeck. Archival reference number: 03.04-05 01.02 Johannis-Quartier: 035 Schoßbuch Johannis-Quartier 1774-1784

 

Three questions need to be clarified when using wealth tax registers as sources:

  • Who was being taxed?
  • What was being taxed?
  • How were they taxed?

 

The first question was also crucial to contemporaries, because the nobility and clergy adamantly defended the privileges which excluded them from taxation. It was citizens and city-dwellers without citizenship who mainly bore the brunt of wealth taxation.

 

Figure 1. Taxpayers in a sample of 17 cities in the German Territories of the Holy Roman Empire.

Note: In all cities, citizens were subject to wealth taxation, whereas city-dwellers were fully taxed in only about half of them.
Source: Data derived from multiple sources. For further information, please contact the author.

 

The cities’ tax codes reveal a level of sophistication that might be surprising. Not only did they tax real estate, cash and inventories, but many of them also taxed financial assets such as loans and perpetuities (Figure 2).

 

Figure 2. Taxable wealth in 19 cities in the German Territories of the Holy Roman Empire.

Note: In all cities, real estate was taxed, whereas financial assets were taxed only in 13 of them.
Source: Data derived from multiple sources. For further information, please contact the author.

 

Wealth taxation was always proportional. Many cities established wealth thresholds below which citizens were exempt from taxation, and basic provisions such as grain, clothing and armour were also often exempt. Taxpayers were asked to estimate their own wealth and to pay the correct amount of taxes to the city’s tax collectors. To prevent fraud, taxpayers had to swear under oath (Caption 2).

 

Caption 2. Scene from the Volkacher Salbuch (1500-1504) showing the mayor on the left, two tax collectors at a table, and a taxpayer delivering his tax payment while swearing his oath.

Source: Pausch, Alfons, and Jutta Pausch, Kleine Weltgeschichte der Steuerobrigkeit (Köln: Otto Schmidt KG, 1989), p. 75.

 

Taking the above limitations seriously, one can use tax registers to trace long-run wealth inequality in cities across the Holy Roman Empire (Figure 3).

 

Figure 3. Gini Coefficients showing Wealth Inequality in the Urban Middle Ages.

Source: Alfani, G., Gierok, V., and Schaff, F., ‘Economic Inequality in Preindustrial Germany, ca. 1300-1850’, Stone Center Working Paper Series, no. 03, February 2020.
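For readers unfamiliar with the measure, a Gini coefficient like those in Figure 3 can be computed directly from the distribution of taxable wealth recorded in a register. A minimal sketch, using made-up wealth values rather than actual register data:

```python
import numpy as np

def gini(wealth):
    """Gini coefficient of a wealth distribution (0 = perfect equality)."""
    w = np.sort(np.asarray(wealth, dtype=float))
    n = w.size
    cum = np.cumsum(w)
    # Equivalent to the usual rank-based formula applied to the sorted values.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Illustrative, made-up taxable wealth of ten households (not actual Lübeck data).
register = [5, 10, 12, 20, 35, 40, 80, 150, 400, 1200]
print(round(gini(register), 3))
```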

 

Two main trends emerge. First, most cities experienced declining wealth inequality in the aftermath of the Black Death around 1350. The only exception was Rostock, an active trading city in the North. Second, from around 1500, inequality rose in most cities until the onset of the Thirty Years War (1618-1648). This war, in which large armies marauded through German lands bringing plague and other diseases, together with the shift in trade from the Mediterranean to the Atlantic, might explain the decline in inequality seen in this period. This sets the German lands apart from other European regions, such as Italy and the Netherlands, in which inequality continued to rise throughout the early modern period.

 

Notes

[1] Milanovic, B., Lindert, P. H., and Williamson, J., ‘Pre-Industrial Inequality’, Economic Journal 121, no. 551 (2011): 255-272; Alfani, G., ‘Economic Inequality in Northwestern Italy: A Long-Term View’, Journal of Economic History 75, no. 4 (2015): 1058-1096; Alfani, G., and Ammannati, F., ‘Long-term trends in economic inequality: the case of the Florentine state, c.1300-1800’, Economic History Review 70, no. 4 (2017): 1072-1102; Ryckbosch, W., ‘Economic Inequality and Growth before the Industrial Revolution: The Case of the Low Countries’, European Review of Economic History 20, no. 1 (2016): 1-22; Reis, J., ‘Deviant Behavior? Inequality in Portugal 1565-1770’, Cliometrica 11, no. 3 (2017): 297-319; Malinowski, M., and van Zanden, J. L., ‘Income and Its Distribution in Preindustrial Poland’, Cliometrica 11, no. 3 (2017): 375-404.

 


 

Victoria Gierok: victoria.gierok@nuffield.ox.ac.uk

 


Corporate Social Responsibility for workers: Pirelli (1950-1980)

by Ilaria Suffia (Università Cattolica, Milan)

This blog is part of our EHS Annual Conference 2020 Blog Series.

 

 

Pirelli headquarters in Milan’s Bicocca district. Available at Wikimedia Commons.

Corporate social responsibility (CSR) in relation to the workforce has generated extensive academic and public debate. In this paper I evaluate Pirelli’s approach to CSR, by exploring its archives over the period 1950 to 1967.

Pirelli, founded in Milan by Giovanni Battista Pirelli in 1872, introduced industrial welfare for its employees and their families from its inception. In 1950, it deepened its relationship with them by publishing ‘Fatti e Notizie’ [Events and News], the company’s in-house newspaper. The journal was intended to share information with workers at any level and, above all, it was meant to strengthen relationships within the ‘Pirelli family’.

Pirelli’s industrial welfare began in the 1870s and, by the end of the decade, a mutual aid fund and some institutions for its employees’ families (a kindergarten and a school) had been established. Over the next 20 years, the company laid the basis of its welfare policy, which encompassed three main features: a series of ‘workplace’ protections, including accident and maternity assistance; ‘family assistance’, including (in addition to the kindergarten and school) seasonal care for children; and, finally, commitment to the professional training of its workers.

In the 1920s, the company’s welfare provision expanded. In 1926, Pirelli created a health care service for the whole family and, in the same period, sport, culture and ‘free time’ activities became the main pillars of its CSR. Pirelli also provided houses for its workers, best exemplified in 1921 by the ‘Pirelli Village’. After 1945, Pirelli continued its welfare policy: the company started a new programme of workers’ housing construction (based on national provision), expanded its Village, and founded a professional training institute dedicated to Piero Pirelli. The establishment in 1950 of the company journal, ‘Fatti e Notizie’, can be considered part of Pirelli’s welfare activities.

‘Fatti e Notizie’ was designed to improve internal communication about the company, especially towards Pirelli’s workers. Subsequently, Pirelli also introduced in-house articles on current news and special pieces on economics, law and politics. My analysis of ‘Fatti e Notizie’ shows that welfare news initially occupied about 80 per cent of coverage, but its share declined after the mid-1950s, falling to around 50 per cent by the late 1960s.

The welfare articles indicate that the type of communication depended on subject matter. Thus, health care, news on colleagues, sport and culture were mainly ‘instructive’, reporting information and keeping up to date with events. ‘Official’ communications on subjects such as CEO reports and financial statements, utilised ‘top to bottom’ articles. Cooperation, often reinforced with propaganda language, was promoted for accident prevention and workplace safety. Moreover, this kind of communication was applied to ‘bottom to top’ messages, such as an ‘ideas box’ in which workers presented their suggestions to improve production processes or safety.

My analysis shows that the communication model implemented by Pirelli moved from capitulation (where the managerial view prevails) in the 1950s to trivialisation (dealing only with ‘neutral’ topics) from the 1960s.

 

 

Ilaria Suffia: ilaria.suffia@unicatt.it

The Great Indian Earthquake: colonialism, politics and nationalism in 1934

by Tirthankar Ghosh (Department of History, Kazi Nazrul University, Asansol, India)

This blog is part of our EHS Annual Conference 2020 Blog Series.

 

Gandhi in Bihar after the 1934 Nepal–Bihar earthquake. Available at Wikipedia.

The Great Indian earthquake of 1934 gave new life to nationalist politics in India. The colonial state too had to devise a new tool to deal with the devastation caused by the disaster. But the post-disaster settlements became a site of contestation between government and non-governmental agencies.

In this earthquake, thousands of lives were lost, houses were destroyed, crops and agricultural fields were devastated, towns and villages were ruined, bridges and railway tracks were warped, and drainage and water sources were damaged across a vast area of Bihar.

The multi-layered relief works, which included official and governmental measures, the involvement of the organised party leadership and political workers, and voluntary private donations and contributions from several non-political and charitable organisations, had to accommodate several contradictory forces and elements.

Although it is sometimes argued that the main objective of these relief works was to gain ‘political capital’ and ‘goodwill’, the mobilisation of funds, sympathy and fellow feeling should not be underestimated. Thus, a whole range of new nationalist politics emerged from the ruins of the disaster, mobilising a great amount of popular engagement, political energy and public subscriptions. The colonial state had to release prominent political leaders who could contribute massively to the relief operations.

Now the question is: was there any contestation or competition between the government and non-governmental agencies in the sphere of relief and reconstruction? Or did the disaster temporarily redefine the relationship between the state and subjects during the period of anti-colonial movement?

While the government had to embark on relief operations without having a proper idea about the depth of sufferings of the people, the political organisations, charged with sympathy and nationalism, performed the great task with more efficient organisational skills and dedication.

This time, India witnessed its largest political involvement in a non-political agenda to date, in which public involvement and support not only compensated for the administrative deficit but also expressed a shared sense of victimhood. Non-political and non-governmental organisations, such as the Ramakrishna Mission and the Marwari Relief Society, also played a leading role in the relief operations.

The 1934 earthquake mobilised massive popular sentiment, much as the Bhuj earthquake of 2001 in India would later do. In the long run, the disaster prompted the state to introduce the concept of public safety, hitherto unknown in India, along with a whole new set of earthquake-resistant building codes and modern urban planning using the latest technologies.

How JP Morgan Picked Winners and Losers in the Panic of 1907: The Importance of Individuals over Institutions

by Jon Moen (University of Mississippi) & Mary Rodgers (SUNY, Oswego).

This blog is part of our EHS 2020 Annual Conference Blog Series.

 

A cartoon on the cover of Puck Magazine, from 1910, titled: ‘The Central Bank – Why should Uncle Sam establish one, when Uncle Pierpont is already on the job?’. Available at Wikimedia Commons.

 

We study J. P. Morgan’s decision making during the Panic of 1907 and find insights for understanding the outcomes of current financial crises.  Morgan relied as much on his personal experience as on formal institutions like the New York Clearing House when deciding how to combat the Panic. Our main conclusion is that lenders may rely on their past experience during a crisis, rather than on institutional and legal arrangements, in formulating a response to a financial crisis. The existence of sophisticated and powerful institutions like the Bank of England or the Federal Reserve System may not guarantee optimal policy responses if leaders make their decisions on the basis of personal experience rather than well-established guidelines.  This will result in decisions yielding sub-par outcomes for society compared with those that would have been made had formal procedures and data-based analysis been followed.

Morgan’s influence in arresting the Panic of 1907 is widely acknowledged. In the absence of a formal lender of last resort in the United States, he personally determined which financial institutions to save and which to let fail in New York. Morgan had two sources of information about the distressed firms: (1) analysis done by six committees of financial experts he assigned to estimate the firms’ solvency, and (2) decades of personal experience working with those same institutions and their leaders in his investment banking underwriting syndicates. Morgan’s decisions to provide or withhold aid to the teetering institutions appear to track more closely with his prior syndicate experience with each banker than with the recommendations of the committees’ analysis of available data. Crucially, he chose to let the Knickerbocker Trust fail despite one committee’s estimate that it was solvent and another’s admission that it had too little time to make a strong recommendation. Morgan had had a very bad business experience with the Knickerbocker and its president, Charles Barney, but positive experiences with all the other firms requesting aid. Had the Knickerbocker been aided, the panic might have been avoided altogether.

The lesson we draw for present-day policy is that the individuals responsible for crisis resolution will bring to the table policies based on personal experience that will influence the crisis resolution in ways that may not have been expected a priori. Their policies might not be consistent with the general well-being of the financial markets involved, as may have been the case with Morgan letting the Knickerbocker fail.  A recent example that echoes Morgan’s experience in 1907 can be seen in the leadership of Ben Bernanke, Timothy Geithner and Henry Paulson during the financial crisis of 2008.  They had a formal lender of last resort, the Federal Reserve System, to guide them in responding to the crisis.  While they may have had the well-being of financial markets more in the forefront of their decision making from the start, controversy still surrounds the failure of Lehman Brothers and the Federal Reserve’s refusal to provide it with a lifeline.  The Federal Reserve could have provided aid, and this reveals that the individuals making the decisions, and not the mere existence of a lender of last resort institution and the analysis such an institution can muster, can greatly affect the course of a financial crisis.  Reliance on personal experience at the expense of institutional arrangements is clearly not limited to responses to financial crises; the coronavirus epidemic is one example worth examining within this framework.

 


Jon Moen – jmoen@olemiss.edu

Early-life disease exposure and occupational status

by Martin Saavedra (Oberlin College and Conservatory)

This blog is part H of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. This blog is based on the article ‘Early-life disease exposure and occupational status: The impact of yellow fever during the 19th century’, in Explorations in Economic History, 64 (2017): 62-81. https://doi.org/10.1016/j.eeh.2017.01.003   

 

A girl suffering from yellow fever. Watercolour. Available at Wellcome Images.

Like epidemics, shocks to public health have the potential to affect human capital accumulation. A literature in health economics known as the ‘fetal origins hypothesis’ has examined how in utero exposure to infectious disease affects labor market outcomes. Individuals may be more sensitive to health shocks during the developmental stage of life than during later stages of childhood. For good reason, much of this literature focuses on the 1918 influenza pandemic, which was a huge shock to mortality and one of the few events that is clearly visible in life expectancy trends in the United States. However, there are limitations to looking at the 1918 influenza pandemic because it coincided with the First World War. Another complication in this literature is that cities with outbreaks of infectious disease often engaged in many forms of social distancing by closing schools and businesses. This is true for the 1918 influenza pandemic, but also for other diseases. For example, many schools were closed during the polio epidemic of 1916.

So, how can we estimate the long-run effects of infectious disease when cities simultaneously respond to outbreaks? One possibility is to look at a disease that differentially affected some groups within the same city, such as yellow fever during the nineteenth century. Yellow fever is a viral infection that spreads from the Aedes aegypti mosquito and is still endemic in parts of Africa and South America.  The disease kills roughly 50,000 people per year, even though a vaccine has existed for decades. Symptoms include fever, muscle pain, chills, and jaundice, from which the disease derives its name.

During the eighteenth and nineteenth centuries, yellow fever plagued American cities, particularly port cities that traded with the Caribbean islands. In 1793, over 5,000 Philadelphians likely died of yellow fever. This would be a devastating number in any city, even by today’s standards, but it is even more so considering that in 1790 Philadelphia had a population of less than 29,000.

By the mid-nineteenth century, Southern port cities grew, and yellow fever stopped occurring in cities as far north as Philadelphia. The graph below displays the number of yellow fever fatalities in four southern port cities — New Orleans, LA; Mobile, AL; Charleston, SC; and Norfolk, VA — during the nineteenth century. Yellow fever was sporadic, devastating a city in one year and often leaving it untouched the next. For example, yellow fever killed nearly 8,000 New Orleanians in 1853, and over 2,000 in both 1854 and 1855. Over the next two years, yellow fever killed fewer than 200 New Orleanians per year; then it came back, killing over 3,500 in 1858. Norfolk, VA was struck only once, in 1855. Since yellow fever never struck Norfolk during milder years, the population lacked immunity and approximately 10 percent of the city died in 1855. Charleston and Mobile show similarly sporadic patterns. Likely due to the Union’s naval blockade, yellow fever did not visit any American port cities in large numbers during the Civil War.

 

Figure: Yellow fever fatalities in New Orleans, Mobile, Charleston and Norfolk during the nineteenth century.
Source: As per original article.

 

Immigrants were particularly prone to yellow fever because they often came from European countries rarely visited by yellow fever. Native New Orleanians, however, typically caught yellow fever during a mild year as children and were then immune to the disease for the rest of their lives. For this reason, yellow fever earned the name the “stranger’s disease.”

Data from the full count of the 1880 census show that yellow fever fatality rates during an individual’s year of birth negatively affected adult occupational status, but only for individuals with foreign-born mothers. Those with US-born mothers were relatively unaffected by the disease. There are also effects for those who were exposed to yellow fever one or two years after birth, but there are no effects, not even for those with immigrant mothers, for exposure three or four years after birth. These results suggest that early-life exposure to infectious disease, and not just city-wide responses to disease, influences human capital development.
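As a rough illustration of this kind of cohort-exposure comparison, the sketch below interacts yellow fever fatality rates in the birth year with an indicator for having a foreign-born mother, with birth-city and birth-year fixed effects. The data file, variable names and specification are hypothetical simplifications, not the article’s actual model.

```python
# Sketch of an early-life exposure regression (hypothetical file and variable names).
import pandas as pd
import statsmodels.formula.api as smf

# One row per adult in the 1880 census: occupational score, birth city and year,
# yellow fever fatality rate in the birth city during the birth year, and a 0/1
# indicator for having a foreign-born mother.
df = pd.read_csv("census_1880_linked.csv")

# The interaction asks whether exposure at birth mattered more for children of
# immigrant mothers, who lacked acquired immunity; city and cohort fixed effects
# absorb place- and year-specific differences.
model = smf.ols(
    "occ_score ~ fever_rate_birth_year * foreign_born_mother"
    " + C(birth_city) + C(birth_year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["birth_city"]})

print(model.params[["fever_rate_birth_year",
                    "fever_rate_birth_year:foreign_born_mother"]])
```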

 


 

Martin Saavedra

Martin.Saavedra@oberlin.edu

 

Unequal access to food during the nutritional transition: evidence from Mediterranean Spain

by Francisco J. Medina-Albaladejo & Salvador Calatayud (Universitat de València).

This article is forthcoming in the Economic History Review.

 

Figure 1 – General pathology ward, Hospital General de Valencia (Spain), 1949. Source: Consejo General de Colegios Médicos de España. Banco de imágenes de la medicina española. Real Academia Nacional de Medicina de España. Available here.

Over the last century, European historiography has debated whether industrialisation brought about an improvement in working class living standards.  Multiple demographic, economic, anthropometric and wellbeing indicators have been examined in this regard, but it was Eric Hobsbawm (1957) who, in the late 1950s, incorporated food consumption patterns into the analysis.

Between the mid-19th and the first half of the 20th century, the diet of European populations underwent radical changes. Caloric intake increased significantly, and cereals were to a large extent replaced by animal proteins and fat, resulting from a substantial increase in meat, milk, eggs and fish consumption. This transformation was referred to by Popkin (1993) as the ‘Nutritional transition’.

These dietary changes were driven, inter alia, by the evolution of income levels, which raises the possibility that significant inequalities between different social groups ensued. Dietary inequalities between different social groups are a key component in the analysis of inequality and living standards; they directly affect mortality, life expectancy, and morbidity. However, this hypothesis remains unproven, as historians are still searching for adequate sources and methods with which to measure the effects of dietary changes on living standards.

This study contributes to the debate by analysing a relatively untapped source: hospital diets. We have analysed the diet of psychiatric patients and members of staff in the main hospital of the city of Valencia (Spain) between 1852 and 1923. The diet of patients depended on their social status and the amounts they paid for their upkeep. ‘Poor psychiatric patients’ and abandoned children, who paid no fee, were fed according to hospital regulations, whereas ‘well-off psychiatric patients’ paid a daily fee in exchange for a richer and more varied diet. There were also differences among members of staff, with nuns receiving a richer diet than other personnel (launderers, nurses and wet-nurses). We think that our source broadly reflects dietary patterns of the Spanish population and the effect of income levels thereon.

Figure 2 illustrates some of these differences in terms of animal-based caloric intake in each of the groups under study. Three population groups can be clearly distinguished: ‘well-off psychiatric patients’ and nuns, whose diet already presented some of the features of the nutritional transition by the mid-19th century, including fewer cereals and a meat-rich diet, as well as the inclusion of new products such as olive oil, milk, eggs and fish; hospital staff, whose diet was rich in calories, to compensate for their demanding jobs, but still traditional in structure, being largely based on cereals, legumes, meat and wine; and, finally, ‘poor psychiatric patients’ and abandoned children, whose diet was poorer and which, by the 1920s, had barely joined the trends that characterised the nutritional transition.

 

Figure 2. Percentage of animal calories in the daily average diet by population groups in the Hospital General de Valencia, 1852-1923 (%). Source: as per original article.

 

In conclusion, the nutritional transition was not a homogenous process, affecting all diets at the time or at the same pace. On the contrary, it was a process marked by social difference, and the progress of dietary changes was largely determined by social factors. By the mid-19th century, the diet structure of well-to-do social groups resembled diets that were more characteristic of the 1930s, while less favoured and intermediate social groups had to wait until the early 20th century before they could incorporate new foodstuffs into their diet. As this sequence clearly indicates, less favoured social groups always lagged behind.

 

References

Medina-Albaladejo, F. J. and Calatayud, S., “Unequal access to food during the nutritional transition: evidence from Mediterranean Spain”, Economic History Review, (forthcoming).

Hobsbawm, E. J., “The British Standard of Living, 1790-1850”, Economic History Review, 2nd ser., X (1957), pp. 46-68.

Popkin B. M., “Nutritional Patterns and Transitions”, Population and Development Review, 19, 1 (1993), pp. 138-157.

Fascistville: Mussolini’s new towns and the persistence of neo-fascism

by Mario F. Carillo (CSEF and University of Naples Federico II)

This blog is part of our EHS 2020 Annual Conference Blog Series.


 

March on Rome, 1922. Available at Wikimedia Commons.

Differences in political attitudes are prevalent in our society. People with the same occupation, age, gender, marital status, city of residence and similar background may have very different, and sometimes even opposite, political views. In a time in which the electorate is called to make important decisions with long-term consequences, understanding the origins of political attitudes, and then voting choices, is key.

My research documents that current differences in political attitudes have historical roots. Public expenditure allocations made almost a century ago help to explain differences in political attitudes today.

During the Italian fascist regime (1922-43), Mussolini undertook enormous investments in infrastructure by building cities from scratch. Fascistville (Littoria) and Mussolinia are two of the 147 new towns (Città di Fondazione) built by the regime on the Italian peninsula.


Towers shaped like the emblem of fascism (Torri Littorie) and majestic buildings as headquarters of the fascist party (Case del Fascio) dominated the centres of the new towns. While they were modern centres, their layout was inspired by the cities of the Roman Empire.

Intended to stimulate a process of identification of the masses based on the collective historical memory of the Roman Empire, the new towns were designed to instil the idea that fascism was building on, and improving, the imperial Roman past.

My study presents three main findings. First, the foundation of the new towns enhanced local electoral support for the fascist party, facilitating the emergence of the fascist regime.

Second, such an effect persisted through democratisation, favouring the emergence and persistence of the strongest neo-fascist party in the advanced industrial countries — the Movimento Sociale Italiano (MSI).

Finally, survey respondents near the fascist new towns are more likely today to have nationalistic views, prefer a stronger leader in politics and exhibit sympathy for the fascists. Direct experience of life under the regime strengthens this link, which appears to be transmitted across generations inside the family.


Thus, the fascist new towns explain differences in current political and cultural attitudes that can be traced back to the fascist ideology.

These findings suggest that public spending may have long-lasting effects on political and cultural attitudes, which persist across major institutional changes and affect the functioning of future institutions. This is a result that may inspire future research to study whether policy interventions may be effective in promoting the adoption of growth-enhancing cultural traits.

The Great Depression as a saving glut

by Victor Degorce (EHESS & European Business School) & Eric Monnet (EHESS, Paris School of economics & CEPR).

This blog is part of our EHS 2020 Annual Conference Blog Series.


 

Crowd at New York’s American Union Bank during a bank run early in the Great Depression. Available at Wikimedia Commons.

Ben Bernanke, former Chair of the Federal Reserve, the central bank of the United States, once said ‘Understanding the Great Depression is the Holy Grail of macroeconomics’. Although much has been written on this topic, giving rise to much of modern macroeconomics and monetary theory, there remain several areas of unresolved controversy. In particular, the mechanisms by which banking distress led to a fall in economic activity are still disputed.

Our work provides a new explanation based on a comparison of the financial systems of 20 countries in the 1930s: banking panics led to a transfer of bank deposits to non-bank institutions that collected savings but did not lend (or lent less) to the economy. As a result, intermediation between savings and investment was disrupted, and the economy suffered from an excess of unproductive savings, despite a negative wealth effect caused by creditor losses and falling real wages.

This conclusion speaks directly to the current debate on excess savings after the Great Recession (from 2008 to today), the rise in the price of certain assets (housing, public debt) and the lack of investment.

An essential – but often overlooked – feature of the banking systems before the Second World War was the competition between unregulated commercial banks and savings institutions. The latter took very different forms in different countries, but in most cases they were backed by governments and subject to regulation that limited the composition of their assets.

Although the United States is the country where banking panics were most studied, it was an exception. US banks had been regulated since the nineteenth century and alternative forms of savings (postal savings in this case) were limited in scope.

By contrast, in Japan and most European countries, a large proportion of total savings was deposited in regulated specialised institutions. Outside the United States, central banks also accepted private deposits and competed with commercial banks in this area. There were therefore many alternatives for depositors.

Banks were generally preferred because they could offer additional payment services and loans. But in times of crisis, regulated savings institutions were a safe haven. The downside of this security was that they were obliged – often by law – to take little risk, investing in cash or government securities. As a result, they could replace banks as deposit-taking institutions, but not as lending institutions.

We substantiate our claim with a new dataset on deposits in commercial banks, different types of savings institutions and central banks in 20 countries. We also study how the macroeconomic effect of excess savings depended on the safety of government debt (since savings institutions mainly bought government securities) and on the exchange rate regime (since gold standard countries were much less likely to mobilise excess savings to finance countercyclical policies).

Our argument is not inconsistent with earlier mechanisms, such as the monetary and non-monetary effects of bank failures documented, respectively, by Milton Friedman and Anna Schwartz and by Ben Bernanke, or the paradox of thrift explained by John Maynard Keynes.

But our argument is based on a separate mechanism that can only be taken into account when the dual nature of the financial system (unregulated deposit-taking institutions versus regulated institutions) is recognised. It raises important concerns for today about the danger of competition between a highly regulated banking system and a growing shadow banking system.