Taxation and Wealth Inequality in the German Territories of the Holy Roman Empire 1350-1800

by Victoria Gierok (Nuffield College, Oxford)

This blog is part of our EHS Annual Conference 2020 Blog Series.

 

Nuremberg chronicles – Kingdoms of the Holy Roman Empire of the German Nation. Available at Wikimedia Commons.

Since the French economist Thomas Piketty published Capital in the Twenty-First Century in 2014, it has become clear that we need to understand the development of wealth and income inequality in the long run. While Piketty traces inequality over the last 200 years, other economic historians have recently begun to explore inequality in the more distant past,[1] and they report strikingly similar patterns of rising economic inequality from as early as 1450.

However, one major European region has been largely absent from the debate: Central Europe — the German cities and territories of the Holy Roman Empire. How did wealth inequality develop there? And what role did taxation play?

The Holy Roman Empire was vast, but its borders fluctuated greatly over time. As a first step to facilitate analysis, I focus on cities in the German-speaking regions. Urban wealth taxation developed early in great cities such as Cologne and Lübeck, and by the fourteenth century wealth taxes were common in many cities. The resulting tax registers are an excellent source for glimpsing wealth inequality (Caption 1).

 

Caption 1. Excerpt from the wealth tax registers of Lübeck (1774-84).

Gierok1
Source: Archiv der Hansestadt Lübeck. Archival reference number: 03.04-05 01.02 Johannis-Quartier: 035 Schoßbuch Johannis-Quartier 1774-1784

 

Three questions need to be clarified when using wealth tax registers as sources:

  • Who was being taxed?
  • What was being taxed?
  • How were they taxed?

 

The first question was also crucial to contemporaries, because the nobility and clergy adamantly defended the privileges that exempted them from taxation. It was citizens and city-dwellers without citizenship who mainly bore the brunt of wealth taxation.

 

Figure 1. Taxpayers in a sample of 17 cities in the German Territories of the Holy Roman Empire.

Gierok2
Note: In all cities, citizens were subject to wealth taxation, whereas city-dwellers were fully taxed in only about half of them.
Source: Data derived from multiple sources. For further information, please contact the author.

 

The cities’ tax codes reveal a level of sophistication that might be surprising. Not only did they tax real estate, cash and inventories, but many of them also taxed financial assets such as loans and perpetuities (Figure 2).

 

Figure 2. Taxable wealth in 19 cities in the German Territories of the Holy Roman Empire.

Gierok3
Note: In all cities, real estate was taxed, whereas financial assets were taxed only in 13 of them.
Source: Data derived from multiple sources. For further information, please contact the author.

 

Wealth taxation was always proportional. Many cities established wealth thresholds below which citizens were exempt from taxation, and basic provisions such as grain, clothing and armour were also often exempt. Taxpayers were asked to estimate their own wealth and to pay the correct amount of taxes to the city’s tax collectors. To prevent fraud, taxpayers had to swear under oath (Caption 2).
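As a stylised illustration of how such a levy worked, the short Python sketch below computes the tax owed under a proportional rate with an exemption threshold. The rate and threshold are hypothetical, chosen purely for illustration rather than taken from any particular city's tax code.

```python
# A stylised sketch of a proportional wealth tax with an exemption threshold.
# The rate (0.5 per cent) and threshold (100 guilders) are hypothetical values,
# not drawn from any actual tax code; rates and thresholds varied by city and year.

TAX_RATE = 0.005           # proportional rate on self-declared taxable wealth
EXEMPTION_THRESHOLD = 100  # taxpayers below this level of wealth owe nothing

def wealth_tax_due(declared_wealth):
    """Tax owed by a taxpayer who self-declares their taxable wealth."""
    if declared_wealth < EXEMPTION_THRESHOLD:
        return 0.0
    return declared_wealth * TAX_RATE

# Example: a taxpayer declaring 400 guilders owes 2 guilders.
print(wealth_tax_due(400))
```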

 

Caption 2. A scene from the Volkacher Salbuch (1500-1504) showing the mayor on the left, two tax collectors at a table, and a taxpayer delivering his tax payment while swearing his oath.

Gierok4
Source: Image: Pausch, Alfons & Jutta Pausch, Kleine Weltgeschichte der Steuerobrigkeit, 1989, Köln: Otto Schmidt KG, p.75

 

Taking the above limitations seriously, one can use tax registers to trace long-run wealth inequality in cities across the Holy Roman Empire (Figure 3).

 

Figure 3. Gini Coefficients showing Wealth Inequality in the Urban Middle Ages.

Gierok5
Source: Alfani, G., Gierok, V., and Schaff, F., ‘Economic Inequality in Preindustrial Germany, ca. 1300-1850’, Stone Center Working Paper Series, no. 03, February 2020.
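A note on method: Gini coefficients of the kind shown in Figure 3 can be computed directly from the list of taxable wealth values in a register. The Python sketch below shows the standard calculation; the wealth figures in it are purely illustrative and are not taken from the registers themselves.

```python
# A minimal sketch of computing a Gini coefficient from a list of taxable
# wealth values, e.g. as recorded in an urban tax register.
# The sample values below are purely illustrative, not actual register data.

def gini(wealth):
    """Gini coefficient of a list of non-negative wealth values."""
    values = sorted(wealth)
    n = len(values)
    total = sum(values)
    if n == 0 or total == 0:
        return 0.0
    # Standard rank-based formula: G = 2 * sum_i(i * x_i) / (n * sum_i(x_i)) - (n + 1) / n
    weighted_sum = sum((i + 1) * x for i, x in enumerate(values))
    return 2 * weighted_sum / (n * total) - (n + 1) / n

if __name__ == "__main__":
    # Hypothetical taxable wealth of ten households (in guilders)
    sample = [0, 5, 10, 20, 40, 60, 100, 150, 300, 1000]
    print(f"Gini coefficient: {gini(sample):.2f}")
```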

 

Two main trends emerge. First, most cities experienced declining wealth inequality in the aftermath of the Black Death around 1350; the only exception was Rostock, an active trading city in the North. Second, from around 1500, inequality rose in most cities until the onset of the Thirty Years' War (1618-1648). This war, during which large armies marauded through the German lands bringing plague and other diseases with them, together with the shift in trade from the Mediterranean to the Atlantic, may explain the decline seen in this period. This sets the German lands apart from other European regions, such as Italy and the Netherlands, where inequality continued to rise throughout the early modern period.

 

Notes

[1] Milanovic, B., Lindert, P. H., and Williamson, J., ‘Pre-Industrial Inequality’, Economic Journal 121, no. 551 (2011): 255-272; Alfani, G., ‘Economic Inequality in Northwestern Italy: A Long-Term View’, Journal of Economic History 75, no. 4 (2015): 1058-1096; Alfani, G., and Ammannati, F., ‘Long-term trends in economic inequality: the case of the Florentine state, c.1300-1800’, Economic History Review 70, no. 4 (2017): 1072-1102; Ryckbosch, W., ‘Economic Inequality and Growth before the Industrial Revolution: The Case of the Low Countries’, European Review of Economic History 20, no. 1 (2016): 1-22; Reis, J., ‘Deviant Behavior? Inequality in Portugal 1565-1770’, Cliometrica 11, no. 3 (2017): 297-319; Malinowski, M., and van Zanden, J. L., ‘Income and Its Distribution in Preindustrial Poland’, Cliometrica 11, no. 3 (2017): 375-404.

 


 

Victoria Gierok: victoria.gierok@nuffield.ox.ac.uk


Corporate Social Responsibility for workers: Pirelli (1950-1980)

by Ilaria Suffia (Università Cattolica, Milan)

This blog is part of our EHS Annual Conference 2020 Blog Series.

 

 

Suffia1
Pirelli headquarters in Milan’s Bicocca district. Available at Wikimedia Commons.

Corporate social responsibility (CSR) in relation to the workforce has generated extensive academic and public debate. In this paper, I evaluate Pirelli's approach to CSR by exploring its archives over the period 1950 to 1967.

Pirelli, founded in Milan by Giovanni Battista Pirelli in 1872, introduced industrial welfare for its employees and their families from its inception. In 1950, it deepened its relationship with them by publishing ‘Fatti e Notizie’ [Events and News], the company's in-house newspaper. The journal was intended to share information with workers at all levels and, above all, to strengthen relationships within the ‘Pirelli family’.

Pirelli's industrial welfare began in the 1870s and, by the end of the decade, a mutual aid fund and institutions for its employees' families (a kindergarten and a school) had been established. Over the next 20 years, the company set the basis of its welfare policy, which encompassed three main features: ‘workplace’ protections, including accident and maternity assistance; ‘family assistance’, including (in addition to the kindergarten and school) seasonal care for children; and, finally, a commitment to the professional training of its workers.

In the 1920s, the company's welfare provision expanded. In 1926, Pirelli created a health care service for the whole family and, in the same period, sport, culture and ‘free time’ activities became the main pillars of its CSR. Pirelli also provided houses for its workers, best exemplified by the ‘Pirelli Village’ of 1921. After 1945, Pirelli continued its welfare policy: the company started a new programme of workers' housing construction (based on national provisions), expanded its Village, and founded a professional training institute dedicated to Piero Pirelli. The establishment in 1950 of the company journal, ‘Fatti e Notizie’, can be considered part of Pirelli's welfare activities.

‘Fatti e Notizie’ was designed to improve internal communication about the company, especially towards Pirelli's workers. Subsequently, Pirelli also introduced articles on current news and special pieces on economics, law and politics. My analysis of ‘Fatti e Notizie’ shows that welfare news initially occupied about 80 per cent of coverage, but after the mid-1950s this share declined, falling to 50 per cent by the late 1960s.

The welfare articles indicate that the type of communication depended on the subject matter. Health care, news on colleagues, sport and culture were mainly ‘instructive’, reporting information and keeping readers up to date with events. ‘Official’ communications on subjects such as CEO reports and financial statements used ‘top to bottom’ articles. Cooperation, often reinforced with propaganda language, was promoted for accident prevention and workplace safety. This kind of communication was also applied to ‘bottom to top’ messages, such as an ‘ideas box’ in which workers presented their suggestions for improving production processes or safety.

My analysis shows that the communication model implemented by Pirelli moved from capitulation (where the managerial view prevails) in the 1950s to trivialisation (dealing only with ‘neutral’ topics) from the 1960s.

 

 

Ilaria Suffia: ilaria.suffia@unicatt.it

The Great Indian Earthquake: colonialism, politics and nationalism in 1934

by Tirthankar Ghosh (Department of History, Kazi Nazrul University, Asansol, India)

This blog is part of our EHS Annual Conference 2020 Blog Series.

 

Ghosh1
Gandhi in Bihar after the 1934 Nepal–Bihar earthquake. Available at Wikipedia.

The Great Indian earthquake of 1934 gave new life to nationalist politics in India. The colonial state, too, had to devise new tools to deal with the devastation caused by the disaster. But the post-disaster settlements became a site of contestation between governmental and non-governmental agencies.

In this earthquake, thousands of lives were lost, houses were destroyed, crops and agricultural fields were devastated, towns and villages were ruined, bridges and railway tracks were warped, and drainage and water sources were disrupted across a vast area of Bihar.

The multi-layered relief works, which included official and governmental measures, the involvement of organised party leadership and political workers, and voluntary private donations and contributions from several non-political and charitable organisations, had to accommodate several contradictory forces and elements.

Although it is sometimes argued that the main objective of these relief works was to gain ‘political capital’ and ‘goodwill’, the mobilisation of funds, sympathy and fellow feeling should not be underestimated. A whole range of new nationalist politics emerged from the ruins of the disaster, mobilising a great amount of popular engagement, political energy and public subscriptions. The colonial state had to release prominent political leaders, who went on to contribute greatly to the relief operations.

Now the question is: was there any contestation or competition between the government and non-governmental agencies in the sphere of relief and reconstruction? Or did the disaster temporarily redefine the relationship between the state and subjects during the period of anti-colonial movement?

While the government had to embark on relief operations without a proper sense of the depth of people's suffering, the political organisations, charged with sympathy and nationalism, performed the task with greater organisational skill and dedication.

This time, India witnessed what was, to date, its largest political involvement in a non-political agenda, in which public involvement and support not only compensated for the administrative deficit but also expressed a shared sense of victimhood. Non-political and non-governmental organisations, such as the Ramakrishna Mission and the Marwari Relief Society, also played a leading role in the relief operations.

The 1934 earthquake drew on massive popular sentiment, much as the Bhuj earthquake of 2001 would later do. In the long run, the disaster prompted the state to introduce the concept of public safety, hitherto unknown in India, along with a whole new set of earthquake-resistant building codes and modern urban planning using the latest technologies.

How JP Morgan Picked Winners and Losers in the Panic of 1907: The Importance of Individuals over Institutions

by Jon Moen (University of Mississippi) & Mary Rodgers (SUNY, Oswego).

This blog is part of our EHS 2020 Annual Conference Blog Series.

 

Moen 1
A cartoon on the cover of Puck Magazine, from 1910, titled: ‘The Central Bank – Why should Uncle Sam establish one, when Uncle Pierpont is already on the job?’. Available at Wikimedia Commons.

 

We study J. P. Morgan's decision-making during the Panic of 1907 and find insights for understanding the outcomes of current financial crises. Morgan relied as much on his personal experience as on formal institutions like the New York Clearing House when deciding how to combat the Panic. Our main conclusion is that, in formulating a response to a financial crisis, lenders may rely on their past experience rather than on institutional and legal arrangements. The existence of sophisticated and powerful institutions like the Bank of England or the Federal Reserve System may not guarantee optimal policy responses if leaders make their decisions on the basis of personal experience rather than well-established guidelines. This can result in decisions yielding sub-par outcomes for society compared to those that would follow from formal procedures and data-based analysis.

Morgan's influence in arresting the Panic of 1907 is widely acknowledged. In the absence of a formal lender of last resort in the United States, he personally determined which financial institutions to save and which to let fail in New York. Morgan had two sources of information about the distressed firms: (1) analysis done by six committees of financial experts he assigned to estimate the firms' solvency, and (2) decades of personal experience working with those same institutions and their leaders in his investment banking underwriting syndicates. Morgan's decisions to provide or withhold aid to the teetering institutions appear to track more closely with his prior syndicate experience with each banker than with the recommendations of the committees' analysis of available data. Crucially, he chose to let the Knickerbocker Trust fail despite one committee's estimate that it was solvent and another's that it had had too little time to make a strong recommendation. Morgan had had a very bad business experience with the Knickerbocker and its president, Charles Barney, but positive experiences with all the other firms requesting aid. Had the Knickerbocker been aided, the panic might have been avoided altogether.

The lesson we draw for present-day policy is that the individuals responsible for crisis resolution bring to the table policies based on personal experience, which will influence the resolution in ways that may not have been expected a priori. Their policies might not be consistent with the general well-being of the financial markets involved, as may have been the case with Morgan letting the Knickerbocker fail. A recent example that echoes Morgan's experience in 1907 can be seen in the leadership of Ben Bernanke, Timothy Geithner and Henry Paulson during the financial crisis of 2008. They had a formal lender of last resort, the Federal Reserve System, to guide them in responding to the crisis. While they may have had the well-being of financial markets more in the forefront of their decision-making from the start, controversy still surrounds the failure of Lehman Brothers and the Federal Reserve's decision not to provide it with a lifeline. The Federal Reserve could have provided aid, and this reveals that the individuals making the decisions, and not the mere existence of a lender-of-last-resort institution and the analysis such an institution can muster, can greatly affect the course of a financial crisis. Reliance on personal experience at the expense of institutional arrangements is clearly not limited to responses to financial crises. The coronavirus epidemic is one such example worth examining with this framework.

 


Jon Moen – jmoen@olemiss.edu

Early-life disease exposure and occupational status

by Martin Saavedra (Oberlin College and Conservatory)

This blog is part H of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. This blog is based on the article ‘Early-life disease exposure and occupational status: The impact of yellow fever during the 19th century’, in Explorations in Economic History, 64 (2017): 62-81. https://doi.org/10.1016/j.eeh.2017.01.003   

 

Saavedra1
A girl suffering from yellow fever. Watercolour. Available at Wellcome Images.

Epidemics, like other shocks to public health, have the potential to affect human capital accumulation. A literature in health economics known as the ‘fetal origins hypothesis’ has examined how in utero exposure to infectious disease affects labor market outcomes. Individuals may be more sensitive to health shocks during the developmental stage of life than during later stages of childhood. For good reason, much of this literature focuses on the 1918 influenza pandemic, which was a huge shock to mortality and one of the few events visible in life expectancy trends in the United States. However, there are limitations to looking at the 1918 influenza pandemic because it coincided with the First World War. Another complication in this literature is that cities with outbreaks of infectious disease often engaged in many forms of social distancing by closing schools and businesses. This is true for the 1918 influenza pandemic, but also for other diseases: for example, many schools were closed during the polio epidemic of 1916.

So, how can we estimate the long-run effects of infectious disease when cities simultaneously respond to outbreaks? One possibility is to look at a disease that differentially affected some groups within the same city, such as yellow fever during the nineteenth century. Yellow fever is a viral infection spread by the Aedes aegypti mosquito and is still endemic in parts of Africa and South America. The disease kills roughly 50,000 people per year, even though a vaccine has existed for decades. Symptoms include fever, muscle pain, chills, and jaundice, from which the disease derives its name.

During the eighteenth and nineteenth centuries, yellow fever plagued American cities, particularly port cities that traded with the Caribbean islands. In 1793, over 5,000 Philadelphians likely died of yellow fever. This would be a devastating number in any city, even by today's standards, but it is even more so considering that in 1790 Philadelphia had a population of fewer than 29,000.

By the mid-nineteenth century, Southern port cities had grown, and yellow fever stopped occurring in cities as far north as Philadelphia. The graph below displays the number of yellow fever fatalities in four southern port cities (New Orleans, LA; Mobile, AL; Charleston, SC; and Norfolk, VA) during the nineteenth century. Yellow fever was sporadic, devastating a city in one year and often leaving it untouched in the next. For example, yellow fever killed nearly 8,000 New Orleanians in 1853, and over 2,000 in both 1854 and 1855. Over the next two years, yellow fever killed fewer than 200 New Orleanians per year; then it came back, killing over 3,500 in 1858. Norfolk, VA, was struck only once, in 1855. Since yellow fever had never struck Norfolk during milder years, the population lacked immunity, and approximately 10 percent of the city died in 1855. Charleston and Mobile show similar sporadic patterns. Likely due to the Union's naval blockade, yellow fever did not visit any American port cities in large numbers during the Civil War.

 

Saavedra2
Source: As per original article.

 

Immigrants were particularly prone to yellow fever because they often came from European countries rarely visited by yellow fever. Native New Orleanians, however, typically caught yellow fever during a mild year as children and were then immune to the disease for the rest of their lives. For this reason, yellow fever earned the name the “stranger’s disease.”

Data from the full count of the 1880 census show that yellow fever fatality rates during an individual's year of birth negatively affected adult occupational status, but only for individuals with foreign-born mothers. Those with US-born mothers were relatively unaffected by the disease. There are also effects for those exposed to yellow fever one or two years after birth, but no effects, not even for those with immigrant mothers, for exposure three or four years after birth. These results suggest that early-life exposure to infectious disease, and not just city-wide responses to disease, influenced human capital development.

 


 

Martin Saavedra

Martin.Saavedra@oberlin.edu

 

Unequal access to food during the nutritional transition: evidence from Mediterranean Spain

by Francisco J. Medina-Albaladejo & Salvador Calatayud (Universitat de València).

This article is forthcoming in the Economic History Review.

 

Medina1
Figure 1 – General pathology ward, Hospital General de Valencia (Spain), 1949. Source: Consejo General de Colegios Médicos de España. Banco de imágenes de la medicina española. Real Academia Nacional de Medicina de España. Available here.

Over the last century, European historiography has debated whether industrialisation brought about an improvement in working-class living standards. Multiple demographic, economic, anthropometric and wellbeing indicators have been examined in this regard, but it was Eric Hobsbawm (1957) who, in the late 1950s, incorporated food consumption patterns into the analysis.

Between the mid-19th century and the first half of the 20th, the diet of European populations underwent radical changes. Caloric intake increased significantly, and cereals were to a large extent replaced by animal proteins and fats, as consumption of meat, milk, eggs and fish increased substantially. This transformation was referred to by Popkin (1993) as the ‘nutritional transition’.

These dietary changes were driven, inter alia, by the evolution of income levels, which raises the possibility that significant inequalities between different social groups ensued. Dietary inequalities between social groups are a key component in the analysis of inequality and living standards; they directly affect mortality, life expectancy, and morbidity. However, this hypothesis remains unproven, as historians are still searching for adequate sources and methods with which to measure the effects of dietary changes on living standards.

This study contributes to the debate by analysing a relatively untapped source: hospital diets. We have analysed the diet of psychiatric patients and members of staff in the main hospital of the city of Valencia (Spain) between 1852 and 1923. The diet of patients depended on their social status and the amounts they paid for their upkeep. ‘Poor psychiatric patients’ and abandoned children, who paid no fee, were fed according to hospital regulations, whereas ‘well-off psychiatric patients’ paid a daily fee in exchange for a richer and more varied diet. There were also differences among members of staff, with nuns receiving a richer diet than other personnel (launderers, nurses and wet-nurses). We think that our source broadly reflects dietary patterns of the Spanish population and the effect of income levels thereon.

Figure 2 illustrates some of these differences in terms of animal-based caloric intake in each of the groups under study. Three population groups can be clearly distinguished: ‘well-off psychiatric patients’ and nuns, whose diet already presented some of the features of the nutritional transition by the mid-19th century, including fewer cereals and a meat-rich diet, as well as new products such as olive oil, milk, eggs and fish; hospital staff, whose diet was rich in calories, to compensate for their demanding jobs, but still traditional in structure, being largely based on cereals, legumes, meat and wine; and, finally, ‘poor psychiatric patients’ and abandoned children, whose diet was poorer and which, by the 1920s, had barely joined the trends that characterised the nutritional transition.

 

Medina2
Figure 2. Percentage of animal calories in the daily average diet by population groups in the Hospital General de Valencia, 1852-1923 (%). Source: as per original article.

 

In conclusion, the nutritional transition was not a homogenous process, affecting all diets at the time or at the same pace. On the contrary, it was a process marked by social difference, and the progress of dietary changes was largely determined by social factors. By the mid-19th century, the diet structure of well-to-do social groups resembled diets that were more characteristic of the 1930s, while less favoured and intermediate social groups had to wait until the early 20th century before they could incorporate new foodstuffs into their diet. As this sequence clearly indicates, less favoured social groups always lagged behind.

 

References

Medina-Albaladejo, F. J. and Calatayud, S., “Unequal access to food during the nutritional transition: evidence from Mediterranean Spain”, Economic History Review, (forthcoming).

Hobsbawm, E. J., “The British Standard of Living, 1790-1850”, Economic History Review, 2nd ser., X (1957), pp. 46-68.

Popkin B. M., “Nutritional Patterns and Transitions”, Population and Development Review, 19, 1 (1993), pp. 138-157.

Fascistville: Mussolini’s new towns and the persistence of neo-fascism

by Mario F. Carillo (CSEF and University of Naples Federico II)

This blog is part of our EHS 2020 Annual Conference Blog Series.


 

Carillo3
March on Rome, 1922. Available at Wikimedia Commons.

Differences in political attitudes are prevalent in our society. People with the same occupation, age, gender, marital status, city of residence and similar background may have very different, and sometimes even opposite, political views. In a time in which the electorate is called to make important decisions with long-term consequences, understanding the origins of political attitudes, and then voting choices, is key.

My research documents that current differences in political attitudes have historical roots: public expenditure allocations made almost a century ago help to explain differences in political attitudes today.

During the Italian fascist regime (1922-43), Mussolini undertook enormous investments in infrastructure by building cities from scratch. Fascistville (Littoria) and Mussolinia are two of the 147 new towns (Città di Fondazione) built by the regime on the Italian peninsula.

Carillo1

Towers shaped like the emblem of fascism (Torri Littorie) and majestic buildings as headquarters of the fascist party (Case del Fascio) dominated the centres of the new towns. While they were modern centres, their layout was inspired by the cities of the Roman Empire.

Intended to stimulate a process of identification of the masses based on the collective historical memory of the Roman Empire, the new towns were designed to instil the idea that fascism was building on, and improving, the imperial Roman past.

My study presents three main findings. First, the foundation of the new towns enhanced local electoral support for the fascist party, facilitating the emergence of the fascist regime.

Second, such an effect persisted through democratisation, favouring the emergence and persistence of the strongest neo-fascist party in the advanced industrial countries — the Movimento Sociale Italiano (MSI).

Finally, survey respondents near the fascist new towns are more likely today to have nationalistic views, prefer a stronger leader in politics and exhibit sympathy for the fascists. Direct experience of life under the regime strengthens this link, which appears to be transmitted across generations inside the family.

Carillo2

Thus, the fascist new towns explain differences in current political and cultural attitudes that can be traced back to the fascist ideology.

These findings suggest that public spending may have long-lasting effects on political and cultural attitudes, which persist across major institutional changes and affect the functioning of future institutions. This is a result that may inspire future research to study whether policy interventions may be effective in promoting the adoption of growth-enhancing cultural traits.

The Great Depression as a saving glut

by Victor Degorce (EHESS & European Business School) & Eric Monnet (EHESS, Paris School of economics & CEPR).

This blog is part of our EHS 2020 Annual Conference Blog Series.


 

GreatDepression
Crowd at New York’s American Union Bank during a bank run early in the Great Depression. Available at Wikimedia Commons.

Ben Bernanke, former Chair of the Federal Reserve, the central bank of the United States, once said ‘Understanding the Great Depression is the Holy Grail of macroeconomics’. Although much has been written on this topic, giving rise to much of modern macroeconomics and monetary theory, there remain several areas of unresolved controversy. In particular, the mechanisms by which banking distress led to a fall in economic activity are still disputed.

Our work provides a new explanation based on a comparison of the financial systems of 20 countries in the 1930s: banking panics led to a transfer of bank deposits to non-bank institutions that collected savings but did not lend (or lent less) to the economy. As a result, intermediation between savings and investment was disrupted, and the economy suffered from an excess of unproductive savings, despite a negative wealth effect caused by creditor losses and falling real wages.

This conclusion speaks directly to the current debate on excess savings after the Great Recession (from 2008 to today), the rise in the price of certain assets (housing, public debt) and the lack of investment.

An essential – but often overlooked – feature of the banking systems before the Second World War was the competition between unregulated commercial banks and savings institutions. The latter took very different forms in different countries, but in most cases they were backed by governments and subject to regulation that limited the composition of their assets.

Although the United States is the country where banking panics were most studied, it was an exception. US banks had been regulated since the nineteenth century and alternative forms of savings (postal savings in this case) were limited in scope.

By contrast, in Japan and most European countries, a large proportion of total savings was deposited in regulated specialised institutions. Outside the United States, central banks also accepted private deposits and competed with commercial banks in this area. There were therefore many alternatives for depositors.

Banks were generally preferred because they could offer additional payment services and loans. But in times of crisis, regulated savings institutions were a safe haven. The downside of this security was that they were obliged – often by law – to take little risk, investing in cash or government securities. As a result, they could replace banks as deposit-taking institutions, but not as lending institutions.

We support our claim with a new dataset on deposits in commercial banks, different types of savings institutions and central banks in 20 countries. We also study how the macroeconomic effect of excess savings depended on the safety of the government (since savings institutions mainly bought government securities) and on the exchange rate regime (since gold standard countries were much less likely to mobilise excess savings to finance countercyclical policies).

Our argument is not inconsistent with earlier mechanisms, such as the monetary and non-monetary effects of bank failures documented, respectively, by Milton Friedman and Anna Schwartz and by Ben Bernanke, or the paradox of thrift explained by John Maynard Keynes.

But our argument is based on a separate mechanism that can only be taken into account when the dual nature of the financial system (unregulated deposit-taking institutions versus regulated institutions) is recognised. It raises important concerns for today about the danger of competition between a highly regulated banking system and a growing shadow banking system.

Business bankruptcies: learning from historical failures

by Philip Fliers (Queen’s University Belfast), Chris Colvin (Queen’s University Belfast), and Abe de Jong (Monash University).

This blog is part of our EHS 2020 Annual Conference Blog Series.


 

 

BankruptBlog
The door of a bankrupt business locked with a chain and padlock. Available at Flickr.

 

Business bankruptcies are rare events. But when they occur, they can prove catastrophic. Employees lose their jobs, shareholders lose their savings and loyal customers lose their trusted suppliers.

Essentially, bankruptcies are ‘black swan’ events in that they come as a surprise, have a major impact and are often inappropriately rationalised after the fact with the benefit of hindsight. While they may be extreme outliers, they are also extremely costly for those affected.

Because bankruptcies are so rare, they are very hard to study. This makes it difficult to understand the causes of bankruptcies, and to develop useful early warning systems.

What are the risk factors for which shareholders should watch out when evaluating their investments, or when pension regulators audit the future sustainability of workplace pension schemes?

Our solution is to exploit the historical record. We collect a dataset of all bankruptcies of publicly listed corporations that occurred in the Netherlands over the past 100 years. And we look to see what we can learn from taking this long-run perspective.

In particular, we are interested in seeing whether these bankruptcies had common features. Are firms that are about to go out of business systematically different in terms of their financial performance, corporate financing or governance structures than those that are healthy and successful?

Our surprising result is that the features of bankrupt corporations vary considerably across the twentieth century.

During the 1920s and 1930s, small and risky firms were more likely to go bankrupt. In the wake of the Second World War, firms that did not pay dividends to their shareholders were more likely to fail. And since the 1980s, failure probabilities have been highest for over-leveraged firms.

Why does all this matter? What can we learn from our historical approach?

At first glance, it looks like we can't learn anything: the drivers of corporate bankruptcies appear to change quite significantly across our economic past.

But we argue that this finding is itself a lesson from history.

The development of early warning failure systems needs to take account of context and allow for a healthy degree of flexibility.

What does this mean in practice?

Well, regulators and other policy-makers should not solely rely on ad hoc statistical models using recent data. Rather, they should combine these statistical approaches with common sense narrative analytics that incorporate the possibility of compensating mechanisms.

There are clearly different ways in which businesses can go bankrupt. Taking a very recent perspective ignores many alternative routes to business failure. Broadening our scope has permitted us to identify factors that can lead to business instability, but also how these factors can be mitigated.

The Long View on Epidemics, Disease and Public Health: Research from Economic History, Part C

by Vincent Geloso (King’s University College at Western University Canada), discussing Werner Troesken’s ‘The Pox of Liberty’


 

Geloso Image 3
At the Gates, 1885. Available at NIH.

Shutdowns, quarantines, lockdowns and curfews impose economic costs. Yet, from a public health perspective, these measures also have benefits: they limit contagion risks and deaths. There is thus a trade-off to be made. Economists, epidemiologists and others have tried to measure the costs and benefits of the measures presently adopted by governments. The idea is to identify which measures are too extreme: those that increase economic distress,[i] induce behavioral responses that undermine the effectiveness of public health measures,[ii] or are simply too costly compared with the alternatives.

However, these trade-offs understate the complex web of issues associated with public health measures. At least, that is the conclusion that emerges after reading Werner Troesken's The Pox of Liberty: How the Constitution Left Americans Rich, Free, and Prone to Infection (University of Chicago Press, 2015).

Few economists (none, probably) would dispute that the state has a role in the domain of public health. After all, while self-quarantining exists, it would clearly be “underprovided” if marginally disinclined individuals were not somewhat coerced into quarantining themselves. Thus, there is a role for government. Normally, this is the end of the story – at least from the perspective of welfare economics. “Not so” replies Troesken! Institutions that exert the coercion necessary to produce public health measures are also able to use coercion for other, less beneficial purposes, which also generate a series of trade-offs.

 

Geloso Image
The Cow-Pock—or—the Wonderful Effects of the New Inoculation! By satirist James Gillray, 1802. Available here.

 

The first trade-off

Troesken concentrates on inoculation and vaccines in the late 19th and early 20th centuries to argue that the United States was an incredibly rich country by the standards of the time, yet also exhibited exceptionally high (and probably underestimated) smallpox death rates compared to other countries. To explain the puzzle, Troesken argues that by expanding and securing economic freedoms, the Constitution also constrained the ability of governments to prevent the contagion of infectious diseases in the short run. Under the Equal Protection and Due Process Clauses of the Constitution, numerous public health measures were overturned – including measures to make vaccination compulsory. Consequently, Americans were likelier to die from highly contagious diseases.

Geloso Image 2
The Next To Go: Fight Tuberculosis, 1919. Available at NIH.

Yet, the same constraints also protected property rights and economic freedoms, which allowed Americans to grow exceptionally rich during the late 19th and early 20th centuries.

 

The second trade-off

Certain diseases are easily combatted thanks to economic prosperity (either directly through improved nutrition or indirectly through the ability to make certain investments). Other diseases are less sensitive to the income of the people they maim or kill.[iii] Thus, it might be conjectured that the Constitution made Americans richer and sicker from smallpox while simultaneously reducing the likelihood of their dying from other diseases. This is the basis of Troesken's chapter, ‘The palliative effects of property rights’. The wealth of America made it easier to invest in capital-intensive projects to deal with waterborne diseases and typhoid fever. Troesken demonstrates how the protections afforded by the Constitution encouraged investments in water treatment infrastructure because they shielded private firms from politically opportunistic behavior and bondholders from default. These protections made Americans less likely to die from typhoid fever.

One could reply to Troesken that the United States is, in many ways, an oddity. Yet this strange trade-off can be seen through a global lens as well. If Troesken is correct, institutional regimes that score highly on economic freedom are ill-equipped to combat infectious diseases, as the latter are better dealt with by strong and capable states. Thus, economic freedom would show a zero, or maybe even a positive, relation with death rates from infectious diseases. However, these same regimes would be better able to combat “poverty diseases”, as economic freedom promotes growth, thereby improving the ability to combat such diseases. Economic freedom would therefore exhibit a negative relation with death rates from poverty diseases. Leandro Prados de la Escosura, who compiled time-series estimates of “economic liberty”,[iv] showed that this relationship is testable. His measures are very similar to those used in the modern literature on economic freedom, but with some differences. Regressing death rates from smallpox and typhoid fever on economic liberty data allows us to test the validity of Troesken's argument.

The result of such a test can be seen in Table 1 below. The relationship is bivariate and plagued by the relatively small number of countries providing cause-specific death rates. Thus, it ought to be taken lightly; it is presented only to suggest new directions of enquiry. Nonetheless, it is supportive of Troesken's core idea: economic liberty has no statistically significant effect on the log of smallpox death rates, but it does have a strong, significant effect on death rates from typhoid fever.

This complex trade-off can be summarized: one set of institutions makes us healthier and poorer now while making us less healthy than we could be in the future; the other makes us sicker and richer now while making us healthier than we could be in the future. Simplified in this manner, we can see the value of the points made by economists such as James Buchanan and Ronald Coase – institutions are not “all-you-can-eat buffets”.

Geloso table
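For readers curious about the mechanics of such a test, the sketch below shows how a bivariate regression of log cause-specific death rates on an economic liberty index can be run in Python with statsmodels. The numbers are entirely hypothetical; they are neither Prados de la Escosura's series nor the death rates behind Table 1.

```python
# A minimal sketch of the bivariate test described above: regressing the log of
# cause-specific death rates on an index of economic liberty across countries.
# All data below are hypothetical and for illustration only.
import numpy as np
import statsmodels.api as sm

# Hypothetical cross-section of countries: liberty index and typhoid death rates
economic_liberty = np.array([0.35, 0.42, 0.50, 0.58, 0.63, 0.70, 0.78, 0.85])
typhoid_death_rate = np.array([55.0, 48.0, 40.0, 36.0, 30.0, 24.0, 20.0, 15.0])

# Bivariate OLS of log death rates on economic liberty (with a constant)
X = sm.add_constant(economic_liberty)
y = np.log(typhoid_death_rate)
model = sm.OLS(y, X).fit()

print(model.summary())  # reports the coefficient on economic liberty and its significance
```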

In the current crisis, this insight about the roles, advantages, limitations and consequences of public health measures is too often set aside. But this does not make it irrelevant. Unfortunately, in terms of explaining this powerful and crucial insight, I am a poor substitute for Werner Troesken,[v] who died in 2018. Nevertheless, his writing ought to be on all our minds when we consider the proper response to the current crisis.

 

[i] W. Kerr et al. 2017. “Economic recession, alcohol, and suicide rates: comparative effects of poverty, foreclosure, and job loss.” American Journal of Preventive Medicine, Vol. 54, n. 4, pp. 469-475.

[ii] A. Mesnard and P. Seabright. 2009. “Escaping epidemics through migration? Quarantine measures under incomplete information about infection risks.” Journal of Public Economics, Vol. 93, no. 7-8, pp. 931-938.

[iii] See notably the works of B. Harris, (2004). “Public Health, Nutrition, and the Decline of Mortality: The McKeown Thesis Revisited.” Social History of Medicine, Vol. 17, no. 3, pp. 379-407 and D. Bloom and D. Canning, (2007). “Commentary: The Preston Curve 30 years on: Still sparking fires.” International Journal of Epidemiology, Vol. 36, no. 3, pp. 489-499 for a summary of the complex relation between prosperity and health.

[iv] L. Prados de la Escosura, (2016). “Economic freedom in the long run: evidence from OECD countries, 1850-2007”. Economic History Review, Vol. 69, no.2, pp. 435-468

[v] I have applied his insights to the case of the Cuban health care system: see V. Geloso, G. Berdine and B. Powell,  “Making Sense of Dictatorships and Health Outcomes”, British Medical Journal: Global Health,  (forthcoming); G. Berdine, V. Geloso and B. Powell,  (2018). “Cuban Infant Mortality and Longevity: Health Care or Repression?” Health Policy & Planning, Vol. 33, no. 6, pp. 755-57.
A longer review of “The Pox of Liberty” by Vincent Geloso is available here: https://notesonliberty.com/2017/01/19/the-pox-of-liberty-dixit-the-political-economy-of-public-health/

See also, Clay, K.; Schmick, E. and Troesken, W. (2019) “The Rise and Fall of Pellagra in the American South”, Journal of Economic History 79(1):32-62, DOI: https://doi.org/10.1017/S0022050718000700

and reviews by Kenneth F. Kiple in The American Historical Review, https://doi.org/10.1086/ahr/110.2.501, and by Alan M. Kraut in the Bulletin of the History of Medicine: https://doi.org/10.1353/bhm.2006.0062


Vincent Geloso

www.vincentgeloso.com 
vincentgeloso@hotmail.com