Judges and the death penalty in Nazi Germany: New research evidence on judicial discretion in authoritarian states

The German People’s Court. Available at https://www.foreignaffairs.com/reviews/review-essay/good-germans

Do judicial courts in authoritarian regimes act as puppets for the interests of a repressive state – or do judges act with greater independence? How much do judges draw on their political and ideological affiliations when imposing the death sentence?

A study of Nazi Germany’s notorious People’s Court, recently published in the Economic Journal, reveals direct empirical evidence of how judges in one of the world’s most politicised courts were influenced in their life-and-death decisions.

The research provides important empirical evidence that the political and ideological affiliations of judges do come into play – a finding that has applications for modern authoritarian regimes and also for democracies that administer the death penalty.

The research team – Dr Wayne Geerling (University of Arizona), Prof Gary Magee, Prof Russell Smyth, and Dr Vinod Mishra (Monash Business School) – explore the factors influencing the likelihood of imposing the death sentence in Nazi Germany for crimes against the state – treason and high treason.

The authors examine data compiled from official records of individuals charged with treason and high treason who appeared before the People’s Court up to the end of the Second World War.

Established by the Nazis in 1934 to hear cases of serious political offences, the People’s Court has been vilified as a ‘blood tribunal’ in which judges meted out pre-determined sentences.

But in recent years a more nuanced assessment has emerged – one that does not contend that the People’s Court’s judgments were impartial, or that its judges were not subservient to the wishes of the regime.

For the first time, the new study presents direct empirical evidence of the reasons behind the use of judicial discretion and why some judges appeared more willing to implement the will of the state than others.

The researchers find that judges with a deeper ideological commitment to Nazi values – typified by being members of the Alte Kampfer (‘Old Fighters’ or early members of the Nazi party) – were indeed more likely to impose the death penalty than those who did not share it.

These judges were more likely to hand down death penalties to members of the most organised opposition groups, those involved in violent resistance against the state and ‘defendants with characteristics repellent to core Nazi beliefs’:

‘The Alte Kampfer were thus more likely to sentence devout Roman Catholics (24.7 percentage points), defendants with partial Jewish ancestry (34.8 percentage points), juveniles (23.4 percentage points), the unemployed (4.9 percentage points) and foreigners (42.3 percentage points) to death.’

Judges who came of age during two distinct historical periods – the Revolution of 1918-19 and the hyperinflation of June 1921 to January 1924 – which may have shaped their views of Nazism, were also more likely to impose the death sentence.

 Alte Kampfer members whose hometown or suburb lay near a centre of the Revolution of 1918-19 were more likely to sentence a defendant to death.

Previous economic research on sentencing in capital cases has focused mainly on gender and racial disparities, typically in the United States; little is known about what determines whether courts in modern authoritarian regimes impose the death penalty. By studying a politicised court in a historically important authoritarian state, the authors of the new study shed light on sentencing in authoritarian states more generally.

The findings are important because they provide insights into the practical realities of judicial empowerment, offering rare empirical evidence on how the exercise of judicial discretion in authoritarian states is reflected in sentencing outcomes.

To contact the authors:
Russell Smyth (russell.smyth@monash.edu)

BAD LOCATIONS: Many French towns have been trapped in obsolete places for centuries

John Speed (1610), 17th century map of Beaumaris. Available on Wiki Commons.

Only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. That is one of the findings of research by Guy Michaels (London School of Economics) and Ferdinand Rauch (University of Oxford), which uses the contrasting experiences of British and French cities after the fall of the Roman Empire as a natural experiment to explore the impact of history on economic geography – and what leads cities to get stuck in undesirable locations, a big issue for modern urban planners.

The study, published in the February 2018 issue of the Economic Journal, notes that in France, post-Roman urban life became a shadow of its former self, but in Britain it completely disappeared. As a result, medieval towns in France were much more likely to be located near Roman towns than their British counterparts. But many of these places were obsolete because the best locations in Roman times weren’t the same as in the Middle Ages, when access to water transport was key.

The world is rapidly urbanising, but some of its growing cities seem to be misplaced. Their locations are hampered by poor access to world markets, shortages of water or vulnerability to flooding, earthquakes, volcanoes and other natural disasters. This outcome – cities stuck in the wrong places – has potentially dire economic and social consequences.

When thinking about policy responses, it is worth looking at the past to see how historical events can leave cities trapped in locations that are far from ideal. The new study does that by comparing the evolution of two initially similar urban networks following a historical calamity that wiped out one, while leaving the other largely intact.

The setting for the analysis of urban persistence is north-western Europe, where the authors trace the effects of the collapse of the Western Roman Empire more than 1,500 years ago through to the present day. Around the dawn of the first millennium, Rome conquered, and subsequently urbanised, areas including those that make up present day France and Britain (as far north as Hadrian’s Wall). Under the Romans, towns in the two places developed similarly in terms of their institutions, organisation and size.

But around the middle of the fourth century, their fates diverged. Roman Britain suffered invasions, usurpations and reprisals against its elite. Around 410CE, when Rome itself was first sacked, Roman Britain’s last remaining legions, which had maintained order and security, departed permanently. Consequently, Roman Britain’s political, social and economic order collapsed. Between 450CE and 600CE, its towns no longer functioned.

Although some Roman towns in France also suffered when the Western Roman Empire fell, many of them survived and were taken over by Franks. So while the urban network in Britain effectively ended with the fall of the Western Roman Empire, there was much more urban continuity in France.

The divergent paths of these two urban networks make it possible to study the spatial consequences of the ‘resetting’ of an urban network, as towns across Western Europe re-emerged and grew during the Middle Ages. During the High Middle Ages, both Britain and France were again ruled by a common elite (Norman rather than Roman) and had access to similar production technologies. Both features make it possible to compare the effects of the collapse of the Roman Empire on the evolution of town locations.

Following the asymmetric calamity and subsequent re-emergence of towns in Britain and France, one of three scenarios can be imagined:

  • First, if locational fundamentals, such as coastlines, mountains and rivers, consistently favour a fixed set of places, then those locations would be home to both surviving and re-emerging towns. In this case, there would be high persistence of locations from the Roman era onwards in both British and French urban networks.
  • Second, if locational fundamentals or their value change over time (for example, if coastal access becomes more important) and if these fundamentals affect productivity more than the concentration of human activity, then both urban networks would similarly shift towards locations with improved fundamentals. In this case, there would be less persistence of locations in both British and French urban networks relative to the Roman era.
  • Third, if locational fundamentals or their value change, but these fundamentals affect productivity less than the concentration of human activity, then there would be ‘path-dependence’ in the location of towns. The British urban network, which was reset, would shift away from Roman-era locations towards places that are more suited to the changing economic conditions. But French towns would tend to remain in their original Roman locations.

The authors’ empirical investigation finds support for the third scenario, where town locations are path-dependent. Medieval towns in France were much more likely to be located near Roman towns than their British counterparts.

These differences in urban persistence are still visible today; for example, only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. This finding suggests that the urban network in Britain shifted towards newly advantageous locations between the Roman and medieval eras, while towns in France remained in locations that may have become obsolete.

But did it really matter for future economic development that medieval French towns remained in Roman-era locations? To shed light on this question, the researchers focus on a particular dimension of each town’s location: its accessibility to transport networks.

During Roman times, roads connected major towns, facilitating movements of the occupying army. But during the Middle Ages, technical improvements in water transport made coastal access more important. This technological change meant that having coastal access mattered more for medieval towns in Britain and France than for Roman ones.

The study finds that during the Middle Ages, towns in Britain were roughly two and a half times more likely to have coastal access – either directly or via a navigable river – than during the Roman era. In contrast, in France, there was little change in the urban network’s coastal access.

The researchers also show that having coastal access did matter for towns’ subsequent population growth, which is a key indicator of their economic viability. Specifically, they find that towns with coastal access grew faster between 1200 and 1700, and for towns with poor coastal access, access to canals was associated with faster population growth. The investments in the costly building and maintenance of these canals provide further evidence of the value of access to water transport networks.

The conclusion is that many French towns were stuck in the wrong places for centuries, since their locations were designed for the demands of Roman times and not those of the Middle Ages. They could not take full advantage of the improved transport technologies because they had poor coastal access.

Taken together, these findings show that urban networks may reconfigure around locational fundamentals that become more valuable over time. But this reconfiguration is not inevitable, and towns and cities may remain trapped in bad locations over many centuries and even millennia. This spatial misallocation of economic activity over hundreds of years has almost certainly induced considerable economic costs.

‘Our findings suggest lessons for today’s policy-makers,’ the authors conclude. ‘The conclusion that cities may be misplaced still matters as the world’s population becomes ever more concentrated in urban areas. For example, parts of Africa, including some of its cities, are hampered by poor access to world markets due to their landlocked position and poor land transport infrastructure. Our research suggests that path-dependence in city locations can still have significant costs.’

‘Resetting the Urban Network: 117-2012’ by Guy Michaels and Ferdinand Rauch was published in the February 2018 issue of the Economic Journal.

To contact the authors:
Guy Michaels (G.Michaels@lse.ac.uk)
Ferdinand Rauch (ferdinand.rauch@economics.ox.ac.uk)

THE ‘WITCH CRAZE’ OF 16th & 17th CENTURY EUROPE: Economists uncover religious competition as driving force of witch hunts

“The Pendle Witches”. Available at https://www.theanneboleynfiles.com/witchcraft-in-tudor-and-stuart-times/

Economists Peter Leeson (George Mason University) and Jacob Russ (Bloom Intelligence) have uncovered new evidence to resolve the longstanding puzzle posed by the ‘witch craze’ that ravaged Europe in the sixteenth and seventeenth centuries and resulted in the trial and execution of tens of thousands for the dubious crime of witchcraft.

 

In research forthcoming in the Economic Journal, Leeson and Russ argue that the witch craze resulted from competition between Catholicism and Protestantism in post-Reformation Christendom. For the first time in history, the Reformation presented large numbers of Christians with a religious choice: stick with the old Church or switch to the new one. And when churchgoers have religious choice, churches must compete.

In an effort to woo the faithful, competing confessions advertised their superior ability to protect citizens against worldly manifestations of Satan’s evil by prosecuting suspected witches. Similar to how Republicans and Democrats focus campaign activity in political battlegrounds during US elections to attract the loyalty of undecided voters, Catholic and Protestant officials focused witch trial activity in religious battlegrounds during the Reformation and Counter-Reformation to attract the loyalty of undecided Christians.

Analysing new data on more than 40,000 suspected witches whose trials span Europe over more than half a millennium, Leeson and Russ find that when and where confessional competition, as measured by confessional warfare, was more intense, witch trial activity was more intense too. Furthermore, factors such as bad weather, formerly thought to be key drivers of the witch craze, were not in fact important.

The new data reveal that the witch craze took off only after the Protestant Reformation in 1517, following the new faith’s rapid spread. The craze reached its zenith between around 1555 and 1650, years co-extensive with peak competition for Christian consumers, evidenced by the Catholic Counter-Reformation, during which Catholic officials aggressively pushed back against Protestant successes in converting Christians throughout much of Europe.

Then, around 1650, the witch craze began its precipitous decline, with prosecutions for witchcraft virtually vanishing by 1700.

What happened in the middle of the seventeenth century to bring the witch craze to a halt? The Peace of Westphalia, a treaty concluded in 1648, ended decades of European religious warfare and much of the confessional competition that motivated it by creating permanent territorial monopolies for Catholics and Protestants – regions of exclusive control, in which one confession was protected from the competition of the other.

The new analysis also suggests that the witch craze should have been focused geographically: most intense where Catholic-Protestant rivalry was strongest, and muted where it was weakest. And indeed it was: Germany alone, ground zero for the Reformation, accounted for nearly 40% of all witchcraft prosecutions in Europe.

In contrast, Spain, Italy, Portugal and Ireland – each of which remained a Catholic stronghold after the Reformation and never saw serious competition from Protestantism – collectively accounted for just 6% of Europeans tried for witchcraft.

Religion, it is often said, works in unexpected ways. The new study suggests that the same can be said of competition between religions.

 

To contact the authors:  Peter Leeson (PLeeson@GMU.edu)

THE FINANCIAL POWER OF THE POWERLESS: Evidence from Ottoman Istanbul on socio-economic status, legal protection and the cost of borrowing

In Ottoman Istanbul, privileged groups such as men, Muslims and other elites paid more for credit than the under-privileged – the exact opposite of what happens in a modern economy.

New research by Professors Timur Kuran (Duke University) and Jared Rubin (Chapman University), published in the March 2018 issue of the Economic Journal, explains why: a key influence on the cost of borrowing is the rule of law and in particular the extent to which courts will enforce a credit contract.

In pre-modern Turkey, it was the wealthy who could benefit from judicial bias to evade their creditors – and who, because of this default risk, faced higher interest rates on loans. Nowadays, it is under-privileged people who face higher borrowing costs because there are various institutions through which they can escape loan repayment, including bankruptcy options and organisations that will defend poor defaulters as victims of exploitation.

In the modern world, we take it for granted that the under-privileged incur higher borrowing costs than the upper socio-economic classes. Indeed, Americans in the bottom quartile of the US income distribution usually borrow through pawnshops and payday lenders at rates of around 450% per annum, while those in the top quartile take out short-term loans through credit cards at 13-16%. Unlike the under-privileged, the wealthy also have access to long-term credit through home equity loans at rates of around 4%.

The logic connecting socio-economic status to borrowing costs will seem obvious to anyone familiar with basic economics: the higher costs of the poor reflect higher default risk, for which the lender must be compensated.

The new study sets out to test whether the classic negative correlation between socio-economic status and borrowing cost holds in a pre-modern setting outside the industrialised West. To this end, the authors built a data set of private loans issued in Ottoman Istanbul during the period from 1602 to 1799.

These data reveal the exact opposite of what happens in a modern economy: the privileged paid more for credit than the under-privileged. In a society where the average real interest rate was around 19%, men paid an interest surcharge of around 3.4 percentage points; Muslims paid a surcharge of 1.9 percentage points; and elites paid a surcharge of about 2.3 percentage points (see Figure 1).

[Figure 1: interest rate surcharges paid by men, Muslims and elites]

What might explain this reversal of relative borrowing costs? Why did socially advantaged groups pay more for credit, not less?

The data led the authors to consider a second factor contributing to the price of credit, often taken for granted: the partiality of the law. Implicit in the logic that explains relative credit costs in modern lending markets is that financial contracts are enforceable impartially when the borrower is able to pay. Thus, the rich pay less for credit because they are relatively unlikely to default and because, if they do, lenders can force repayment through courts whose verdicts are more or less impartial.

But in settings where the courts are biased in favour of the wealthy, creditors will expect compensation for the risk of being unable to obtain restitution. The wealth and judicial partiality effects thus work against each other. The former lowers the credit cost for the rich; the latter raises it.
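To see how these two forces interact, here is a stylised break-even calculation – a minimal sketch with illustrative numbers, not the authors’ model or data. A lender must be compensated both for the risk that the borrower defaults and for the risk that a biased court will not enforce repayment.

```python
# A stylised sketch (illustrative numbers, not the authors' model or data)
# of how default risk and judicial partiality pull borrowing costs in
# opposite directions. The lender breaks even when the expected repayment
# covers the required gross return on the loan.

def break_even_rate(p_repay, p_enforce, required_return=0.10):
    """Interest rate at which a lender just breaks even.

    p_repay   -- probability the borrower repays voluntarily
    p_enforce -- probability a court forces repayment after a default
    """
    p_paid = p_repay + (1 - p_repay) * p_enforce  # overall chance of being repaid
    return (1 + required_return) / p_paid - 1

# Wealthy borrower: low default risk, but courts are biased in their favour,
# so creditors can rarely force repayment.
rich = break_even_rate(p_repay=0.85, p_enforce=0.30)

# Poor borrower: higher default risk, but contracts are reliably enforced.
poor = break_even_rate(p_repay=0.75, p_enforce=0.90)

print(f"rate charged to the wealthy borrower: {rich:.1%}")
print(f"rate charged to the poor borrower:    {poor:.1%}")
```

With these illustrative figures the enforcement effect outweighs the default-risk effect, so the privileged borrower pays the higher rate – the same qualitative pattern the authors find in the Ottoman court records.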

Islamic Ottoman courts served all Ottoman subjects through procedures that were manifestly biased in favour of clearly defined groups. These courts gave Muslims rights that they denied to Christians and Jews. They privileged men over women.

Moreover, because the courts lacked independence from the state, Ottoman subjects connected to the sultan enjoyed favourable treatment. A theory developed in the new study explains why the weak legal power of the under-privileged may have translated into strong financial power in the form of cheaper credit.

More generally, this research suggests that in a free financial market, any hindrance to the enforcement of a credit contract will raise the borrower’s credit cost. Just as judicial biases in favour of the wealthy raise their interest rates on loans, institutions that allow the poor to escape loan repayment – bankruptcy options, shielding of assets from creditors, organisations that defend poor defaulters as victims of exploitation – raise interest rates charged to the poor.

Today, wealth and credit cost are negatively correlated for multiple reasons. The rich benefit both from a higher capacity to post collateral and from better enforcement of their credit obligations relative to those of the poor.

 

To contact the authors:
Timur Kuran (t.kuran@duke.edu); Jared Rubin (jrubin@chapman.edu)

Medieval origins of Spain’s economic geography

The frontier of medieval warfare between Christian and Muslim armies in southern Spain provides a surprisingly powerful explanation of current low-density settlement patterns in those regions. This is the central finding of research by Daniel Oto-Peralías (University of St Andrews), recently presented at the Royal Economic Society’s annual conference in March 2018.

 His study notes that Southern Spain is one of the most deserted areas in Europe in terms of population density, only surpassed by parts of Iceland and the northern part of Scandinavia. It turns out that this outcome has roots going back to medieval times when Spain’s southern plateau was a battlefield between Christian and Muslim armies.

The study documents that Spain stands out in Europe with an anomalous settlement pattern characterised by a very low density in its southern half. Among the ten European regions with the lowest settlement density, six are from southern Spain (while the other four are from Iceland, Norway, Sweden and Finland).


On average, only 29.8% of 10 km² grid cells in southern Spain are inhabited, a much lower share than in the rest of Europe (74.4% on average). Extreme geographical and climatic conditions do not seem to be the reason for this low settlement density, which the author refers to as the ‘Spanish anomaly’.

After ruling out geography as the main explanatory factor for the ‘Spanish anomaly’, the research investigates its historical roots by focusing on the Middle Ages, when the territory was retaken by the Christian kingdoms from Muslim rule.

The hypothesis is that the region’s character as a militarily insecure frontier conditioned the colonisation of the territory. It is tested by exploiting the geographical discontinuity in military insecurity created by the Tagus River in central Spain: historical ‘accidents’ made the colonisation of the area south of the river very different from that to its north.

The invasions of North Africa’s Almoravid and Almohad empires converted the territory south of the Tagus into a battlefield for a century and a half, this river being a natural defensive border. Continuous warfare and insecurity heavily conditioned the nature of the colonisation process in this frontier region, which was characterised by the leading role of the military orders as agents of colonisation, scarcity of population and a livestock-oriented economy. It resulted in the prominence of castles and the absence of villages, and consequently, a spatial distribution of the population characterised by a very low density of settlements.

The empirical analysis reveals a large difference in settlement density across the River Tagus, whereas there are no differences in geographical and climatic variables across it. In addition, it shows that the discontinuity in settlement density already existed in the 16th and 18th centuries, and is therefore not the result of recent migration movements or urban development. Preliminary evidence also indicates that the territory exposed to the medieval ranching frontier is relatively poorer today.

Thus, the study shows that historical frontiers can decisively shape the economic geography of countries. Using medieval Spain as a case study, it illustrates how exposure to warfare and insecurity – typical of medieval frontiers – created incentives for a militarised colonisation based on a few fortified settlements and a livestock-oriented economy, conditioning the occupation of the territory to such an extent that it became one of the most sparsely settled areas in Europe. Given the ubiquity of frontiers in history, the mechanisms highlighted in the analysis are of general interest and may operate in other contexts.

THE IMPACT OF MALARIA ON EARLY AFRICAN DEVELOPMENT: Evidence from the sickle cell trait

Poster: “Keep out malaria mosquitoes, repair your torn screens”. U.S. Public Health Service, 1941–45.

While malaria historically claimed millions of African lives, it did not hold back the continent’s economic development. That is one of the findings of new research by Emilio Depetris-Chauvin (Pontificia Universidad Católica de Chile) and David Weil (Brown University), published in the Economic Journal.

Their study uses data on the prevalence of the gene that causes sickle cell disease to estimate death rates from malaria for the period before the Second World War. They find that in parts of Africa with high malaria transmission, one in ten children died from malaria or sickle cell disease before reaching adulthood – a death rate more than twice the current burden of malaria in these regions.

 

According to the World Health Organization, the malaria mortality rate declined by 29% between 2010 and 2015. This was a major public health accomplishment, although with 429,000 annual deaths, the disease remains a terrible scourge.

Countries where malaria is endemic are also, on average, very poor. This correlation has led economists to speculate about whether malaria is a driver of poverty. But addressing that issue is difficult because of a lack of data. Poverty in the tropics has long historical roots, and while there are good data on malaria prevalence in the period since the Second World War, there is no World Malaria Report for 1900, 1800 or 1700.

Biologists only came to understand the nature of malaria in the late nineteenth century. Even today, trained medical personnel have trouble distinguishing between malaria and other diseases without the use of microscopy or diagnostic tests. Accounts from travellers and other historical records provide some evidence of the impact of malaria going back millennia, but these are hardly sufficient to draw firm conclusions (Akyeampong, 2006; Mabogunje and Richards, 1985).

This study addresses the lack of information on malaria’s impact historically by using genetic data. In the worst afflicted areas, malaria left an imprint on the human genome that can be read today.

Specifically, the researchers look at the prevalence of the gene that causes sickle cell disease. Carrying one copy of this gene provided individuals with a significant level of protection against malaria, but people who carried two copies of the gene died before reaching reproductive age.

Thus, the degree of selective pressure exerted by malaria determined the equilibrium prevalence of the gene in the population. By measuring the prevalence of the gene in modern populations, it is possible to back out estimates of the severity of malaria historically.

In areas of high malaria transmission, 20% of the population carries the sickle cell trait. The researchers estimate that this implies that, historically, 10-11% of children died from malaria or sickle cell disease before reaching adulthood – a death rate more than twice the current burden of malaria in these regions.
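The arithmetic behind this inference can be reproduced with the textbook heterozygote-advantage model. The sketch below is a minimal version that assumes Hardy-Weinberg proportions, complete malaria protection for carriers and a fatal outcome for sickle cell homozygotes; the authors’ own calculations may differ in their details.

```python
# Back out historical malaria mortality from the modern sickle cell carrier
# frequency using the standard heterozygote-advantage (balanced polymorphism)
# model. A minimal sketch: assumes Hardy-Weinberg proportions, full malaria
# protection for carriers (AS) and that sickle cell disease (SS) was fatal
# before adulthood. The authors' actual model may differ in its details.
import math

carrier_freq = 0.20  # share of the population carrying the trait (AS)

# Hardy-Weinberg: 2q(1-q) = carrier_freq; take the root with q < 0.5
q = (1 - math.sqrt(1 - 2 * carrier_freq)) / 2  # frequency of the sickle allele S
p = 1 - q                                      # frequency of the normal allele A

# Balanced polymorphism at equilibrium: q = s_A / (s_A + s_S). With s_S = 1
# (SS fatal), the implied malaria death rate among non-carriers (AA) is:
s_A = q / (1 - q)

# Share of each birth cohort dying before adulthood
deaths_malaria = p**2 * s_A  # AA children killed by malaria
deaths_sickle = q**2         # SS children killed by sickle cell disease

print(f"sickle allele frequency q:     {q:.3f}")
print(f"malaria death rate among AA:   {s_A:.3f}")
print(f"cohort share dying of malaria: {deaths_malaria:.3f}")
print(f"malaria + sickle cell deaths:  {deaths_malaria + deaths_sickle:.3f}")
# -> roughly 0.10-0.11, in line with the 10-11% figure reported in the study
```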

Comparing the most affected areas with those least affected, malaria may have been responsible for a ten percentage point difference in the probability of surviving to adulthood. In areas of high malaria transmission, the researchers estimate that life expectancy at birth was reduced by approximately five years.

Having established the magnitude of malaria’s mortality burden, the researchers then turn to its economic effects. Surprisingly, they find little reason to believe that malaria held back development. A simple life cycle model suggests that the disease was not very important, primarily because the vast majority of deaths that it caused were among the very young, in whom society had invested few resources.

This model-based finding is corroborated by the findings of a statistical examination. Within Africa, areas with higher malaria burden, as evidenced by the prevalence of the sickle cell trait, do not show lower levels of economic development or population density in the colonial era data examined in this study.

 

To contact the authors:  David Weil, david_weil@brown.edu

EFFECTS OF COAL-BASED AIR POLLUTION ON MORTALITY RATES: New evidence from nineteenth century Britain

Samuel Griffiths (1873) The Black Country in the 1870s. In Griffiths’ Guide to the iron trade of Great Britain.

Industrialised cities in mid-nineteenth century Britain probably suffered from similar levels of air pollution as urban centres in China and India do today. What’s more, the damage to health caused by the burning of coal was very high, reducing life expectancy by more than 5% in the most polluted cities like Manchester, Sheffield and Birmingham. It was also responsible for a significant proportion of the higher mortality rates in British cities compared with rural parts of the country.

 These are among the findings of new research by Brian Beach (College of William & Mary) and Walker Hanlon (NYU Stern School of Business), which is published in the Economic Journal. Their study shows the potential value of history for providing insights into the long-run consequences of air pollution.

From Beijing to Delhi and Mexico City to Jakarta, cities across the world struggle with high levels of air pollution. To what extent does severe air pollution affect health and broader economic development for these cities? While future academics will almost surely debate this question, assessing the long-run consequences of air pollution for modern cities will not be possible for decades.

But severe air pollution is not a new phenomenon; Britain’s industrial cities of the nineteenth century, for example, also faced very high levels of air pollution. Because of this, researchers argue that history has the potential to provide valuable insights into the long-run consequences of air pollution.

One challenge in studying historical air pollution is that direct pollution measures are largely unavailable before the mid-twentieth century. This study shows how historical pollution levels in England and Wales can be inferred by combining data on the industrial composition of employment in local areas in 1851 with information on the amount of coal used per worker in each industry.

This makes it possible to estimate the amount of coal used across 581 districts covering all of England and Wales. Because coal was by far the most important pollutant in Britain in the nineteenth century (as well as much of the twentieth century), this provides a way of approximating local industrial pollution emission levels.
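The imputation can be pictured with a small, entirely hypothetical example: a district’s industrial coal use is the employment-weighted sum of coal use per worker across its industries. The district names, industries and figures below are invented for illustration; they are not the authors’ data.

```python
# Hypothetical illustration of the imputation described above: district-level
# coal use = sum over industries of (local employment x coal per worker).
# All names and numbers are made up; they are not the study's data.
import pandas as pd

# Employment by district and industry (in the spirit of the 1851 census)
employment = pd.DataFrame({
    "district": ["Manchester", "Manchester", "Rutland", "Rutland"],
    "industry": ["textiles", "metals", "agriculture", "textiles"],
    "workers":  [120_000, 30_000, 8_000, 1_000],
})

# Coal used per worker in each industry (tons per year), illustrative values
coal_per_worker = pd.Series({"textiles": 15.0, "metals": 40.0, "agriculture": 0.5})

# Impute each row's coal use, then aggregate to the district level
employment["coal_tons"] = employment["workers"] * employment["industry"].map(coal_per_worker)
district_coal = employment.groupby("district")["coal_tons"].sum()

print(district_coal)  # a proxy for local industrial pollution emissions
```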

The results are consistent with what historical sources suggest: the researchers find high levels of coal use in a broad swath of towns stretching from Lancashire and the West Riding down into Staffordshire, as well as in the areas around Newcastle, Cardiff and Birmingham.

By comparing measures of local coal-based pollution to mortality data, the study shows that air pollution was a major contributor to mortality in Britain in the mid-nineteenth century. In the most polluted locations – places like Manchester, Sheffield and Birmingham – the results show that air pollution resulting from industrial coal use reduced life expectancy by more than 5%.

One potential concern is that locations with more industrial coal use could have had higher mortality rates for other reasons. For example, people living in these industrial areas could have been poorer, infectious disease may have been more common or jobs may have been more dangerous.

The researchers deal with this concern by looking at how coal use in some parts of the country affected mortality in other areas that were, given the predominant wind direction, typically downwind. They show that locations which were just downwind of major coal-using areas had higher mortality rates than otherwise similar locations which were just upwind of these areas.

These results help to explain why cities in the nineteenth century were much less healthy than more rural areas – the so-called urban mortality penalty. Most existing work argues that the high mortality rates observed in British cities in the nineteenth century were due to the impact of infectious diseases, bad water and unclean food.

The new results show that in fact about one third of the higher mortality rate in cities in the nineteenth century was due to exposure to high levels of air pollution caused by industrial coal burning.

In addition to assessing the effects of coal use on mortality, the researchers use these effects to back out very rough estimates of historical particulate pollution levels. Their estimates indicate that by the mid-nineteenth century, industrialised cities in Britain were probably as polluted as industrial cities in places like China and India are today.

These findings shed new light on the impact of air pollution in nineteenth century Britain and lay the groundwork for further research analysing the long-run effects of air pollution in cities.

 

To contact the authors:  Brian Beach (bbbeach@wm.edu); Walker Hanlon (whanlon@stern.nyu.edu)

PRE-REFORMATION ROOTS OF THE PROTESTANT ETHIC: Evidence of a nine centuries old belief in the virtues of hard work stimulating economic growth

Cistercians at work in a detail from the Life of St. Bernard of Clairvaux, illustrated by Jörg Breu the Elder (1500). From Wikimedia Commons <https://en.wikipedia.org/wiki/Cistercians>

Max Weber’s well-known conception of the ‘Protestant ethic’ was not uniquely Protestant: according to this research published in the September 2017 issue of the Economic Journal, Protestant beliefs in the virtues of hard work and thrift have pre-Reformation roots.

The Order of Cistercians – a Catholic order that spread across Europe 900 years ago – did exactly what the Protestant Reformation is supposed to have done four centuries later: the Order stimulated economic growth by instigating an improved work ethic in local populations.

What’s more, the impact of this work ethic survives today: people living in parts of Europe that were home to Cistercian monasteries more than 500 years ago tend to regard hard work and thrift as more important compared with people living in regions that were not home to Cistercians in the past.

The researchers begin their analysis with an event that has recently been commemorated in several countries across Europe. Exactly 500 years ago, Martin Luther allegedly nailed 95 theses to the door of the Castle Church in Wittenberg, and thereby established Protestantism.

Whether the emergence of Protestantism had enduring consequences has long been debated by social scientists. One of the most influential sociologists, Max Weber, famously argued that the Protestant Reformation was instrumental in facilitating the rise of capitalism in Western Europe.

In contrast to Catholicism, Weber said, Protestantism commends the virtues of hard work and thrift. These values, which he referred to as the Protestant ethic, laid the foundation for the eventual rise of modern capitalism.

But was Weber right? The new study suggests that Weber was right in stressing the importance of a cultural appreciation of hard work and thrift, but quite likely wrong in tracing the origins of these values to the Protestant Reformation.

The researchers use a theoretical model to demonstrate how a small group of people with a relatively strong work ethic – the Cistercians – could plausibly have improved the average work ethic of an entire population within the span of 500 years.
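The flavour of that demonstration can be conveyed with a toy diffusion calculation – an illustration of the kind of dynamics involved, not the authors’ actual model. If a value spreads through socialisation at a rate proportional to how widespread it already is, a small initial minority can come to dominate within twenty-odd generations, roughly 500 years.

```python
# Toy illustration (not the authors' model): logistic-style diffusion of a
# cultural trait through socialisation. All parameter values are invented.
share = 0.02          # initial share holding the trait (e.g. near monasteries)
adoption_rate = 0.35  # illustrative transmission advantage per generation
generations = 20      # about 500 years at roughly 25 years per generation

for _ in range(generations):
    # non-holders adopt the trait at a rate proportional to its current share
    share += adoption_rate * share * (1 - share)

print(f"share holding the trait after {generations} generations: {share:.2f}")
# -> rises from 2% to over 90% in this illustration
```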

The researchers then test the theory statistically using historical county data from England, where the Cistercians arrived in the twelfth century. England is of particular interest as it has high quality historical data and because, centuries later, it became the epicentre of the Industrial Revolution.

The researchers document that English counties with more Cistercian monasteries experienced faster population growth – a leading measure of economic growth in pre-modern times. The data reveal that this is not simply because the monks were good at choosing locations that would have prospered regardless.

The researchers even detect an impact on economic growth centuries after the king closed down all the monasteries and seized their wealth on the eve of the Protestant Reformation. Thus, the legacy of the monks cannot simply be the wealth that they left behind.

Instead, the monks seem to have left an imprint on the cultural values of the population. To document this, the researchers combine historical data on the location of Cistercian monasteries with a contemporary dataset on the cultural values of individuals across Europe.

They find that people living in regions in Europe that were home to Cistercian monasteries more than 500 years ago reveal different cultural values than those living in other regions. In particular, these individuals tend to regard hard work and thrift as more important compared with people living in regions that were not home to Cistercians in the past.

This study is not the first to question Max Weber’s influential hypothesis. While the majority of statistical analyses show that Protestant regions are more prosperous than others, the reason for this may not be the Protestant ethic as emphasised by Weber.

For example, a study by the economists Sascha Becker and Ludger Woessmann demonstrates that Protestant regions of Prussia prospered more than others because of the improved schooling that followed from the instructions of Martin Luther, who encouraged Christians to learn to read so that they could study the Bible.

 

‘Pre-Reformation Roots of the Protestant Ethic’ by Thomas Barnebeck Andersen, Jeanet Bentzen, Carl-Johan Dalgaard and Paul Sharp is published in the September 2017 issue of the Economic Journal.

Thomas Barnebeck Andersen and Paul Richard Sharp are at the University of Southern Denmark. Jeanet Sinding Bentzen and Carl-Johan Dalgaard are at the University of Copenhagen.

 

The TOWER OF BABEL: why we are still a long way from everyone speaking the same language

Nearly a third of the world’s 6,000 plus distinct languages have more than 35,000 speakers. But despite the big communications advantages of a few widely spoken languages such as English and Spanish, there is no sign of a systematic decline in the number of people speaking this large group of relatively small languages.


These are among the findings of a new study by Professor David Clingingsmith, published in the February 2017 issue of the Economic Journal. His analysis explains how it is possible to have a stable situation in which the world has a small number of very large languages and a large number of small languages.

Does this mean that the benefits of a universal language could never be so great as to induce a sweeping consolidation of language? No, the study concludes:

‘Consider the example of migrants, who tend to switch to the language of their adopted home within a few generations. When the incentives are large enough, populations do switch languages.’

‘The question we can’t yet answer is whether recent technological developments, such as the internet, will change the benefits enough to make such switching worthwhile more broadly.’

Why don’t all people speak the same language? At least since the story of the Tower of Babel, humans have puzzled over the diversity of spoken languages. As with the ancient writers of the book of Genesis, economists have also recognised that there are advantages when people speak a common language, and that those advantages only increase when more people adopt a language.

This simple reasoning predicts that humans should eventually adopt a common language. The growing role of English as the world’s lingua franca and the radical shrinking of distances enabled by the internet has led many people to speculate that the emergence of a universal human language is, if not imminent, at least on the horizon.

There are more than 6,000 distinct languages spoken in the world today. Just 16 of these languages are the native languages of fully half the human population, while the median language is known by only 10,000 people.

The implications might appear to be clear: if we are indeed on the road to a universal language, then the populations speaking the vast majority of these languages must be shrinking relative to the largest ones, on their way to extinction.

The new study presents a very different picture. The author first uses population censuses to produce a new set of estimates of the level and growth of language populations.

The relative paucity of data on the number of people speaking the world’s languages at different points in time means that this can be done for only 344 languages. Nevertheless, the data clearly suggest that the populations of the 29% of languages that have 35,000 or more speakers are stable, not shrinking.

How could this stability be consistent with the very real advantages offered by widely spoken languages? The key is to realise that most human interaction has a local character.

This insight is central to the author’s analysis, which shows that even when there are strong benefits to adopting a common language, we can still end up in a world with a small number of very large languages and a large number of small ones. Numerical simulations of the analytical model produce distributions of language sizes that look very much like the one actually observed in the world today.

Summary of the article ‘Are the World’s Languages Consolidating? The Dynamics and Distribution of Language Populations’ by David Clingingsmith, published in the February 2017 issue of the Economic Journal.

WELFARE SPENDING DOESN’T ‘CROWD OUT’ CHARITABLE WORK: Historical evidence from England under the Poor Laws

Cutting the welfare budget is unlikely to lead to an increase in private voluntary work and charitable giving, according to research by Nina Boberg-Fazlic and Paul Sharp.

Their study of England in the late eighteenth and early nineteenth century, published in the February 2017 issue of the Economic Journal, shows that parts of the country where there was increased spending under the Poor Laws actually enjoyed higher levels of charitable income.

Edmé Jean Pigal, c. 1800. An amputee beggar holds out his hat to a well-dressed man who stands with his hands in his pockets. Translation of the artist’s caption: “I don’t give to idlers”. From Wikimedia Commons.

 

 

The authors conclude:

‘Since the end of the Second World War, the size and scope of government welfare provision has come increasingly under attack.’

‘There are theoretical justifications for this, but we believe that the idea of ‘crowding out’ – public spending deterring private efforts – should not be one of them.’

‘On the contrary, there even seems to be evidence that government can set an example for private donors.’

Why does Europe have considerably higher welfare provision than the United States? One long debated explanation is the existence of a ‘crowding out’ effect, whereby government spending crowds out private voluntary work and charitable giving. The idea is that taxpayers feel that they are already contributing through their taxes and thus do not contribute as much privately.

Crowding out makes intuitive sense if people are only concerned with the total level of welfare provided. But many other factors might play a role in the decision to donate privately and, in fact, studies on this topic have led to inconclusive results.

The idea of crowding out has also caught the imagination of politicians, most recently as part of the flagship policy of the UK’s Conservative Party in the 2010 General Election: the so-called ‘big society’. If crowding out holds, spending cuts could be justified by the notion that the private sector will take over.

The new study shows that this is not necessarily the case. In fact, the authors provide historical evidence for the opposite. They analyse data on per capita charitable income and public welfare spending in England between 1785 and 1815. This was a time when welfare spending was regulated locally under the Poor Laws, which meant that different areas in England had different levels of spending and generosity in terms of who received how much relief for how long.

The research finds no evidence of crowding out; rather, it finds that parts of the country with higher state provision of welfare actually enjoyed higher levels of charitable income. At the time, Poor Law spending was increasing rapidly, largely due to strains caused by the Industrial Revolution. This increase occurred despite there being no changes in the laws regulating relief during this period.

The increase in Poor Law spending led to concerns among contemporary commentators and economists. Many expressed the belief that the increase in spending was due to a disincentive effect of poor relief and that mandatory contributions through the poor rate would crowd out voluntary giving, thereby undermining social virtue. That public debate now largely repeats itself two hundred years later.

 

Summary of the article ‘Does Welfare Spending Crowd Out Charitable Activity? Evidence from Historical England under the Poor Laws’ by Nina Boberg-Fazlic (University of Duisburg-Essen) and Paul Sharp (University of Southern Denmark), published in the February 2017 issue of the Economic Journal.