THE FINANCIAL POWER OF THE POWERLESS: Evidence from Ottoman Istanbul on socio-economic status, legal protection and the cost of borrowing

In Ottoman Istanbul, privileged groups such as men, Muslims and elites paid more for credit than the under-privileged – the exact opposite of what happens in a modern economy.

New research by Professors Timur Kuran (Duke University) and Jared Rubin (Chapman University), published in the March 2018 issue of the Economic Journal, explains why: a key influence on the cost of borrowing is the rule of law and in particular the extent to which courts will enforce a credit contract.

In pre-modern Turkey, it was the wealthy who could benefit from judicial bias to evade their creditors – and who, because of this default risk, faced higher interest rates on loans. Nowadays, it is under-privileged people who face higher borrowing costs because there are various institutions through which they can escape loan repayment, including bankruptcy options and organisations that will defend poor defaulters as victims of exploitation.

In the modern world, we take it for granted that the under-privileged incur higher borrowing costs than the upper socio-economic classes. Indeed, Americans in the bottom quartile of the US income distribution usually borrow through pawnshops and payday lenders at rates of around 450% per annum, while those in the top quartile take out short-term loans through credit cards at 13-16%. Unlike the under-privileged, the wealthy also have access to long-term credit through home equity loans at rates of around 4%.

The logic connecting socio-economic status to borrowing costs will seem obvious to anyone familiar with basic economics: the higher costs of the poor reflect higher default risk, for which the lender must be compensated.

The new study sets out to test whether the classic negative correlation between socio-economic status and borrowing cost holds in a pre-modern setting outside the industrialised West. To this end, the authors built a data set of private loans issued in Ottoman Istanbul during the period from 1602 to 1799.

These data reveal the exact opposite of what happens in a modern economy: the privileged paid more for credit than the under-privileged. In a society where the average real interest rate was around 19%, men paid an interest surcharge of around 3.4 percentage points; Muslims paid a surcharge of 1.9 percentage points; and elites paid a surcharge of about 2.3 percentage points (see Figure 1).

[Figure 1: interest rate surcharges paid by men, Muslims and elites in Ottoman Istanbul]

What might explain this reversal of relative borrowing costs? Why did socially advantaged groups pay more for credit, not less?

The data led the authors to consider a second factor contributing to the price of credit, one often taken for granted: the partiality of the law. Implicit in the logic that explains relative credit costs in modern lending markets is that financial contracts are enforced impartially whenever the borrower is able to pay. Thus, the rich pay less for credit because they are relatively unlikely to default and because, if they do default, lenders can force repayment through courts whose verdicts are more or less impartial.

But in settings where the courts are biased in favour of the wealthy, creditors will expect compensation for the risk of being unable to obtain restitution. The wealth and judicial partiality effects thus work against each other. The former lowers the credit cost for the rich; the latter raises it.

Islamic Ottoman courts served all Ottoman subjects through procedures that were manifestly biased in favour of clearly defined groups. These courts gave Muslims rights that they denied to Christians and Jews. They privileged men over women.

Moreover, because the courts lacked independence from the state, Ottoman subjects connected to the sultan enjoyed favourable treatment. The theory developed in the new study explains why the weak legal power of the under-privileged could translate into strong financial power: because the courts were more likely to hold them to their obligations, lenders could extend them credit at lower rates.

More generally, this research suggests that in a free financial market, any hindrance to the enforcement of a credit contract will raise the borrower’s credit cost. Just as judicial biases in favour of the wealthy raise their interest rates on loans, institutions that allow the poor to escape loan repayment – bankruptcy options, shielding of assets from creditors, organisations that defend poor defaulters as victims of exploitation – raise interest rates charged to the poor.

Today, wealth and credit cost are negatively correlated for multiple reasons. The rich benefit both from a higher capacity to post collateral and from better enforcement of their credit obligations relative to those of the poor.

 

To contact the authors:
Timur Kuran (t.kuran@duke.edu); Jared Rubin (jrubin@chapman.edu)

Medieval origins of Spain’s economic geography

The frontier of medieval warfare between Christian and Muslim armies in southern Spain provides a surprisingly powerful explanation of current low-density settlement patterns in those regions. This is the central finding of research by Daniel Oto-Peralías (University of St Andrews), presented at the Royal Economic Society's annual conference in March 2018.

His study notes that southern Spain is one of the most sparsely populated areas in Europe, surpassed only by parts of Iceland and northern Scandinavia. This outcome has roots going back to medieval times, when Spain's southern plateau was a battlefield between Christian and Muslim armies.

The study documents that Spain stands out in Europe with an anomalous settlement pattern characterised by a very low density in its southern half. Among the ten European regions with the lowest settlement density, six are from southern Spain (while the other four are from Iceland, Norway, Sweden and Finland).


On average, only 29.8% of 10km² grid cells in southern Spain are inhabited, a much lower share than in the rest of Europe, where the average is 74.4%. Extreme geographical and climatic conditions do not seem to be the reason for this low settlement density, which the author refers to as the 'Spanish anomaly'.

After ruling out geography as the main explanatory factor for the ‘Spanish anomaly’, the research investigates its historical roots by focusing on the Middle Ages, when the territory was retaken by the Christian kingdoms from Muslim rule.

The hypothesis is that the region's character as a militarily insecure frontier conditioned the colonisation of the territory. It is tested by exploiting the geographical discontinuity in military insecurity created by the Tagus River in central Spain: historical 'accidents' made the colonisation of the area south of the river very different from colonisation north of it.

The invasions of the North African Almoravid and Almohad empires turned the territory south of the Tagus into a battlefield for a century and a half, with the river serving as a natural defensive border. Continuous warfare and insecurity heavily conditioned the colonisation of this frontier region, which was characterised by the leading role of the military orders as agents of colonisation, scarcity of population and a livestock-oriented economy. The result was a prominence of castles, an absence of villages and, consequently, a spatial distribution of population with a very low density of settlements.

The empirical analysis reveals a large difference in settlement density across the Tagus, whereas there are no differences in geographical and climatic variables across it. In addition, the discontinuity in settlement density already existed in the sixteenth and eighteenth centuries, so it is not the result of recent migration or urban development. Preliminary evidence also indicates that the territory exposed to the medieval ranching frontier is relatively poorer today.

Thus, the study shows that historical frontiers can decisively shape the economic geography of countries. Using medieval Spain as a case study, it illustrates how exposure to warfare and insecurity – typical of medieval frontiers – created incentives for a militarised colonisation based on a few fortified settlements and a livestock-oriented economy, conditioning the occupation of the territory to such an extent that it became one of the most sparsely populated areas in Europe. Given the ubiquity of frontiers in history, the mechanisms highlighted in the analysis are of general interest and may operate in other contexts.

THE IMPACT OF MALARIA ON EARLY AFRICAN DEVELOPMENT: Evidence from the sickle cell trait

Poster: 'Keep out malaria mosquitoes repair your torn screens'. U.S. Public Health Service, 1941–45.

While malaria historically claimed millions of African lives, it did not hold back the continent’s economic development. That is one of the findings of new research by Emilio Depetris-Chauvin (Pontificia Universidad Católica de Chile) and David Weil (Brown University), published in the Economic Journal.

Their study uses data on the prevalence of the gene that causes sickle cell disease to estimate death rates from malaria for the period before the Second World War. They find that in parts of Africa with high malaria transmission, one in ten children died from malaria or sickle cell disease before reaching adulthood – a death rate more than twice the current burden of malaria in these regions.

 

According to the World Health Organization, the malaria mortality rate declined by 29% between 2010 and 2015. This was a major public health accomplishment, although with 429,000 annual deaths, the disease remains a terrible scourge.

Countries where malaria is endemic are also, on average, very poor. This correlation has led economists to speculate about whether malaria is a driver of poverty. But addressing that issue is difficult because of a lack of data. Poverty in the tropics has long historical roots, and while there are good data on malaria prevalence in the period since the Second World War, there is no World Malaria Report for 1900, 1800 or 1700.

Biologists only came to understand the nature of malaria in the late nineteenth century. Even today, trained medical personnel have trouble distinguishing malaria from other diseases without the use of microscopy or diagnostic tests. Accounts from travellers and other historical records provide some evidence of the impact of malaria going back millennia, but these are hardly sufficient to draw firm conclusions (see, for example, Akyeampong, 2006; Mabogunje and Richards, 1985).

This study addresses the lack of information on malaria’s impact historically by using genetic data. In the worst afflicted areas, malaria left an imprint on the human genome that can be read today.

Specifically, the researchers look at the prevalence of the gene that causes sickle cell disease. Carrying one copy of this gene provided individuals with a significant level of protection against malaria, but people who carried two copies of the gene died before reaching reproductive age.

Thus, the degree of selective pressure exerted by malaria determined the equilibrium prevalence of the gene in the population. By measuring the prevalence of the gene in modern populations, it is possible to back out estimates of the severity of malaria historically.

In areas of high malaria transmission, 20% of the population carries the sickle cell trait. The researchers estimate that this implies that historically 10-11% of children died from malaria or sickle cell disease before reaching adulthood. Such a death rate is more than twice the current burden of malaria in these regions.
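
To see how a present-day carrier share can be mapped to a historical death rate, consider the minimal sketch below. It is not the authors' model, which is richer: it simply assumes Hardy-Weinberg genotype proportions, complete malaria protection for carriers and that children born with two copies of the gene do not survive to reproduce, so that the observed carrier share pins down the selective pressure malaria must have exerted.

```python
# A back-of-the-envelope sketch (not the authors' model): map the modern
# sickle cell carrier share to the implied historical malaria death rate,
# assuming Hardy-Weinberg proportions, full protection for carriers (AS)
# and that homozygous (SS) children die before reproducing.
from math import sqrt

def implied_malaria_burden(carrier_share):
    """Back out the implied historical malaria burden from the AS carrier share."""
    # Solve 2q(1 - q) = carrier_share for the sickle allele frequency q (q < 0.5).
    q = (1 - sqrt(1 - 2 * carrier_share)) / 2
    p = 1 - q
    # At the balancing-selection equilibrium with SS fitness 0 and AS fitness 1,
    # the selection coefficient s against AA (malaria mortality) satisfies q = s / (s + 1).
    s = q / (1 - q)
    # Share of all children dying from malaria (AA homozygotes) or sickle cell disease (SS).
    child_deaths = p**2 * s + q**2
    return q, s, child_deaths

q, s, deaths = implied_malaria_burden(0.20)
print(f"sickle allele frequency ~ {q:.3f}")
print(f"implied malaria mortality among non-carriers ~ {s:.1%}")
print(f"implied child deaths from malaria or sickle cell ~ {deaths:.1%}")
```

Fed the 20% carrier share mentioned above, this toy calculation implies a child death rate from malaria or sickle cell disease of roughly 11%, in the same ballpark as the published figure, though the study's actual estimates rest on a more detailed model.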

Comparing the most affected areas with those least affected, malaria may have been responsible for a ten percentage point difference in the probability of surviving to adulthood. In areas of high malaria transmission, the researchers estimate that life expectancy at birth was reduced by approximately five years.

Having established the magnitude of malaria’s mortality burden, the researchers then turn to its economic effects. Surprisingly, they find little reason to believe that malaria held back development. A simple life cycle model suggests that the disease was not very important, primarily because the vast majority of deaths that it caused were among the very young, in whom society had invested few resources.

This model-based finding is corroborated by the findings of a statistical examination. Within Africa, areas with higher malaria burden, as evidenced by the prevalence of the sickle cell trait, do not show lower levels of economic development or population density in the colonial era data examined in this study.

 

To contact the authors:  David Weil, david_weil@brown.edu

EFFECTS OF COAL-BASED AIR POLLUTION ON MORTALITY RATES: New evidence from nineteenth century Britain

Samuel Griffiths (1873) The Black Country in the 1870s. In Griffiths’ Guide to the iron trade of Great Britain.

Industrialised cities in mid-nineteenth century Britain probably suffered from similar levels of air pollution as urban centres in China and India do today. What’s more, the damage to health caused by the burning of coal was very high, reducing life expectancy by more than 5% in the most polluted cities like Manchester, Sheffield and Birmingham. It was also responsible for a significant proportion of the higher mortality rates in British cities compared with rural parts of the country.

 These are among the findings of new research by Brian Beach (College of William & Mary) and Walker Hanlon (NYU Stern School of Business), which is published in the Economic Journal. Their study shows the potential value of history for providing insights into the long-run consequences of air pollution.

From Beijing to Delhi and Mexico City to Jakarta, cities across the world struggle with high levels of air pollution. To what extent does severe air pollution affect health and broader economic development for these cities? While future academics will almost surely debate this question, assessing the long-run consequences of air pollution for modern cities will not be possible for decades.

But severe air pollution is not a new phenomenon; Britain’s industrial cities of the nineteenth century, for example, also faced very high levels of air pollution. Because of this, researchers argue that history has the potential to provide valuable insights into the long-run consequences of air pollution.

One challenge in studying historical air pollution is that direct pollution measures are largely unavailable before the mid-twentieth century. This study shows how historical pollution levels in England and Wales can be inferred by combining data on the industrial composition of employment in local areas in 1851 with information on the amount of coal used per worker in each industry.

This makes it possible to estimate the amount of coal used in each of 581 districts covering all of England and Wales. Because coal was by far the most important pollutant in Britain in the nineteenth century (as well as for much of the twentieth century), this provides a way of approximating local industrial pollution emissions.
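
In essence, the imputation is a weighted sum: a district's coal use is approximated by adding up, across industries, local employment times that industry's coal use per worker. The sketch below illustrates the arithmetic with purely hypothetical industry names, coal intensities and employment counts; none of these numbers come from the study.

```python
# Illustrative sketch of the imputation described above, with made-up numbers:
# estimated district coal use = sum over industries of 1851 local employment
# in that industry times the industry's coal use per worker.

# Hypothetical coal intensities (tons of coal per worker per year).
COAL_PER_WORKER = {"iron and steel": 30.0, "textiles": 12.0, "agriculture": 0.5}

# Hypothetical 1851 employment counts for a single district.
district_employment = {"iron and steel": 5_000, "textiles": 20_000, "agriculture": 8_000}

def estimated_coal_use(employment, coal_per_worker):
    """Weighted sum of workers in each industry times that industry's coal intensity."""
    return sum(workers * coal_per_worker.get(industry, 0.0)
               for industry, workers in employment.items())

total = estimated_coal_use(district_employment, COAL_PER_WORKER)
print(f"Estimated coal use for this district: {total:,.0f} tons per year")
```

Repeating this calculation for every district yields a relative measure of local industrial coal use, and hence a proxy for industrial pollution emissions.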

The results are consistent with what historical sources suggest: the researchers find high levels of coal use in a broad swath of towns stretching from Lancashire and the West Riding down into Staffordshire, as well as in the areas around Newcastle, Cardiff and Birmingham.

By comparing measures of local coal-based pollution to mortality data, the study shows that air pollution was a major contributor to mortality in Britain in the mid-nineteenth century. In the most polluted locations – places like Manchester, Sheffield and Birmingham – the results show that air pollution resulting from industrial coal use reduced life expectancy by more than 5%.

One potential concern is that locations with more industrial coal use could have had higher mortality rates for other reasons. For example, people living in these industrial areas could have been poorer, infectious disease may have been more common or jobs may have been more dangerous.

The researchers deal with this concern by looking at how coal use in some parts of the country affected mortality in other areas that were, given the predominant wind direction, typically downwind. They show that locations which were just downwind of major coal-using areas had higher mortality rates than otherwise similar locations which were just upwind of these areas.

These results help to explain why cities in the nineteenth century were much less healthy than more rural areas – the so-called urban mortality penalty. Most existing work argues that the high mortality rates observed in British cities in the nineteenth century were due to the impact of infectious diseases, bad water and unclean food.

The new results show that, in fact, about a third of the higher mortality rate in nineteenth-century cities was due to exposure to high levels of air pollution from the burning of coal by industry.

In addition to assessing the effects of coal use on mortality, the researchers use these effects to back out very rough estimates of historical particulate pollution levels. Their estimates indicate that by the mid-nineteenth century, industrialised cities in Britain were probably as polluted as industrial cities in places like China and India are today.

These findings shed new light on the impact of air pollution in nineteenth century Britain and lay the groundwork for further research analysing the long-run effects of air pollution in cities.

 

To contact the authors:  Brian Beach (bbbeach@wm.edu); Walker Hanlon (whanlon@stern.nyu.edu)

PRE-REFORMATION ROOTS OF THE PROTESTANT ETHIC: Evidence of a nine-century-old belief in the virtues of hard work stimulating economic growth

Cistercians at work, in a detail from the Life of St Bernard of Clairvaux, illustrated by Jörg Breu the Elder (1500). From Wikimedia Commons <https://en.wikipedia.org/wiki/Cistercians>

Max Weber’s well-known conception of the ‘Protestant ethic’ was not uniquely Protestant: according to this research published in the September 2017 issue of the Economic Journal, Protestant beliefs in the virtues of hard work and thrift have pre-Reformation roots.

The Order of Cistercians – a Catholic order that spread across Europe 900 years ago – did exactly what the Protestant Reformation is supposed to have done four centuries later: the Order stimulated economic growth by instigating an improved work ethic in local populations.

What’s more, the impact of this work ethic survives today: people living in parts of Europe that were home to Cistercian monasteries more than 500 years ago tend to regard hard work and thrift as more important compared with people living in regions that were not home to Cistercians in the past.

The researchers begin their analysis with an event that has recently been commemorated in several countries across Europe. Exactly 500 years ago, Martin Luther allegedly nailed 95 theses to the door of the Castle Church in Wittenberg, and thereby established Protestantism.

Whether the emergence of Protestantism had enduring consequences has long been debated by social scientists. One of the most influential sociologists, Max Weber, famously argued that the Protestant Reformation was instrumental in facilitating the rise of capitalism in Western Europe.

In contrast to Catholicism, Weber said, Protestantism commends the virtues of hard work and thrift. These values, which he referred to as the Protestant ethic, laid the foundation for the eventual rise of modern capitalism.

But was Weber right? The new study suggests that Weber was right in stressing the importance of a cultural appreciation of hard work and thrift, but quite likely wrong in tracing the origins of these values to the Protestant Reformation.

The researchers use a theoretical model to demonstrate how a small group of people with a relatively strong work ethic – the Cistercians – could plausibly have improved the average work ethic of an entire population within the span of 500 years.

The researchers then test the theory statistically using historical county data from England, where the Cistercians arrived in the twelfth century. England is of particular interest as it has high quality historical data and because, centuries later, it became the epicentre of the Industrial Revolution.

The researchers document that English counties with more Cistercian monasteries experienced faster population growth – a leading measure of economic growth in pre-modern times. The data reveal that this is not simply because the monks were good at choosing locations that would have prospered regardless.

The researchers even detect an impact on economic growth centuries after the king closed down all the monasteries and seized their wealth on the eve of the Protestant Reformation. Thus, the legacy of the monks cannot simply be the wealth that they left behind.

Instead, the monks seem to have left an imprint on the cultural values of the population. To document this, the researchers combine historical data on the location of Cistercian monasteries with a contemporary dataset on the cultural values of individuals across Europe.

They find that people living in regions in Europe that were home to Cistercian monasteries more than 500 years ago reveal different cultural values than those living in other regions. In particular, these individuals tend to regard hard work and thrift as more important compared with people living in regions that were not home to Cistercians in the past.

This study is not the first to question Max Weber’s influential hypothesis. While the majority of statistical analyses show that Protestant regions are more prosperous than others, the reason for this may not be the Protestant ethic as emphasised by Weber.

For example, a study by the economists Sascha Becker and Ludger Woessmann demonstrates that Protestant regions of Prussia prospered more than others because of the improved schooling that followed from the instructions of Martin Luther, who encouraged Christians to learn to read so that they could study the Bible.

 

‘Pre-Reformation Roots of the Protestant Ethic’ by Thomas Barnebeck Andersen, Jeanet Bentzen, Carl-Johan Dalgaard and Paul Sharp is published in the September 2017 issue of the Economic Journal.

Thomas Barnebeck Andersen and Paul Richard Sharp are at the University of Southern Denmark. Jeanet Sinding Bentzen and Carl-Johan Dalgaard are at the University of Copenhagen.

 

THE TOWER OF BABEL: why we are still a long way from everyone speaking the same language

Nearly a third of the world's 6,000-plus distinct languages have more than 35,000 speakers. But despite the big communications advantages of a few widely spoken languages such as English and Spanish, there is no sign of a systematic decline in the number of people speaking this large group of relatively small languages.


These are among the findings of a new study by Professor David Clingingsmith, published in the February 2017 issue of the Economic Journal. His analysis explains how it is possible to have a stable situation in which the world has a small number of very large languages and a large number of small languages.

Does this mean that the benefits of a universal language could never be so great as to induce a sweeping consolidation of language? No, the study concludes:

‘Consider the example of migrants, who tend to switch to the language of their adopted home within a few generations. When the incentives are large enough, populations do switch languages.’

‘The question we can’t yet answer is whether recent technological developments, such as the internet, will change the benefits enough to make such switching worthwhile more broadly.’

Why don’t all people speak the same language? At least since the story of the Tower of Babel, humans have puzzled over the diversity of spoken languages. As with the ancient writers of the book of Genesis, economists have also recognised that there are advantages when people speak a common language, and that those advantages only increase when more people adopt a language.

This simple reasoning predicts that humans should eventually adopt a common language. The growing role of English as the world's lingua franca and the radical shrinking of distances enabled by the internet have led many people to speculate that the emergence of a universal human language is, if not imminent, at least on the horizon.

There are more than 6,000 distinct languages spoken in the world today. Just 16 of these languages are the native languages of fully half the human population, while the median language is known by only 10,000 people.

The implications might appear to be clear: if we are indeed on the road to a universal language, then the populations speaking the vast majority of these languages must be shrinking relative to the largest ones, on their way to extinction.

The new study presents a very different picture. The author first uses population censuses to produce a new set of estimates of the level and growth of language populations.

The relative paucity of data on the number of people speaking the world’s languages at different points in time means that this can be done for only 344 languages. Nevertheless, the data clearly suggest that the populations of the 29% of languages that have 35,000 or more speakers are stable, not shrinking.

How could this stability be consistent with the very real advantages offered by widely spoken languages? The key is to realise that most human interaction has a local character.

This insight is central to the author's analysis, which shows that even when there are strong benefits to adopting a common language, we can still end up in a world with a small number of very large languages and a large number of small ones. Numerical simulations of the analytical model produce distributions of language sizes that look very much like the one actually observed in the world today.

Summary of the article 'Are the World's Languages Consolidating? The Dynamics and Distribution of Language Populations' by David Clingingsmith, published in the February 2017 issue of the Economic Journal.

WELFARE SPENDING DOESN’T ‘CROWD OUT’ CHARITABLE WORK: Historical evidence from England under the Poor Laws

Cutting the welfare budget is unlikely to lead to an increase in private voluntary work and charitable giving, according to research by Nina Boberg-Fazlic and Paul Sharp.

Their study of England in the late eighteenth and early nineteenth century, published in the February 2017 issue of the Economic Journal, shows that parts of the country where there was increased spending under the Poor Laws actually enjoyed higher levels of charitable income.

Edmé Jean Pigal, c. 1800. An amputee beggar holds out his hat to a well-dressed man who stands with his hands in his pockets. The artist's caption translates as: 'I don't give to idlers'. From Wikimedia Commons.

The authors conclude:

‘Since the end of the Second World War, the size and scope of government welfare provision has come increasingly under attack.’

‘There are theoretical justifications for this, but we believe that the idea of ‘crowding out’ – public spending deterring private efforts – should not be one of them.’

'On the contrary, there even seems to be evidence that government can set an example for private donors.'

Why does Europe have considerably higher welfare provision than the United States? One long debated explanation is the existence of a ‘crowding out’ effect, whereby government spending crowds out private voluntary work and charitable giving. The idea is that taxpayers feel that they are already contributing through their taxes and thus do not contribute as much privately.

Crowding out makes intuitive sense if people are only concerned with the total level of welfare provided. But many other factors might play a role in the decision to donate privately and, in fact, studies on this topic have led to inconclusive results.

The idea of crowding out has also caught the imagination of politicians, most recently as part of the flagship policy of the UK’s Conservative Party in the 2010 General Election: the so-called ‘big society’. If crowding out holds, spending cuts could be justified by the notion that the private sector will take over.

The new study shows that this is not necessarily the case. In fact, the authors provide historical evidence for the opposite. They analyse data on per capita charitable income and public welfare spending in England between 1785 and 1815. This was a time when welfare spending was regulated locally under the Poor Laws, which meant that different areas in England had different levels of spending and generosity in terms of who received how much relief for how long.

The research finds no evidence of crowding out; rather, it finds that parts of the country with higher state provision of welfare actually enjoyed higher levels of charitable income. At the time, Poor Law spending was increasing rapidly, largely due to strains caused by the Industrial Revolution. This increase occurred despite there being no changes in the laws regulating relief during this period.

The increase in Poor Law spending led to concerns among contemporary commentators and economists. Many believed that the increase was due to a disincentive effect of poor relief, and that mandatory contributions through the poor rate would crowd out voluntary giving, thereby undermining social virtue. That public debate is now largely repeating itself, two hundred years later.

 

Summary of the article 'Does Welfare Spending Crowd Out Charitable Activity? Evidence from Historical England under the Poor Laws' by Nina Boberg-Fazlic (University of Duisburg-Essen) and Paul Sharp (University of Southern Denmark), published in the February 2017 issue of the Economic Journal.

France’s Nineteenth Century Wine Crisis: the impact on crime rates

Street wine merchant, France, nineteenth century. From Wikimedia Commons.

 

The phylloxera crisis in nineteenth century France destroyed 40% of the country’s vineyards, devastating local economies. According to research by Vincent Bignon, Eve Caroli, and Roberto Galbiati, the negative shock to wine production led to a substantial increase in property crime in the affected regions. But their study, published in the February 2017 issue of the Economic Journal, also finds that there was a significant fall in violent crimes because of the reduction in alcohol consumption.

It has long been debated whether crime responds to economic conditions. In particular, do crime rates increase because of financial crises or major downsizing events in regions heavily specialised in some industries?

Casual observation and statistical evidence suggest that property crimes are more frequent during economic crises. For example, the United Nations Office on Drugs and Crime has claimed that, in a sample of 15 countries, theft increased sharply during the last economic crisis.[1]

These issues are important because crime is also known to have a damaging impact on economic growth by discouraging business and talented workers from settling in regions with high rates of crime. If an economic downturn triggers an increase in the crime rate, it could have long-lasting effects by discouraging recovery.

But since multiple factors can simultaneously affect economic conditions and the propensity to commit crime, identifying a causal effect of economic conditions on crime rates is challenging.

The new research addresses the issue by examining how crime rates were affected by a major economic crisis that massively hit wine production, France’s most iconic industry, in the nineteenth century.

The crisis was triggered by a near-microscopic insect named phylloxera vastatrix. It originally lived in North America and did not reach Europe in the era of sailing ships, since the transatlantic journey took so long that the insect was dead on arrival.

Steam power provided the greater speed needed for phylloxera to survive the trip and it arrived in France in 1863 on imported US vines. Innocuous in its original ecology, phylloxera proved very destructive for French vineyards by sucking the sap of the vines. Between 1863 and 1890, it destroyed about 40% of them, thus causing a significant loss of GDP.

Because phylloxera took time to spread, not all districts were hit at the same moment; and because districts differed widely in their suitability for growing wine, not all were hit equally hard. The phylloxera crisis is therefore an ideal natural experiment for identifying the impact of an economic crisis on crime, because it generated exogenous variation in economic activity across 75 French districts.

To quantify the effect, the researchers collected local administrative data on the evolution of property and violent crime rates, as well as minor offences. They use these data to study whether crime increased significantly after the arrival of phylloxera and the destruction of vineyards that it entailed.

The results suggest that the phylloxera crisis caused a substantial increase in property crime rates and a significant decrease in violent crime. The effect on property crime was driven by the negative income shock induced by the crisis: people coped with the loss of income by engaging in property crime. At the same time, the reduction in alcohol consumption induced by the crisis contributed to the fall in violent crime.

From a policy point of view, these results suggest that crises and downsizing events can have long-lasting effects. By showing that the near-disappearance of an industry (in this case only a temporary phenomenon) can trigger lasting negative consequences for local districts through higher crime rates, the study underlines that this issue should be high on the policy agenda in times of crisis.

 

Summary of the article 'Stealing to Survive? Crime and Income Shocks in Nineteenth Century France' by Vincent Bignon, Eve Caroli and Roberto Galbiati, published in the February 2017 issue of the Economic Journal.

[1] 'Monitoring the impact of economic crisis on crime', United Nations Office on Drugs and Crime, 2012. This effect was also noted by the French Observatoire national de la délinquance et des réponses pénales, which reported that burglaries increased sharply in France between 2007 and 2012.