Extractive Policies and Economic Outcomes: the Unitary Origins of the Present-Day North-South Divide in Italy

by Guilherme de Oliveira (Columbia Law School) and Carmine Guerriero (University of Bologna)


Italy emerged from the Congress of Vienna as a carefully constructed equilibrium among eight absolutist states, all under Austrian control except the Kingdom of the Two Sicilies, dominated by the Bourbons, and the Kingdom of Sardinia, ruled by the Savoys and erected as a barrier between Austria and France. This status quo fed the ambitions of the Piedmontese dynasty, turning it into the champion of the liberals, who longed to establish a unitary state and fomented the unrest of the early decades of the century. Although ineffective, these insurrections forced the implementation, especially in the South, of the liberal reforms first introduced by the Napoleonic armies, and allowed a rising bourgeoisie, attracted by expanding international demand, to acquire the landed nobility’s domains and prioritize export-oriented farming. Among these activities, arboriculture and sericulture, which were up to 60 times more lucrative than wheat farming, soon became dominant, constituting half of the 1859 exports. Consequently, farming productivity increased, reaching similar levels in the Northern farms and the Southern latifundia, but the almost exclusive specialization in the agrarian sectors left the Italian economy stagnant, as implied by the evolution of GDP per capita in the regions in our sample, which we group by their political relevance for the post-unitary rulers as inversely proxied by Distance-to-Enemies (see the upper-left graph of figure 1). This is the distance between each region’s main city and the capital of the fiercest enemy of the Savoys (i.e., Vienna over the 1801-1813, 1848-1881, and 1901-1914 periods, and Paris otherwise), and it is lowest for Veneto, which we therefore label the “high” political relevance cluster.
Similarly, we refer to the regions with above-average (below-average) values as the “low” (“middle”) political relevance group, labeling the former “South,” and to the union of the high and middle relevance regions with the key Kingdom of Sardinia regions, i.e., Liguria and Piedmont, as “North.”

 

Figure 1: Income, Political Power, Land Property Taxes, and Railway Diffusion

Note: “GDP-L” is the income in 1861 lire per capita, “Political-Power” is the share of prime ministers born in the region averaged over the previous decade, “Land-Taxes” is the land property tax revenues in 1861 lire per capita, and “Railway” is the railway length built in the previous decade in km per square km. The _M (_H) cluster includes Abruzzi, Emilia Romagna, Lombardy, Marche, Tuscany, and Umbria (Veneto), whereas KS gathers Liguria and Piedmont. The North cluster includes the M, H, and KS groups, while _L gathers Apulia, Basilicata, Calabria, Campania, Lazio, and Sicily. See de Oliveira and Guerriero (2017) for each variable’s sources and definition.

 

Despite some pre-unitary differences, both clusters were largely underdeveloped relative to the leading European powers at unification, and the causes of this backwardness ranged from the scarcity of coal and infrastructure to the shortage of human and real capital. Crucially, none of these conditions differed significantly across groups since, unlike the Kingdom of Sardinia, none of the other pre-unitary states established a virtuous balance between military spending and investment in valuable public goods such as railways and literacy. Even worse, they intensified taxation only when necessary to finance the armies needed to tame internal unrest, which was especially fierce in the Kingdom of the Two Sicilies. The bottom graphs of figure 1 exhibit this pattern by displaying the key direct tax, the land property duty, and the main non-military expenditure, railway investment.

Meanwhile, the power of the Piedmontese parliament relative to the king grew steadily, and its leader, Camillo di Cavour, secured an alliance with France for a future conflict against Austria by supporting the French in the Crimean War. The 1859 French-Piedmontese victory against the Habsburgs then triggered insurrections in Tuscany, the conquest of the South by Garibaldi, and the proclamation of the Kingdom of Italy in 1861. Dominated by a narrow elite of northerners (see the upper-right graph of figure 1), the new state favoured the Northern export-oriented farming and manufacturing industries while directing public spending to the North and burdening the Southern populations with the taxes necessary to finance these policies. To illustrate, the 1887 protectionist reform, instead of safeguarding the arboriculture sectors crushed by the 1880s fall in prices, shielded the Po Valley wheat farming and those Northern textile and manufacturing industries that had survived the liberal years thanks to state intervention. While the former dominated the allocation of military clothing contracts, the latter monopolized both coal mining permits and public contracts. A similar logic guided the assignment of monopoly rights in the steamboat construction and navigation sectors and, notably, public spending on railways, which represented 53 percent of the 1861-1911 total. Over this period, indeed, Liguria and Piedmont obtained railway spending per square km 3 (4) times larger than Veneto (the other regions). Moreover, the aim of this effort “was more the military one of controlling the national territory, especially in the South, than favouring commerce” [Iuzzolino et al. 2011, p. 22]. Crucially, this infrastructural program was financed through highly unbalanced land property taxes, which in turn drained the key source of savings available for investment in the growth sectors absent a developed banking system.
The 1864 reform set a target revenue of 125 million lire to be raised from 9 districts resembling the pre-unitary states. The ex-Papal State was assigned 10 percent, the ex-Kingdom of the Two Sicilies 40 percent, the rest of the state 29 percent, and the ex-Kingdom of Sardinia only 21 percent. To weigh this burden down further, a 20 percent surcharge was added by 1868, creating the disparities displayed in the bottom-left graph of figure 1.
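Purely as an arithmetic check on the figures quoted above (the district labels below are ours), the 1864 allocation and the 1868 surcharge work out as follows:

```python
# Target revenue and district shares from the 1864 reform as quoted in the text
# (amounts in 1861 lire; district labels are ours).
target = 125_000_000
shares = {
    "ex-Papal State": 0.10,
    "ex-Kingdom of the Two Sicilies": 0.40,
    "rest of the state": 0.29,
    "ex-Kingdom of Sardinia": 0.21,
}
quotas = {district: target * share for district, share in shares.items()}
# The 20 percent surcharge added by 1868 raised every quota proportionally.
with_surcharge = {district: quota * 1.20 for district, quota in quotas.items()}
for district, amount in with_surcharge.items():
    print(f"{district}: {amount:,.0f} lire")
```

The ex-Kingdom of the Two Sicilies alone thus carried 50 million lire of the original target, raised to 60 million by the surcharge.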

The 1886 cadastral reform opened the way to more egalitarian policies and, after the First World War, to the harmonization of tax rates, but by then the impact of extraction on the economies of the two blocs was irreversible. While a flourishing manufacturing sector was established in the North, the mix of low public spending and heavy taxation squeezed Southern investment to the point that the local industry and export-oriented farming were wiped out. Moreover, extraction destroyed the relationship between the central state and the Southern population, unleashing first a civil war, which brought about 20,000 victims by 1864 and the militarization of the area, and then favouring emigration. Because of these tensions, the population began to display a progressively weaker civic culture, as implied by the fall in our proxy for social capital depicted in the bottom-left graph of figure 2.

The fascist regime’s aversion to migration and its rush to rearmament first, and then the 1960s pro-South state aid, have further affected the divide, which can nonetheless be safely attributed to the extractive policies selected by the unitary state between 1861 and 1911.

Empirical Evidence

Because the 13 regions remained agrarian over our 1801-1911 sample period, we capture the extent of extraction with land property taxation, and farming productivity with the geographic drivers of the profitability of the arboriculture and sericulture sectors. In addition, we use as inverse metrics of each region’s tax-collection costs (political relevance) the share of the previous decade during which the region partook in external wars (Distance-to-Enemies).

Our OLS estimates with region and time fixed effects imply that pre-unitary revenues from land property taxes in 1861 lire per capita decrease with each region’s farming productivity but not with its relevance for the Piedmontese elite, whereas the opposite is true for the post-unitary revenues. Moreover, post-unitary distortions in land property tax revenues (proxied by the difference between the observed revenues and the counterfactual ones forecast from the pre-unitary estimates; see the upper-left graph of figure 2) and the severity of the other extractive policies (negatively captured by the tax-collection costs and the political relevance; see below) positively determined the opening gaps in culture, literacy (see the bottom-right graph of figure 2), and development, i.e., income in 1861 lire per capita, the gross saleable farming product, and the textile industry value added in thousands of 1861 lire per capita.

 

Figure 2: The Rise of the North-South Divide

Note: “Distortion-LT” is the land property tax distortions in 1861 lire per capita, “Distortion-R” is the difference between Railway and the forecast length of railway built in the previous decade in km per square km, “Culture-N” is the normalized share of the active population engaged in political, union, and religious activities, and “Illiterates-N” is the normalized percentage of illiterates in the population over the age of six. See figure 1 for the definition of each cluster and de Oliveira and Guerriero (2017) for each variable’s sources and definition.

 

These results are consistent with the predictions of the model we lay out to inform our test. First, because of limited state capacity, the pre-unitary states should reduce extraction when confronted with a more productive, and therefore more powerful, citizenry, whereas the extractive power of the unitary state should be sufficiently strong to make taxation of the South profitable at the margin and thus crucially shaped by the South’s relevance. Second, extraction should also induce the Southern citizenry to prefer private to public good production, and Southern investment and welfare should rise with the factors limiting taxation, i.e., marginal tax-collection costs and political relevance.

Since our proxies for the drivers of extraction are driven by either geographic features independent of human effort or events outside the control of policy-makers, reverse causation is not an issue. Nevertheless, our results could still be produced by unobserved heterogeneity. To evaluate this possibility, we control for the interactions of the time effects with the structural conditions differentiating the two blocs in 1861 and considered key by the extant literature (Franchetti and Sonnino, 1876; Gramsci, 1966; Barbagallo, 1980; Krugman, 1981), i.e., the pre-unitary inclusiveness of political institutions, land ownership fragmentation, the coal price, and the railway length. Including these controls has little effect on our results. Finally, two extra pieces of evidence rule out the possibility that extraction was an acceptable price for Italian development (Romeo, 1987). First, extraction did not shape the manufacturing sector value added. Second, while the pre-unitary length of railway additions was affected only by farming productivity, the post-unitary one was driven only by political relevance, proving useless in creating a unified market (see the upper-right graph of figure 2).
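To make the estimating strategy concrete, here is a minimal sketch of a two-way fixed-effects OLS on synthetic panel data; the variable names, coefficients, and data are all invented for illustration and are not the authors’ code or data:

```python
import numpy as np

rng = np.random.default_rng(0)
R, T = 13, 11                       # 13 regions, 11 decades (1801-1911)
region = np.repeat(np.arange(R), T)
decade = np.tile(np.arange(T), R)
productivity = rng.normal(size=R * T)  # stand-in for the geographic productivity drivers
relevance = rng.normal(size=R * T)     # stand-in for (inverse) Distance-to-Enemies

beta_prod, beta_rel = -0.5, 0.8        # invented "true" effects
tax = (0.3 * region + 0.1 * decade     # region and time fixed effects
       + beta_prod * productivity + beta_rel * relevance
       + 0.05 * rng.normal(size=R * T))

# OLS with dummies for regions and decades (one of each dropped, intercept added).
X = np.column_stack([
    np.ones(R * T),
    productivity,
    relevance,
    (region[:, None] == np.arange(1, R)).astype(float),
    (decade[:, None] == np.arange(1, T)).astype(float),
])
coef, *_ = np.linalg.lstsq(X, tax, rcond=None)
print(coef[1], coef[2])  # estimates close to beta_prod and beta_rel
```

The fixed effects absorb anything constant within a region or within a decade, so the two slope estimates are identified only by within-panel variation, which is the logic the text describes.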

Conclusions

Although the North-South divide has been linked to post-unitary policies before (Salvemini, 1963; Cafagna, 1989), nobody has formally clarified how the unitary state solved the trade-off between extraction-related losses and rent-seeking gains. In doing so, we also contribute to the literature comparing extractive and inclusive institutions (North et al., 2009; Acemoglu and Robinson, 2012), while endogenizing the extent of extraction in a setup sufficiently general to be applied to other instances, such as the post-Civil War USA.

References

The TOWER OF BABEL: why we are still a long way from everyone speaking the same language

Nearly a third of the world’s 6,000-plus distinct languages have more than 35,000 speakers. But despite the big communications advantages of a few widely spoken languages such as English and Spanish, there is no sign of a systematic decline in the number of people speaking this large group of relatively small languages.


These are among the findings of a new study by Professor David Clingingsmith, published in the February 2017 issue of the Economic Journal. His analysis explains how it is possible to have a stable situation in which the world has a small number of very large languages and a large number of small languages.

Does this mean that the benefits of a universal language could never be so great as to induce a sweeping consolidation of language? No, the study concludes:

‘Consider the example of migrants, who tend to switch to the language of their adopted home within a few generations. When the incentives are large enough, populations do switch languages.’

‘The question we can’t yet answer is whether recent technological developments, such as the internet, will change the benefits enough to make such switching worthwhile more broadly.’

Why don’t all people speak the same language? At least since the story of the Tower of Babel, humans have puzzled over the diversity of spoken languages. As with the ancient writers of the book of Genesis, economists have also recognised that there are advantages when people speak a common language, and that those advantages only increase when more people adopt a language.

This simple reasoning predicts that humans should eventually adopt a common language. The growing role of English as the world’s lingua franca and the radical shrinking of distances enabled by the internet has led many people to speculate that the emergence of a universal human language is, if not imminent, at least on the horizon.

There are more than 6,000 distinct languages spoken in the world today. Just 16 of these languages are the native languages of fully half the human population, while the median language is known by only 10,000 people.

The implications might appear to be clear: if we are indeed on the road to a universal language, then the populations speaking the vast majority of these languages must be shrinking relative to the largest ones, on their way to extinction.

The new study presents a very different picture. The author first uses population censuses to produce a new set of estimates of the level and growth of language populations.

The relative paucity of data on the number of people speaking the world’s languages at different points in time means that this can be done for only 344 languages. Nevertheless, the data clearly suggest that the populations of the 29% of languages that have 35,000 or more speakers are stable, not shrinking.

How could this stability be consistent with the very real advantages offered by widely spoken languages? The key is to realise that most human interaction has a local character.

This insight is central to the author’s analysis, which shows that even when there are strong benefits to adopting a common language, we can still end up in a world with a small number of very large languages and a large number of small ones. Numerical simulations of the analytical model produce distributions of language sizes that look very much like the one that actually obtains in the world today.

Summary of the article ‘Are the World’s Languages Consolidating? The Dynamics and Distribution of Language Populations’ by David Clingingsmith, published in the Economic Journal, February 2017.

Holding Brexiteers to account

by Adrian Williamson, University of Cambridge

Margaret Thatcher and Ted Heath campaigning during the 1975 Common Market referendum, when Conservative leaders took a rather different approach to Europe. Source: http://www.eureferendum.com

The House of Commons has voted overwhelmingly to trigger Article 50, on the explicit basis that this process will be irrevocable and that, at the end of the negotiations, Parliament will have a choice between a hard Brexit (leaving the Single Market and the EEA) and an ultra-hard Brexit (WTO terms, if available).

It follows that arguments about whether the UK should remain in the EU, or should stay in all but name (the so-called Norwegian option), are now otiose. What role can economic historians play as the terms of exit unfold? I think there is an important role for scholars in analysing the promises of the Brexiteers and how feasible these appear in the light of previous experience.

Thus far, the economic debate over Brexit has been conducted on a very general basis. Remainers have argued that leaving the EU spells disaster, whereas Leavers have dismissed such concerns and promised a golden economic future. But what exactly will this future consist of? Doing the best one can, the Brexit proposition must surely be that the rate of economic growth per capita will be significantly higher in the future than it would have been if the UK had retained its EU membership. Since, at the same time, there was to be a massive and permanent reduction in EU and non-EU immigration (from c.330,000 p.a. net immigration to ‘tens of thousands’), it is per capita improvements that will have to be achieved.

The path to this goal will, it is said, be clear once the UK leaves. In particular:

  • the UK will be able to make its own trade deals and become a great global trading nation;
  • the UK can develop a less restrictive regulatory framework than that imposed by the EU;
  • industries such as manufacturing, fisheries and agriculture will revive once the country is no longer ‘tethered to the corpse’ of the EU;
  • the post-referendum devaluation will provide a boost for exporters.

In relation to each of these claims, there is plenty of helpful evidence from economic history. After all, the UK was the first nation to embrace a global trading role. As Keynes pointed out in a famous passage, in 1914:

The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, in such quantity as he might see fit, and reasonably expect their early delivery upon his doorstep; he could at the same moment and by the same means adventure his wealth in the natural resources and new enterprises of any quarter of the world, and share, without exertion or even trouble, in their prospective fruits and advantages…

 Yet, despite this background, and despite the economically advantageous legacies of Empire, the UK spent the period between 1961 and 1973 making increasingly desperate attempts to join a (then much smaller) Common Market. British policymakers were initially dismissive of the European Community. Exports to the Six were thought less important than trade with the Commonwealth. Britain’s initial response was to establish EFTA as a rival free trade area. However, it soon became apparent that this arrangement was lopsided: Britain was part of a free trade area with a population of 89m (including its own 51m), but stood outside the EEC’s tariff walls and population of 170m. Will the 2020s be different from the 1960s? In any event, ‘free trade’ is an elusive concept. As John Biffen, a Tory Trade Minister in the Thatcher government (and no friend of the EU), acknowledged, free trade has never existed ‘outside a textbook’.

As regards decoupling from EU regulations, the UK was, of course, completely free to devise its own regulatory framework prior to accession to the EU in 1973. Nonetheless, it was in this period that much of the current labour market structure, such as protection against unfair dismissal and redundancy, was enacted. EU regulations, such as the Social Chapter, have complemented, not undermined, this domestic framework. In any event, does the evidence suggest that a mature economy such as the UK will be able to establish a more rapid rate of growth with a looser regulatory framework? The obvious comparisons in this respect are the developed North American and Japanese economies. The data suggest that the UK has performed extremely well within the EU framework.

 

Table: GDP per capita (current US$; source: World Bank)

Country    1980      2015      Cumulative increase
USA        12,598    56,116    345%
UK         10,032    43,876    337%
Canada     11,135    43,249    288%
EU          8,314    32,005    285%
Japan       9,308    34,524    271%
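As a quick check, each cumulative increase in the table follows from the 1980 and 2015 levels as (2015 value / 1980 value − 1) × 100; a short script reproduces the figures:

```python
# GDP per capita in current US$, as quoted in the table above (World Bank).
gdp = {
    "USA":    (12_598, 56_116),
    "UK":     (10_032, 43_876),
    "Canada": (11_135, 43_249),
    "EU":     (8_314, 32_005),
    "Japan":  (9_308, 34_524),
}
for country, (y1980, y2015) in gdp.items():
    growth = (y2015 / y1980 - 1) * 100  # cumulative percentage increase
    print(f"{country}: {growth:.0f}%")
```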

 

Of course, much higher rates of growth have recently been achieved in developing economies such as China and India. But it cannot seriously be argued that an economy like the UK, which underwent an industrial revolution in the eighteenth century, can achieve rates of progress comparable to economies that are industrialising now. The whole course of economic history shows that mature economies have much slower rates of growth and that the increases achieved by the USA and the UK over the last few decades are close to optimum performance.

The maturity of the UK economy is also germane to arguments suggesting that it will be possible to revive industries that have suffered long term decline, such as manufacturing, agriculture and fisheries. After all, one consequence of the UK’s early start in manufacturing is that primary industries declined first and most rapidly here. Economic historians have been pointing out since the 1950s that in advanced economies the working population inevitably drifts from agriculture to manufacturing and then from manufacturing to services. In 1973, the American sociologist Daniel Bell greeted the arrival of the post-industrial society. He pointed out that the American economy was the first in the world in which more than 60% of the population were engaged in services, and that this trend was deepening in the USA and elsewhere. Brexit is scarcely likely to reverse these very long-term developments.

The British economy has also had considerable past experience of enforced devaluation (for example in 1931, 1949 and 1967). Research following the 1967 devaluation suggested that a falling pound gave only a temporary fillip to the trade balance, whilst delivering a permanent increase in inflation. Over the same period the West German economy performed extremely strongly, despite a constantly appreciating currency.

Finally, one may question whether the UK can achieve an economic miracle whilst, at the same time, pursuing a very restrictive approach to immigration. Successful economies tend to be extremely open to outsiders, who are both a cause and a consequence of growth. After all, in the pre-1914 golden age to which Keynes referred, there were no controls at all, and the British businessman ‘could secure forthwith, if he wished it, cheap and comfortable means of transit to any country or climate without passport or other formality…and could then proceed abroad to foreign quarters…and would consider himself greatly aggrieved and much surprised at the least interference’. Our putative partners in trade deals are not likely to be offering such access and, if they do, they will want substantial concessions in return.

Of course, past performance is no guarantee of future prosperity. Historic failure does not preclude future success. And sections of British public opinion have, it appears, ‘had enough of experts’. Even so, economic historians can hold up to scrutiny some of the more extravagant claims of the Brexiteers.

 

From NEP-HIS Blog: ‘The market turn: From social democracy to market liberalism’, by Avner Offer

The market turn: From social democracy to market liberalism By Avner Offer, All Souls College, University of Oxford (avner.offer@all-souls.ox.ac.uk) Abstract: Social democracy and market liberalism offered different solutions to the same problem: how to provide for life-cycle dependency. Social democracy makes lateral transfers from producers to dependents by means of progressive taxation. Market liberalism uses […]

via How do we eliminate wealth inequality and financial fragility? — The NEP-HIS Blog

From VOX – Short poppies: the height of WWI servicemen

From Timothy Hatton, Professor of Economics, Australian National University and University of Essex. Originally published on 9 May 2014

Looking at the heights of today’s populations cannot by itself explain which factors matter for long-run trends in health and height. This column highlights the correlates of height in the past using a sample of British army soldiers from World War I. While the socioeconomic status of the household mattered, the local disease environment mattered even more. Better education and modest medical advances led to an improvement in average health, despite the war and depression.

Distribution of heights in a sample of army recruits. From Bailey et al. (2014)

The last century has seen unprecedented increases in the heights of adults (Bleakley et al., 2013). Among young men in western Europe, that increase amounts to about four inches. On average, sons have been taller than their fathers for the last five generations. These gains in height are linked to improvements in health and longevity.

Increases in human stature have been associated with a wide range of improvements in living conditions, including better nutrition, a lower disease burden, and some modest improvement in medicine. But looking at the heights of today’s populations provides limited evidence on the socioeconomic determinants that can account for long-run trends in health and height. For that, we need to understand the correlates of height in the past. Instead of asking why people are so tall now, we should be asking why they were so short a century ago.

In a recent study, Roy Bailey, Kris Inwood and I (Bailey et al. 2014) took a sample of soldiers who joined the British army around the time of World War I. These records are randomly selected from a vast archive of two million service records made available by the National Archives, mainly for the benefit of genealogists searching for their ancestors.

For this study, we draw a sample of servicemen who were born in the 1890s and who would therefore be in their late teens or early twenties when they enlisted. About two thirds of this cohort enlisted in the armed services, so the sample suffers much less from selection bias than would be likely during peacetime, when only a small fraction joined the forces. But we do not include officers, who were taller than those they commanded. And at the other end of the distribution, we also miss some of the least fit, who were likely to be shorter than average.

FULL TEXT HERE

WELFARE SPENDING DOESN’T ‘CROWD OUT’ CHARITABLE WORK: Historical evidence from England under the Poor Laws

Cutting the welfare budget is unlikely to lead to an increase in private voluntary work and charitable giving, according to research by Nina Boberg-Fazlic and Paul Sharp.

Their study of England in the late eighteenth and early nineteenth century, published in the February 2017 issue of the Economic Journal, shows that parts of the country where there was increased spending under the Poor Laws actually enjoyed higher levels of charitable income.

Edmé Jean Pigal, ca. 1800. An amputee beggar holds out his hat to a well-dressed man who is standing with his hands in his pockets. Translation of the artist’s caption: “I don’t give to idlers”. From Wikimedia Commons

 

 

The authors conclude:

‘Since the end of the Second World War, the size and scope of government welfare provision has come increasingly under attack.’

‘There are theoretical justifications for this, but we believe that the idea of ‘crowding out’ – public spending deterring private efforts – should not be one of them.’

‘On the contrary, there even seems to be evidence that government can set an example for private donors.’

Why does Europe have considerably higher welfare provision than the United States? One long debated explanation is the existence of a ‘crowding out’ effect, whereby government spending crowds out private voluntary work and charitable giving. The idea is that taxpayers feel that they are already contributing through their taxes and thus do not contribute as much privately.

Crowding out makes intuitive sense if people are only concerned with the total level of welfare provided. But many other factors might play a role in the decision to donate privately and, in fact, studies on this topic have led to inconclusive results.

The idea of crowding out has also caught the imagination of politicians, most recently as part of the flagship policy of the UK’s Conservative Party in the 2010 General Election: the so-called ‘big society’. If crowding out holds, spending cuts could be justified by the notion that the private sector will take over.

The new study shows that this is not necessarily the case. In fact, the authors provide historical evidence for the opposite. They analyse data on per capita charitable income and public welfare spending in England between 1785 and 1815. This was a time when welfare spending was regulated locally under the Poor Laws, which meant that different areas in England had different levels of spending and generosity in terms of who received how much relief for how long.

The research finds no evidence of crowding out; rather, it finds that parts of the country with higher state provision of welfare actually enjoyed higher levels of charitable income. At the time, Poor Law spending was increasing rapidly, largely due to strains caused by the Industrial Revolution. This increase occurred despite there being no changes in the laws regulating relief during this period.

The increase in Poor Law spending led to concerns among contemporary commentators and economists. Many expressed the belief that the increase in spending was due to a disincentive effect of poor relief and that mandatory contributions through the poor rate would crowd out voluntary giving, thereby undermining social virtue. That public debate now largely repeats itself two hundred years later.

 

Summary of the article ‘Does Welfare Spending Crowd Out Charitable Activity? Evidence from Historical England under the Poor Laws’ by Nina Boberg-Fazlic (University of Duisburg-Essen) and Paul Sharp (University of Southern Denmark), published in the Economic Journal, February 2017.

From The Conversation: No, the Black Death did not create more jobs for women

by Jane Humphries, Professor of Economic History, University of Oxford
Published on 8 April 2014

The plague known as the Black Death, which tore through 14th-century Europe, is traditionally held to have had at least one upside. Women, the theory runs, were able to exploit the labour shortages of post-plague England to find themselves in a richer and more stable position than before. However, the idea that women of the era were forerunners of the post-World War I generation doesn’t stand up to much scrutiny, as new research shows.

Medievalists have long debated the extent to which women shared in the “golden age” of the English peasantry that followed the demographic catastrophe of the Black Death. The plague killed between 30% and 45% of the population in its first wave 1348-59. Recurrences meant that by the 1370s England’s population had been halved.

The silver lining, for the peasantry at least, was the dramatic increase in workers’ remuneration as landowners struggled to recruit and retain labourers. The results are apparent in a rapid increase in male casual (nominal and real) wages from about 1349.


Some historians have argued that women’s gains were even more marked as they could find employment in hitherto male-dominated jobs, or migrate to towns to work in the growing textile industries and commercial services and so enjoy “economic independence”.

Others however have suggested that whatever the implications of the Black Death for male workers, the sexual division of labour prevented women from seizing the opportunities created by the labour shortage. As one account puts it: “Women tended to work in low-skilled, low-paid jobs … This was true in 1300 and it remained true in 1700”.

The debate has significant implications as optimists have gone further in arguing that women’s improved wages changed demographic behaviour by delaying marriage, promoting celibacy and reducing fertility, with the resulting so-called north-west European Marriage Pattern raising incomes and promoting further growth.

READ FULL ARTICLE HERE

 

France’s Nineteenth Century Wine Crisis: the impact on crime rates

Street Wine Merchant, France 19th century. From Wikimedia Commons

 

The phylloxera crisis in nineteenth century France destroyed 40% of the country’s vineyards, devastating local economies. According to research by Vincent Bignon, Eve Caroli, and Roberto Galbiati, the negative shock to wine production led to a substantial increase in property crime in the affected regions. But their study, published in the February 2017 issue of the Economic Journal, also finds that there was a significant fall in violent crimes because of the reduction in alcohol consumption.

It has long been debated whether crime responds to economic conditions. In particular, do crime rates increase because of financial crises or major downsizing events in regions heavily specialised in some industries?

Casual observation and statistical evidence suggest that property crimes are more frequent during economic crises. For example, the United Nations Office on Drugs and Crime has claimed that, in a sample of 15 countries, theft increased sharply during the most recent economic crisis.[1]

These issues are important because crime is also known to have a damaging impact on economic growth by discouraging business and talented workers from settling in regions with high rates of crime. If an economic downturn triggers an increase in the crime rate, it could have long-lasting effects by discouraging recovery.

But since multiple factors can simultaneously affect economic conditions and the propensity to commit crime, identifying a causal effect of economic conditions on crime rates is challenging.

The new research addresses the issue by examining how crime rates were affected by a major economic crisis that massively hit wine production, France’s most iconic industry, in the nineteenth century.

The crisis was triggered by a near-microscopic insect named phylloxera vastatrix. It originally lived in North America and did not reach Europe in the era of sailing ships, because the transatlantic journey took so long that the insect died before arrival.

Steam power provided the greater speed needed for phylloxera to survive the trip, and it arrived in France in 1863 on imported US vines. Innocuous in its original ecology, phylloxera proved highly destructive to French vineyards, sucking the sap of the vines. Between 1863 and 1890, it destroyed about 40% of them, causing a significant loss of GDP.

Because phylloxera took time to spread, not all districts were hit at the same moment; and because districts differed widely in their suitability for wine growing, not all districts were hit equally. The phylloxera crisis is therefore an ideal natural experiment for identifying the impact of an economic crisis on crime, because it generated exogenous variation in economic activity across 75 French districts.

To show the effect quantitatively, the researchers collected local administrative data on the evolution of property and violent crime rates, as well as minor offences. They use these data to study whether crime increased significantly after the arrival of phylloxera and the ensuing destruction of the vineyards.

The results suggest that the phylloxera crisis caused a substantial increase in property crime rates and a significant decrease in violent crime. The effect on property crime was driven by the negative income shock induced by the crisis: people coped with the loss of income by engaging in property crime. At the same time, the reduction in alcohol consumption brought about by the crisis reduced violent crime.

From a policy point of view, these results suggest that crises and downsizing events can have long-lasting effects. By showing that the near-disappearance of an industry (in this case only a temporary phenomenon) can impose long-run costs on local districts through a rising crime rate, the study underlines that this issue should be high on the policy agenda in times of crisis.

 

Summary of the article ‘Stealing to Survive? Crime and Income Shocks in Nineteenth Century France’ by Vincent Bignon, Eve Caroli and Roberto Galbiati. Published in the Economic Journal, February 2017

[1] ‘Monitoring the impact of economic crisis on crime’, United Nations Office on Drugs and Crime, 2012. This effect was also noted by the French ‘Observatoire national de la délinquance et des réponses pénales’, which underlined that burglaries increased sharply in France between 2007 and 2012.

From Notes on Liberty – Ten best papers/books in economic history of the last decades (part 1)

In my post on French economic history last week, I claimed that Robert Allen’s 2001 paper in Explorations in Economic History was one of the ten most important papers of the last twenty-five years. In reaction, economic historian Benjamin Guilbert asked me “what are the other nine”? As I started thinking about the best articles, I realized that […]

via Ten best papers/books in economic history of the last decades (part 1) — Notes On Liberty

From the LSE blogs – Industrial strategy: some lessons from the past

Industrial strategy is back on the government’s agenda, with a promise to produce a ‘match fit’ economy that ‘works for everyone’ and is able to thrive after Brexit. As yet, however, there is little sign of the promised broadly-based and coherent industrial strategy emerging. In crafting it, explains Hugh Pemberton, its architects may profitably look…

via Industrial strategy: some lessons from the past — British Politics and Policy at LSE