The long-term negative impact of slavery on economic development in Brazil

by Andrea Papadia (London School of Economics)

image-brazil-handler-tuite.png
Jean Baptiste Debret (1826). From “The Atlantic Slave Trade and Slave Life in the Americas: A Visual Record”, https://makinghistorymatter.ca/2014/04/02/journal-of-an-african-slave-in-brazil/

 

Slavery has been at the centre of many heated debates in the social sciences, yet there are few systematic studies relating slavery to economic outcomes in receiving countries. Moreover, most existing work on Brazil – which was the largest slave importer during the African slave trade and the last country to abolish the practice – has failed to identify any clear legacies of this institution.

This research overcomes this impasse by highlighting a distinctly negative impact of slavery on economic development in Brazil. More precisely, it illustrates that in the municipalities of the states of Rio de Janeiro and São Paulo, where slave labour was more prevalent in the nineteenth century, fiscal development was lower in the early twentieth century, long after slavery was abolished.

The identification of this negative effect is tied to separating the true effect of slavery on fiscal development from the fact that the huge expansion of coffee production that Brazil underwent from the 1830s attracted large numbers of slaves to booming regions. In fact, the research shows that:

  • A naïve analysis of the data would suggest that, at relatively low levels, more slavery in the nineteenth century was associated with higher subsequent fiscal development.
  • For population shares of slaves above 30-35%, more slavery was clearly associated with lower fiscal development.
  • Taking account of the impact of the coffee boom on both the demand for slave labour and development, slavery was unambiguously associated with worse developmental outcomes later on.
  • Comparing two hypothetical municipalities – equal in all respects except for their reliance on slave labour – one in which slaves made up 30% of the population would have had revenues around 70% lower than one where they made up 20%.
  • These results persist even when taking account of a wide variety of other factors that could explain differences in fiscal development across municipalities.

Fiscal development is widely considered as an essential building block in the creation of modern states able to foster economic growth by providing public goods and protecting the rule of law. While the historical process of fiscal development on the European continent is relatively well understood, in other parts of the world the study of the evolution of fiscal institutions is still in its early stages.

There are many reasons why a high incidence of slavery would hamper fiscal development and the provision of public goods:

  • First, a higher incidence of slaves in the population will translate into lower political representation for the masses, even in only partially democratic regimes such as nineteenth and early twentieth century Brazil.
  • Second, the provision of key public goods, such as education, will be less salient in areas that rely heavily on slave labour. These areas will also be less keen to attract workers from other areas of the country and abroad, thus making the provision of public services to their citizens less important.
  • Finally, slavery might make resource sharing through taxation more difficult due to increased ethnic, geographical and class cleavages in the population.

The history of Brazil, which was characterised by large-scale use of slave labour from the sixteenth century until the nineteenth century, provides an ideal testing ground to investigate how this clearly extractive institution affected the developmental path of countries and their subdivisions.

The research shows that by accounting for confounding effects due to Brazil’s coffee boom, the pernicious effects of slavery on a key factor for economic growth – fiscal development – can be strongly identified.

British engineering skills in the age of steam

by Harry Kitsikopoulos (academic director, Unbound Prometheus)

Side-lever_engine_1849
Wiki Commons. The side-lever engine, c. 1849.

 

Engineering skills in Britain improved during the eighteenth century but progress was not linear. My research uses a novel approach to quantifying the trends from the first appearance of the technology of steam power (1706) through to the last quarter of the century (the Watt era), using a large amount of data on fuel consumption rates.

Britain was a very unlikely candidate for the invention of steam engines, as I argue in my 2016 book, Innovation and Technological Diffusion: An Economic History of the Early Steam Engines. It was the French and Italians who first rediscovered, translated and published the ancient texts of Hero of Alexandria on steam power; they also discovered the existence of a vacuum in nature, the main principle behind a steam engine’s working mechanism.

But Britain had two advantages: first, a divorce-obsessed king who detached the island from Catholic dogma and its alliance with the Cartesian epistemological paradigm, both of which denied the existence of a vacuum in nature. The same king also brought about a seismic institutional transformation by passing monastic properties into the ownership of lay landlords, a class far keener on solving the water drainage problem that plagued the mining industry in its drive to exploit mineral wealth.

Britain was also fortunate in another respect: it was relatively backward in terms of mining technology! That proved to be a good thing. While mining districts in Germany and Liège used a technology that resolved the drainage problem, Britain failed to imitate them, hence forcing itself to seek alternative solutions, thereby leading to the invention of the steam engine.

Grand inventions earn glorious references in school textbooks, but it is the diffusion of a technology that contributes to economic growth, a process that relies on the development of relevant human capital.

The records reveal that not much more than a dozen engineers were active in erecting engines during the period 1706-75, including Thomas Newcomen, the obscure ironmonger from Devon who came up with the first working model. The figure increased to at least 60 during the last quarter of the century through the action of the invisible hand: the initial scarcity of such skills raised wages, which, in turn, acted as a stimulus drawing talent from related engineering occupations.

My new study traces the production and marketing strategies of this group, which ranged from the narrow horizons of certain figures concentrating on the erection of engines in one locality, a single model, or focusing on one industry all the way to the global outlook of the Boulton and Watt firm.

The last question I pose is perhaps the most interesting: did British engineers get better during the eighteenth century in managing these engines?

Measuring skill is not a straightforward affair. Two well-respected experts of the time drew up tables specifying what the ideal fuel rates ought to have been for engines of different horsepower. Plotted on a graph, these figures trace out a curve of ideal rates.

My analysis uses two distinct datasets with 111 fuel rate observations recorded in working engines – one for the older Newcomen model and another for the newer Watt engines. These actual fuel rates were plotted as bullet points around the respective ‘ideal’ curves. A progressively narrower distance between the curves and the bullet points would indicate higher efficiency and improved engineering skills.
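To make this concrete, here is a minimal sketch in Python of how such a comparison could be set up. The functional form of the ‘ideal’ curve and all of the numbers are invented for illustration; they are not the tables or fuel rates used in the study.

```python
# Illustrative only: measure how far observed fuel rates sit above an assumed
# "ideal" fuel-rate curve, and check whether that gap narrows over time.

def ideal_rate(hp):
    """Assumed ideal coal consumption (lbs per hp-hour) for an engine of a given
    horsepower. The functional form and constants are invented for illustration."""
    return 20.0 + 200.0 / hp

# (year of observation, engine horsepower, observed fuel rate in lbs per hp-hour)
observations = [
    (1730, 10, 48.0), (1740, 15, 41.0), (1750, 20, 35.0),
    (1760, 25, 31.0), (1770, 30, 28.5), (1775, 40, 26.0),
]

for year, hp, observed in observations:
    gap = observed - ideal_rate(hp)        # excess fuel burned relative to the ideal
    pct = 100.0 * gap / ideal_rate(hp)     # the gap as a share of the ideal rate
    print(f"{year}: {pct:5.1f}% above the ideal curve")

# A percentage gap that shrinks over the years would be read as improving
# engineering skill; a gap that fluctuates with no trend would not.
```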

The results reveal that for the first 25 years following the appearance of each model, there was no consistent trend: the bullet points alternated between moving closer to and drifting away from the ideal curves. But the data also reveal that these initial patterns gave way to trends of consistent progress.

In an era of practical tinkerers with no formal system of education in this particular skill, British engineers did get better through a classic process of ‘learning-by-doing’. But this only happened after an initial stage of adjustment, of getting used to models with different working mechanisms.

Safe-haven asset: property speculation in medieval England

by Adrian Bell, Chris Brooks and Helen Killick (ICMA Centre, University of Reading)

Neuadd_y_Penrhyn

While we might imagine the medieval English property market to have been predominantly ‘feudal’ in nature and therefore relatively static, this research reveals that in the fourteenth and fifteenth centuries, it demonstrated increasing signs of commercialisation.

The study, funded by the Leverhulme Trust, finds that a series of demographic crises in the fourteenth century caused an increase in market activity, as opportunities for property ownership were opened up to new sections of society.

Chief among these was the Black Death of 1348-50, which wiped out over a third of the population. In contrast with previous research, this study shows that after a brief downturn in the immediate aftermath of the plague, the English market in freehold property experienced a surge in activity; between 1353 and 1370, the number of transactions per year almost doubled.

The Black Death prompted aristocratic landowners to dispose of their estates, as the high death toll meant that they no longer had access to the labour with which to cultivate them. At the same time, the gentry and professional classes sought to buy up land as a means of social advancement, resulting in a widespread downward redistribution of land.

In light of the fact that during this period labour shortages made land much less profitable in terms of agricultural production, we might expect property prices to have fallen.

Instead, this research demonstrates that this was not the case: the price of freehold land remained robust and certain types of property (such as manors and residential buildings) even rose in value. This is attributed to the fact that increasing geographical and social mobility during this period allowed for greater opportunities for property acquisition, and thus the development of property as a commercial asset.

This is indicated by changes in patterns of behaviour among buyers. The data suggest that an increasing number of people from the late fourteenth century onwards engaged in property speculation – in other words, purchase for the purposes of investment rather than consumption.

These investors purchased multiple properties, often at a considerable distance from their place of residence, and sometimes clubbed together with other buyers to form syndicates. They were often wealthy London merchants, who had acquired large amounts of disposable capital through their commercial activities.

The commodification of housing is a subject that has been much debated in recent years. By exploring the origins of property as an ‘asset class’ in the pre-modern economy, this research draws inevitable comparisons with the modern context: in medieval times, much as now, ‘bricks and mortar’ were viewed as a secure financial investment.

Industrialisation and the origins of modern prosperity: evidence from the United States in the 19th century

by Ori Katz (Tel Aviv University)

Aertsen,_Pieter_-_Market_Scene.jpg
Wiki Commons. Market scene by Pieter Aertsen, c.1550

 

The largest economic mystery is the modern prosperity of humankind. For thousands of years since the Neolithic revolution, most humans lived in small communities, working as farmers, and their average standard of living did not change much.

But in the nineteenth century, things changed: large parts of the world became industrialised. In those parts, people moved to live in huge cities, where they worked in manufacturing and commerce, had fewer children and invested more in schooling; their standard of living began to rise, then to rise dramatically, and it has not stopped since. Whether you look at life expectancy, infant mortality, income per person or any other measure, the trend is the same. And we don’t really know why.

We have a lot of theories. Some believe that this dramatic change has something to do with a geopolitical environment that encouraged competition and maintained stability in property rights. Others talk about a change in human preferences, maybe even in human biology. But in every theory, two of the main ingredients are the dramatic reduction in fertility and the increasing investment in human capital during the late nineteenth century.

This research examines the effect of industrialisation on human capital and fertility in the United States during the period from 1850 to 1900. This effect is hard to identify, for example because human capital also affects industrialisation, or because other variables such as ‘culture’ may affect both.

To deal with those problems, the study uses the westward expansion of the country as a ‘natural experiment’. The appearance of new large cities such as Chicago and Buffalo led to the development of new transport routes, and the study looks at counties that happened to be close to those new routes.

Those counties experienced industrialisation only because of their geographical location, and not because of the human capital of the local population or other variables. This means that analysing them is similar to a laboratory experiment, where it is possible to change only one parameter and leave the others intact.
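As a stylised illustration of this comparison, the sketch below (in Python, with entirely invented county figures) splits counties by whether they happened to lie near a new transport route and compares average outcomes across the two groups. It is meant only to convey the natural-experiment logic, not to reproduce the study’s actual estimation.

```python
# Hypothetical illustration of the natural-experiment comparison.
# Each county record: (near_new_route, school_enrolment_rate, children_per_woman).
# All numbers are invented for illustration.
counties = [
    (True,  0.62, 4.1), (True,  0.58, 4.4), (True,  0.65, 3.9), (True,  0.60, 4.2),
    (False, 0.49, 5.3), (False, 0.52, 5.1), (False, 0.47, 5.6), (False, 0.51, 5.2),
]

def group_means(rows, near):
    """Average school enrolment and fertility for counties near (or far from) a route."""
    sel = [(enrol, fert) for is_near, enrol, fert in rows if is_near == near]
    n = len(sel)
    return (sum(e for e, _ in sel) / n, sum(f for _, f in sel) / n)

near_enrol, near_fert = group_means(counties, True)
far_enrol, far_fert = group_means(counties, False)

# If proximity to a new route is as good as random with respect to local human
# capital, the difference in means can be read as the effect of the
# industrialisation that the route brought with it.
print(f"school enrolment: near {near_enrol:.2f} vs far {far_enrol:.2f}")
print(f"children per woman: near {near_fert:.2f} vs far {far_fert:.2f}")
```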

Results show a very large effect of industrialisation on both fertility and human capital. These results are in contrast with an old theory according to which industrialisation was a ‘de-skilling’ process that increased the demand for unskilled labour. It seems that industrialisation was conducive to human capital.

The study also finds that the effects of industrialisation on both fertility and human capital were larger in counties that were already more developed in the first place. This led to a divergence between them and less developed counties. Indeed, when we look at the country level, we see increasing gaps between the industrialised countries and the rest of the world, starting in the nineteenth century, just like the gaps shown at the county level.

The modern period of growth is still a mystery, but these research results tell us that the effects of industrialisation on fertility and human capital are an important piece of the puzzle. These effects might be the reason for the great divergence between nineteenth century economies that created the modern wealth gaps between nations.

Employment, retirement and pensions: the Victorian era as a golden age for the elderly

by Tom Heritage (University of Southampton)

Elderlyspinnera
Irish spinning wheel – around 1900
Library of Congress collection

For far too long, our elderly ancestors have been viewed through the prism of the National Health Service and the modern welfare state: old people are regarded as a burden, taking out of society rather than contributing. In contrast, this study of census data for five counties across England and Wales from 1851 to 1911 reveals a reciprocal relationship between those living in old age and wider society.

First, across the whole period, 86-93% of men aged 60 and over were recorded in the censuses with an occupation. Even if those in workhouses are not counted as employed, the figure is 80-85%.

Most old men worked in agricultural and general labouring, although an increase was evident by 1911 in the mining industry in Glamorgan and metal manufacturing in Sheffield. Bricklaying, house painting, dock labouring and commercial sales were also pursued in urban areas. Labour force participation rates were higher among men in their sixties than among men in their seventies and eighties.

Second, from 1851 to 1911, between a sixth and a third of women aged over 60 were in employment. Although their occupations were less diverse than those of men, the majority were based in domestic service.

Old women were also involved in cotton and silk textiles and in the manufacture of straw hats. Over time, though, the employment rates of old women did not increase like those of men, owing partly to foreign competition from Asian straw imports and French silks.

Third, retirement was not an innovation brought about by the creation of old age pensions. As early as 1891, over 13% of old men were described in the census as ‘retired’, with high rates in the areas favoured by today’s retirees: the coastal areas of Christchurch and Portsmouth in southern England. More old people retired than went into the workhouse.

But retirement was only an option for those who had inherited or managed to accumulate wealth, such as former smallholders, grocers, innkeepers, civil servants or military officers. Others who lacked land or capital, such as agricultural labourers or boot and shoe makers, were forced to resort to the Poor Law.

Even then, this did not always, or usually, mean the workhouse. Welfare assistance to old people in their own homes was common, especially for women. ‘Outdoor relief’, usually around 2s 6d per week, was issued as a weekly ‘pension’.

Moreover, the women who received it were not always as old as those entitled to a pension in the modern era: in Yorkshire in 1891, over 10% of old women described as ‘on relief’ were under 66, which will be the minimum pension age for women by 2020.

So is it really true to say that nowadays, ‘the elderly have never had it so good’? In a sense it is, as old people lead healthier and longer lives today than they have ever done.

But it would be wrong to conclude that old people in Victorian times were largely condemned to lives of pain and poverty. They had a wide range of experiences, and many had access to employment opportunities and sources of assistance that are no longer offered.

In terms of present day policy, we might learn something from our Victorian forebears about ways to integrate the general population in their sixties into the workforce, so that they can contribute to society as well as receive welfare.

Child labour in 18th century England: evidence from the Foundling Hospital

by Alice Dolan (University of Hertfordshire)

Foundling_Hospital;_Captain_Coram_and_several_children,_the_Wellcome_V0049243
Wellcome Images. Foundling Hospital: Captain Coram and several children, the latter carrying implements of work; a church and ships in the distance. Steel engraving by H. Setchell after W. Hogarth.

Every few years a child labour scandal in the clothing industry hits the British press, invoking wide public condemnation. This reaction is a modern phenomenon: 250 years ago, child labour in textile production was commonplace, not worthy of a headline.

Attitudes changed in the nineteenth century, leading to the passing of the 1833 Factory Act and 1842 Mines Act. But before this change, child labour was believed to have positive benefits for children.

One notable example was the Foundling Hospital, a charitable institution that supported abandoned children and was a keen believer in the benefits of child labour. The Hospital sought to produce upright citizens who would be able to support themselves as adults.

A key aim of the Hospital was therefore to train children to be ‘industrious’ from a young age. One governor wrote that the Hospital aimed ‘to give [the Foundlings] an early Turn to Industry by giving them constant employment’. This ‘Turn’ would train the children into economic self-sufficiency, stopping them from relying on parish poor relief as adults.

The Foundling Hospital opened its doors in 1741. Parliament recognised the value of its work and funded the acceptance of all children presented to it aged 12 months or under over the period 1756-60. This ‘General Reception’ brought 14,934 children into the Hospital.

The London Hospital could not cope with these unprecedentedly high numbers and new branches were founded, including one in Ackworth, Yorkshire, which received 2,664 children in the period 1757-72. Ackworth closed because Parliament regretted its generosity and stopped funding the General Reception generation in 1771.

Thousands of children required thousands of uniforms and Ackworth chose to make as many garments as possible in-house. On-site production both trained children to be industrious and offered financial benefits for the Hospital. Work completed on-site was cheap and reliable, and there was greater quality control.

The Ackworth ‘manufactory’ produced woollen cloth. The children prepared the fibre for spinning, spun it and wove the yarn into cloth that was worn by their peers at Ackworth and sold to the London Hospital and externally. Some cloth manufacturing work was outsourced, particularly finishing processes that required a higher level of skill.

Few concessions were made for the age of the makers: the London branch criticised and returned orders considered to be of insufficient quality or inappropriate size. These were primarily business rather than charitable transactions.

The skill division also applied in the making of clothing. Underwear, stockings and girls’ clothing were made in-house because it was less skilled work. Garments were produced in high volumes. From 1761 to 1770, 13,442 pieces of underwear (shirts and shifts) and 19,148 pairs of stockings were made by the children.

Tasks such as tailoring, and hat and shoe making required long apprenticeships to develop the necessary skill – this work was therefore outsourced. But external supply had its problems. It was difficult to source enough garments for the hundreds of children at the branch. Products were more expensive because labour was not free and the Hospital had little influence on suppliers’ timeframes.

A Foundling started work young, aged 4 or 5, and continued to work through their residence at the Hospital. Despite this, they were luckier than their peers in the workhouse who endured worse conditions.

Many parents chose to send their children to the Foundling Hospital to give them better life chances through the greater educational and apprenticeship opportunities offered. Putting the children to work, which seems cruel to us, was a key educational strategy to help them achieve economic independence in adulthood. Its financial and logistical benefits were welcome too.

Trading parliamentary votes for private gain: logrolling in the approval of new railways in 19th century Britain

by Rui Esteves and Gabriel Geisler Mesevage (University of Oxford)

Parliament.uk – Railways in early nineteenth century Britain

The possibility that politicians might act to further their private financial interests, as opposed to the general public interest, has led to the creation of conflict-of-interest rules in modern democracies. For example, the code of conduct of the British Parliament requires that MPs disclose private interests related to their public duties.

In the mid-nineteenth century, Parliament went further, and created a system for the approval of new major public works projects in which MPs with a conflict were barred from voting. But the effectiveness of these rules can be undermined if politicians agree to trade votes with their colleagues — a practice known as ‘logrolling’.

This research uses a unique episode in the mid-nineteenth century to determine whether, and to what extent, British politicians traded their votes to further their private interests.

In the mid-1840s, hundreds of new railway companies petitioned the British Parliament for the right to build railway lines. It was Parliament’s responsibility to pick the railway lines they wanted to see built, and in this way shape the development of the modern British transport network.

Since many MPs were also investors in railways, Parliament created a system of subcommittees in which railway applications would be considered only by MPs who had no financial conflict and who did not represent a constituency that the railway intended to serve.

As a result of this system, MPs with vested interests could not vote for their preferred projects directly. But they could further their interests indirectly by trading their vote on another project with the vote of the MP overseeing the project in which they had an interest.

Drawing on methods from social network analysis, the study identifies all of the potential trades between MPs, and then tests statistically for evidence of vote trading. The statistical evidence reveals significant collusion in the voting patterns of MPs who were deciding which railway lines to approve.
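To give a flavour of what such a test involves, here is a minimal sketch in Python with entirely made-up MPs, interests and votes. It counts reciprocal ‘you approve my project, I approve yours’ pairs among conflicted MPs and asks, via a simple permutation test, how often random voting would produce as many. It is a simplified stand-in for the study’s network methods, not a reproduction of them.

```python
# Hypothetical illustration of a vote-trading (logrolling) test.
# interests[mp] = projects the MP holds a financial stake in (and cannot vote on).
# votes[(mp, project)] = 1 if the MP approved that project, 0 if rejected.
import random

interests = {"A": {"P1"}, "B": {"P2"}, "C": {"P3"}, "D": set()}
votes = {("A", "P2"): 1, ("B", "P1"): 1,   # A and B approve each other's projects
         ("C", "P2"): 0, ("D", "P1"): 0,
         ("A", "P3"): 0, ("D", "P3"): 1}

def reciprocal_pairs(vote_table):
    """Count pairs of MPs who each approved a project the other had an interest in."""
    count = 0
    mps = list(interests)
    for i, a in enumerate(mps):
        for b in mps[i + 1:]:
            a_helps_b = any(vote_table.get((a, p)) == 1 for p in interests[b])
            b_helps_a = any(vote_table.get((b, p)) == 1 for p in interests[a])
            count += a_helps_b and b_helps_a
    return count

observed = reciprocal_pairs(votes)

# Permutation test: shuffle which ballots were approvals and see how often chance
# alone produces at least as many reciprocal pairs as observed.
keys, values = list(votes), list(votes.values())
hits = 0
for _ in range(10_000):
    random.shuffle(values)
    if reciprocal_pairs(dict(zip(keys, values))) >= observed:
        hits += 1
print(f"observed reciprocal pairs: {observed}; permutation p-value approx. {hits / 10_000:.3f}")
```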

These findings reveal significant levels of vote-trading, with politicians coordinating their behaviour so as to ensure that the projects they preferred – which they were banned from influencing directly – were nonetheless approved by their colleagues. As much as a quarter of all of the approved projects were likely the result of this logrolling, and the economic costs of this behaviour were significant, leading to Britain creating a less efficient railway network.

This research highlights the importance of understanding politicians’ private interests. Moreover, it illustrates how merely acknowledging conflicts of interest, and abstaining from voting when conflicted, may not resolve the problem of vested interests if politicians are able to collude. The findings shed light on a perennial problem, and the methods developed to detect logrolling in this setting may prove useful for detecting vote-trading in other contexts.

The making of New World individualism and Old World collectivism: international migrants as carriers of cultural values

by Anne Sofie Beck Knudsen (University of Copenhagen)

 

 

ny-world-immigration-1906
The Sunday magazine of the New York World appealed to immigrants with this 1906 cover page celebrating their arrival at Ellis Island.

Although it is a hotly debated topic, we know surprisingly little about the long-term cultural impact of international migration. Does it boil down to the risk of clashes between different cultures, or do we see cultural changes in migrant-sending and migrant-receiving countries along other dimensions as well?

Using novel empirical data, this research documents how past mass migration flows carried values of individualism across the Atlantic ocean from the mid-nineteenth to early twentieth century. This inter-cultural exchange was so significant that its impact is still observed today.

When talking about individualism versus collectivism, this study refers to the degree of emphasis a culture places on independence from the surrounding society. With this in mind, it becomes clear why culture has a role to play in migration. The act of migrating involves leaving familiar surroundings to embark on a journey in which you are bound to rely on yourself. An individual with strong ties to their surroundings will be less likely to take this step. Collectivists are thus less likely to migrate, while the opposite is true for individualists.

To test the idea of individualistic migration and its long-term impact empirically, this research constructs novel indicators of culture that make it possible to go back and study the past. It looks at two everyday cultural manifestations: how we name our children; and how we speak our language.

Giving a child a commonplace name like ‘John’ reflects a more conformist motivation on the part of parents, who, perhaps unconsciously, are more concerned about their child fitting in than standing out. Likewise, the relative use of singular (‘I’, ‘mine’, ‘me’) over plural (‘we’, ‘ours’, ‘us’) personal pronouns tells us something about the focus on the individual over the collective.

The study constructs historical indicators of culture from the distribution of names in historical birth registers and from the written language of local newspapers at the time.
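To illustrate how such indicators can be built, the sketch below (in Python, on tiny invented samples) computes two toy measures: the share of children in a register bearing one of the most common names, and first-person singular pronouns as a share of all first-person pronouns in a text. The word lists, threshold and data are assumptions for illustration, not the definitions used in the research.

```python
# Hypothetical illustration of two simple culture proxies.
from collections import Counter
import re

def common_name_share(names, top_k=2):
    """Share of children bearing one of the top_k most frequent names in the register.
    A higher share is read as more conformist naming. top_k is an arbitrary choice."""
    counts = Counter(names)
    top = {name for name, _ in counts.most_common(top_k)}
    return sum(counts[n] for n in top) / len(names)

def singular_pronoun_ratio(text):
    """First-person singular pronouns as a share of all first-person pronouns."""
    singular = {"i", "me", "my", "mine"}
    plural = {"we", "us", "our", "ours"}
    words = re.findall(r"[a-z']+", text.lower())
    s = sum(w in singular for w in words)
    p = sum(w in plural for w in words)
    return s / (s + p) if (s + p) else float("nan")

birth_register = ["John", "John", "Mary", "John", "Anna", "Mary", "Ole"]
newspaper_text = "We built our church together, but I kept my own counsel."

print(f"common-name share: {common_name_share(birth_register):.2f}")
print(f"singular pronoun ratio: {singular_pronoun_ratio(newspaper_text):.2f}")
```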

With new data in hand, the research can document the prevalence of individualistic migration during the settlement of the United States around the turn of the twentieth century. Among the inhabitants of major migrant-sending countries like Norway and Sweden, those with more uncommon names were more likely to actually migrate. This cultural effect remains even when considering a host of other potential explanations related to economic prospects and family background.

If more individualistic types are more likely to migrate, we would expect to observe an impact on the overall culture of a given location. That is exactly what this research finds. Districts in Sweden and Norway that experienced high emigration flows of people with an individualistic spirit did indeed become more collectivistic – both in terms of child naming trends and in written language pronoun use.

This leaves us with the question of whether an impact from this historical event is still visible today. Does international migration have long-term cultural consequences other than the risk of producing cultural clashes?

According to this study, it does. Scandinavian districts that experienced more emigration are still relatively more collectivist today than those that experienced less. Moreover, it is widely agreed that New World countries like the United States are the most individualistic in the world today – a fact that seems to be explained by the type of migrants they once received.

Mortality in economic downturns: unobserved migration can create the false impression that recessions are good for health

by Vellore Arthi (University of Essex), Brian Beach (College of William & Mary), and Walker Hanlon (University of California, Los Angeles)

volga-germans-us

Are recessions good for health? A number of recent studies suggest that mortality actually goes down during recessions – at least in developed countries, where social safety nets help cushion the blow of unemployment and income loss.

This striking conclusion rests on one of two assumptions: either that people do not respond by migrating away from recession-stricken areas; or that if they move, these population flows can be perfectly measured. But are these assumptions realistic?

Migrant movements can be notoriously difficult to track, and famous episodes such as the Depression-era migration from the US Great Plains to California suggest that these sorts of internal population movements may indeed be a natural response to changes in local economic conditions. This raises the question: what does unaccounted migration mean for our assessment of the recession-mortality relationship?

Our research shows that unobserved migration from recession-stricken regions may actually lead us to underestimate systematically how deadly recessions really are.
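To see why, consider a stylised example with invented numbers: deaths are counted where they occur, but the population denominator comes from a census taken before the recession and so does not register people who leave.

```latex
% Stylised illustration (all numbers invented): how unrecorded out-migration can
% make a recession look healthy in the measured crude death rate.
\begin{align*}
  \text{pre-recession:} \quad
    & \frac{\text{deaths}}{\text{population}}
      = \frac{0.020 \times 100\,000}{100\,000} = 2.0\% \\[4pt]
  \text{recession, risk up to } 2.2\%,\ 10\,000 \text{ leave unrecorded:} \quad
    & \frac{0.022 \times 90\,000}{100\,000}
      = \frac{1\,980}{100\,000} \approx 1.98\%
\end{align*}
% The measured rate falls even though the mortality risk faced by those who
% stayed has risen, because deaths are counted against a stale denominator.
```

Dividing the 1,980 deaths by the true remaining population of 90,000 would instead show the rate rising to 2.2%, in line with the underlying risk.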

To test how migration influences estimates of the relationship between recessions and mortality, we draw on a unique historical natural experiment: the temporary but severe economic downturn in the cotton textile-producing regions of Britain that resulted from the American civil war (1861-65).

The cotton textile industry was England’s largest industrial sector in the second half of the nineteenth century and, prior to the civil war, received the majority of its raw cotton inputs from the American South. The onset of the civil war sharply reduced these supplies, leading to a severe but temporary economic downturn that left several hundred thousand workers unemployed.

Digitising a wealth of historical data on births, deaths and population, and exploiting variation in both the geographical distribution of the British cotton textile industry and the timing of the civil war, we show that standard approaches yield the familiar result: the downturn, popularly termed the ‘cotton famine,’ reduced mortality.

But we also find evidence that migratory responses to this event were substantial, with much of this mobility occurring over short distances, as displaced cotton workers sought opportunities for work in nearby districts.

After making a series of methodological adjustments that account for this recession-induced migration, we show that the sign of the recession-mortality relationship flips: this downturn in fact appears to have been bad for health, raising mortality in both cotton regions and in the regions to which unemployed cotton operatives fled.

After accounting for migration bias, we find that:

  • The civil war-era downturn in the cotton textile regions of Britain increased total mortality in the affected districts.
  • But the downturn appears to have led to improved infant and maternal mortality outcomes, probably by freeing up maternal time for breastfeeding, childcare, and other health-improving behaviours.
  • Gains in infant health were offset by large and significant increases in mortality among the elderly.
  • There was no net effect on mortality among working-age adults, who were also the most mobile during the downturn.
  • This outcome appears to have been driven by worsening mortality due to the deteriorating nutrition and living conditions associated with income loss, which was in turn offset by improvements in maternal mortality and by fewer deaths from accidents and violence. (The latter finding is further supported by evidence that alcohol consumption and industrial accident rates fell during the recession.)

Our study provides both a methodological and factual contribution to our understanding of the relationship between recessions and health. The methodological contribution consists of showing that migration undertaken in response to a recession has the potential to introduce substantial bias into estimates of the recession-mortality relationship using the standard approach – particularly if these population flows are not well measured.

This bias is likely to be greater in settings, such as developing countries, where labour forces are more mobile, where weak social safety nets induce migration in response to recessions, and where the intercensal population data used to track these movements are poor. Studies applying the standard approach in these settings are likely to generate misleading results, which may lead to poorly targeted public health responses.

On a factual level, our study also contributes new evidence on the relationship between recessions and mortality in a historical setting, with the implication that studies focused on just one age group, such as infants, may generate results that are not representative of other segments of the population, or indeed of the overall relationship between recessions and mortality.

 

 

Economic roots of Jewish persecutions in Medieval Europe

by Robert Warren Anderson (University of Michigan-Dearborn), Noel D. Johnson and Mark Koyama (George Mason University).

 

 

Jewish communities in pre-industrial European societies were more likely to be vulnerable to persecutions during periods of economic hardship.

The authors’ study finds that colder springs and summers, which led to reduced food supply, were associated with a higher probability of Jewish persecutions. What’s more, the effect of colder weather on the probability of Jewish persecutions was larger in cities with poor quality soil and in states that were weaker.

Throughout most of history, religious minorities were the victims of persecution. Violence against religious and ethnic minorities remains a major problem in many developing countries today. This study investigates why some societies persecute minorities.

To answer this question, the researchers focus on the persecution of Jews in medieval and early modern Europe. Violence against Jews was caused by a complex set of factors that have been studied intensively by historians. These include religiously motivated anti-semitism, the need to blame outsider groups and the economic role that Jews played in pre-industrial European societies.

The new study focuses on the hypothesis that Jews were more likely to be vulnerable during periods of economic hardship. The researchers test this hypothesis by combining two novel datasets.

The first dataset is drawn from the 26-volume Encyclopaedia Judaica and contains yearly information on 1,366 city-level persecutions of Jews from 936 European cities between 1100 and 1800. The location of these cities as well as the intensity with which they persecuted Jews is illustrated in Figure 1.

 

Figure 1: The distribution of cities with Jewish persecutions and total persecutions, 1100-1800

 

 

The second source contains data on yearly growing season temperature (April to September), which have been reconstructed from proxies including tree rings, ice cores and pollen counts (Guiot and Corona, 2010).

The first result is that colder springs and summers are indeed associated with a higher probability of persecution. A one standard deviation decrease in average growing season temperature in the previous five-year period (about one-third of a degree Celsius) raised the probability that a community would be persecuted from a baseline of about 2% to between 3% and 3.5% in the subsequent five-year period, an increase of 50% to 75% in the probability of persecution.
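The relative increase quoted here follows directly from those figures:

```latex
% Relative increase in persecution probability implied by the quoted figures.
\[
  \frac{3.0\% - 2.0\%}{2.0\%} = 50\%,
  \qquad
  \frac{3.5\% - 2.0\%}{2.0\%} = 75\%
\]
```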

To explain this effect, the researchers develop a conceptual framework that outlines the political equilibrium under which pre-modern rulers would tolerate the presence of a Jewish community. They argue that this equilibrium was vulnerable to shocks to agricultural output, and explain why this vulnerability may have been greater in locations with poor quality soil and in polities where sovereignty was divided or more susceptible to unrest.

Consistent with their conceptual framework, the researchers find that the effect of colder weather on persecution probability was larger in cities with poor quality soil and in states that were weaker. Moreover, the relationship between colder weather and persecution probability was strongest in the late Middle Ages.

Furthermore, as Figure 2 illustrates, the relationship disappeared after 1600, which the researchers attribute to various factors: the rise of stronger states (which were better able to protect minorities); increased agricultural productivity; and the development of more integrated markets, which reduced the impact of local weather shocks on the food supply.

 

Figure 2: The effect of cold weather shocks on persecution probability over time

 

 

The researchers support these results with extensive narrative evidence consistent with their claims, and with further evidence that the relationship between colder weather and higher wheat prices also diminished after 1600.

‘Jewish Persecutions and Weather Shocks: 1100-1800’ by Robert Warren Anderson, Noel D. Johnson and Mark Koyama is published in the June 2017 issue of the Economic Journal.

A blog article also appeared on the media briefings of the Royal Economic Society.