The UK’s unpaid war debts to the United States, 1917-1980

by David James Gill (University of Nottingham)

Trenches in World War I.

We all think we know the consequences of the Great War – from the millions of dead to the rise of Nazism – but the story of the UK’s war debts to the United States remains largely untold.

In 1934, the British government defaulted on these loans, leaving unpaid debts exceeding $4 billion. The UK decided to cease repayment 18 months after France had defaulted on its war debts, making one full and two token repayments prior to Congressional approval of the Johnson Act, which prohibited further partial contributions.

Economists and political scientists typically attribute such hesitation to concerns about economic reprisals or the costs of future borrowing. Historians have instead stressed that delay reflected either a desire to protect transatlantic relations or a naive hope for outright cancellation.

Archival research reveals that the British cabinet’s principal concern was that many states owing money to the UK might use its default on war loans as an excuse to cease repayment on their own debts. In addition, ministers feared that refusal to pay would profoundly shock a large section of public opinion, thereby undermining the popularity of the National government. Eighteen months of continued repayment therefore provided the British government with more time to manage these risks.

The consequences of the UK’s default have attracted curiously limited attention. Economists and political scientists tend to assume dire political costs to incumbent governments as well as significant short-term economic shocks in terms of external borrowing, international trade, and the domestic economy. None of these consequences apply to the National government or the UK in the years that followed.

Most historians consider these unpaid war debts to have become largely irrelevant to the course of domestic and international politics within five years. Yet archival research reveals that they continued to play an important role in British and American policy-making for at least four more decades.

During the 1940s, the issue of the UK’s default arose on several occasions, most clearly during negotiations concerning Lend-Lease and the Anglo-American loan, fuelling Congressional resistance that limited the size and duration of American financial support.

Successive American administrations also struggled to resist growing Congressional pressure to use these unpaid debts as a diplomatic tool to address mounting balance of payments deficits from the 1950s to the 1970s. In addition, British default presented a formidable legal obstacle to the UK’s return to the New York bond market in the late 1970s, threatening to undermine the efficient refinancing of the government’s recent loans from the International Monetary Fund.

The consequences of the UK’s default on its First World War debts to the United States were therefore longer lasting and more significant to policy-making on both sides of the Atlantic than widely assumed.


Judges and the death penalty in Nazi Germany: New research evidence on judicial discretion in authoritarian states

The German People’s Court.

Do judicial courts in authoritarian regimes act as puppets for the interests of a repressive state – or do judges act with greater independence? How much do judges draw on their political and ideological affiliations when imposing the death sentence?

A study of Nazi Germany’s notorious People’s Court, recently published in the Economic Journal, provides direct empirical evidence of how judges in one of the world’s most politicised courts were influenced in their life-and-death decisions.

The research provides important empirical evidence that the political and ideological affiliations of judges do come into play – a finding that has applications for modern authoritarian regimes and also for democracies that administer the death penalty.

The research team – Dr Wayne Geerling (University of Arizona), Prof Gary Magee, Prof Russell Smyth, and Dr Vinod Mishra (Monash Business School) – explore the factors influencing the likelihood of imposing the death sentence in Nazi Germany for crimes against the state – treason and high treason.

The authors examine data compiled from official records of individuals charged with treason and high treason who appeared before the People’s Courts up to the end of the Second World War.

Established by the Nazis in 1934 to hear cases of serious political offences, the People’s Courts have been vilified as ‘blood tribunals’ in which judges meted out pre-determined sentences.

But in recent years, while not contending that the People’s Court judgments were impartial or that its judges were not subservient to the wishes of the regime, a more nuanced assessment has emerged.

For the first time, the new study presents direct empirical evidence of the reasons behind the use of judicial discretion and why some judges appeared more willing to implement the will of the state than others.

The researchers find that judges with a deeper ideological commitment to Nazi values – typified by being members of the Alte Kampfer (‘Old Fighters’ or early members of the Nazi party) – were indeed more likely to impose the death penalty than those who did not share it.

These judges were more likely to hand down death penalties to members of the most organised opposition groups, those involved in violent resistance against the state and ‘defendants with characteristics repellent to core Nazi beliefs’:

‘The Alte Kampfer were thus more likely to sentence devout Roman Catholics (24.7 percentage points), defendants with partial Jewish ancestry (34.8 percentage points), juveniles (23.4 percentage points), the unemployed (4.9 percentage points) and foreigners (42.3 percentage points) to death.’

Judges who came of age during two formative historical periods (the Revolution of 1918-19 and the hyperinflation of June 1921 to January 1924), which may have shaped their views with respect to Nazism, were also more likely to impose the death sentence.

Alte Kampfer members whose hometown or suburb lay near a centre of the Revolution of 1918-19 were more likely to sentence a defendant to death.

Previous economic research on sentencing in capital cases has focused mainly on gender and racial disparities, typically in the United States. But understanding of what determines whether courts in modern authoritarian regimes impose the death penalty is scant. By studying a politicised court in an historically important authoritarian state, the authors of the new study shed light on sentencing in authoritarian states more generally.

The findings are important because they provide insights into the practical realities of judicial empowerment by providing rare empirical evidence on how the exercise of judicial discretion in authoritarian states is reflected in sentencing outcomes.

To contact the authors:
Russell Smyth

Decimalising the pound: a victory for the gentlemanly City against the forces of modernity?

by Andy Cook (University of Huddersfield)


1813 guinea

Some media commentators have identified the decimalisation of the UK’s currency in 1971 as the start of a submerging of British identity. For example, writing in the Daily Mail, Dominic Sandbrook characterises it as ‘marking the end of a proud history of defiant insularity and the beginning of the creeping Europeanisation of Britain’s institutions.’

This research, based on Cabinet papers, Bank of England archives, Parliamentary records and other sources, reveals that this interpretation is spurious: it reflects modern preoccupations with the arguments that dominated much of the Brexit debate rather than the actual motivations of key players at the time.

The research examines arguments made by the proponents of alternative systems based on either decimalising the pound, or creating a new unit worth the equivalent of 10 shillings. South Africa, Australia and New Zealand had all recently adopted a 10-shilling unit, and this system was favoured by a wide range of interest groups in the UK, representing consumers, retailers, small and large businesses, and media commentators.

Virtually a lone voice in lobbying for retention of the pound was the City of London, and its arguments, articulated by the Bank of England, were based on a traditional attachment to the international status of sterling. These arguments were accepted, both by the Committee of Enquiry on Decimal Currency, which reported in 1963, and, in 1966, by a Labour government headed by Harold Wilson, who shared the City’s emotional attachment to the pound.

Yet by 1960, the UK had faced the imminent prospect of being virtually the only country retaining non-decimal coinage. Most key economic players agreed that decimalisation was necessary and the only significant bone of contention was the choice of system.

Most informed opinion favoured a new major unit equivalent to 10 shillings, as reflected in evidence given by retailers and other businesses to the Committee of Enquiry on Decimal Currency, and the formation of a Decimal Action Committee by the Consumers Association to press for such a system.

The City, represented by the Bank of England, was implacably opposed to such a system, arguing that the pound’s international prestige was crucial to underpinning the position of the City as a leading financial centre. This assertion was not evidence-based, and internal Bank documents acknowledge that their argument was ‘to some extent based on sentiment’.

This sentiment was shared by Harold Wilson, whose government announced the decision to introduce decimal currency based on the pound in 1966. Five years earlier, he had made an emotional plea to keep the pound arguing that ‘the world will lose something if the pound disappears from the markets of the world’.

Far from being the end of ‘defiant insularity’, the decision to retain a basic currency unit of higher value than that of any other major economy – rather than adopting one closer in value to the US dollar or the even lower-value European currencies – reflected the desire of the City and government to maintain the pound as a distinctive symbol of Britishness, overcoming opposition from interests with more practical concerns.

British perceptions of German post-war industrial relations

By Colin Chamberlain (University of Cambridge)

Some 10,000 steel workers demonstrate in Stuttgart, 11th January 1962. Picture alliance/AP Images.

‘Almost idyllic’ – this was the view of one British commentator on the state of post-war industrial relations in West Germany. No one could say the same about British industrial relations. Here, industrial conflict grew inexorably from year to year, forcing governments to expend ever more effort on preserving industrial peace.

Deeply frustrated, successive governments alternated between appeasing trade unionists and threatening them with new legal sanctions in an effort to improve their behaviour, thereby avoiding tackling the fundamental issue of their institutional structure. If the British had only studied the German ‘model’ of industrial relations more closely, they would have understood better the reforms that needed to be made.

Britain’s poor state of industrial relations was a major, if not the major, factor holding back Britain’s economic growth, which was regularly less than half the German rate, not to speak of the chronic inflation and balance of payments problems that only made matters worse. So why did the British not take a deeper look at the successful model of German industrial relations and learn its lessons?

Ironically, the British were in control of Germany at the time the trade union movement was re-establishing itself after the war. The Trades Union Congress and the British labour movement offered much goodwill and help to the Germans in their task.

But German trade unionists had very different ideas to the British trade unions on how to go about organising their industrial relations, ideas that the British were to ignore consistently over the post-war period. These included:

    • In Britain, there were hundreds of trade unions, but in Germany, there were only 16 re-established after the war, each representing one or more industries, thereby avoiding the demarcation disputes so common in Britain.
    • Terms and conditions were negotiated on this industry basis by strong, well-funded trade unions, which welcomed the fact that their two- or three-year collective agreements were legally enforceable in Germany’s system of industrial courts.
    • Trade unions were not involved in workplace grievances and disputes. These were left to employees and managers, who met in Germany’s highly successful works councils to resolve such issues informally, alongside consultative exercises on working practices and company reorganisations. As a result, German companies facing a fall in demand did not seek to lay off staff, as British companies did, but rather to retrain and redeploy them.

British trade unions pleaded that their very untidy institutional structure with hundreds of competing trade unions was what their members actually wanted and should therefore be outside any government interference. The trade unions jealously guarded their privileges and especially rejected any idea of industry-based unions, legally enforceable collective agreements and works councils.

A heavyweight Royal Commission was appointed, but after three years’ deliberation, it came up with little more than the status quo. It was reluctant to study any ideas emanating from Germany.

While the success of industrial relations in Germany was widely recognised in Britain, there was little understanding of why this was so, or indeed much interest in it. The British were deeply conservative about the ‘institutional shape’ of industrial relations and feared putting forward any radical German ideas. Britain was therefore at a serious disadvantage in creating modern trade unions suited to a modern state.

So, what was the economic price of the failure to sort out the institutional structure of the British trade unions?

From VoxEU – Wellbeing inequality in retrospect

Rising trends in GDP per capita are often interpreted as reflecting rising levels of general wellbeing. But GDP per capita is at best a crude proxy for wellbeing, neglecting important qualitative dimensions.

via VoxEU: Wellbeing inequality in retrospect

To elaborate further on the topic, Prof Leandro Prados de la Escosura has made available several databases on inequality, accessible here, as well as an open-access book on long-term Spanish economic growth, available here.


Winning the capital, winning the war: retail investors in the First World War

by Norma Cohen (Queen Mary University of London)


National War Savings Committee poster. McMaster University Libraries, Identifier: 00001792. Available at Wikimedia Commons.

The First World War brought about an upheaval in British investment, forcing savers to repatriate billions of pounds held abroad and attracting new investors among those living far from London, this research finds. The study also points to declining inequality between Britain’s wealthiest classes and the middle class, and rising purchasing power among the lower middle classes.

The research is based on samples from ledgers of investors in successive War Loans. These are lodged in archives at the Bank of England and have been closed for a century. The research covers roughly 6,000 samples from three separate sets of ledgers of investors between 1914 and 1932.

While the First World War is recalled as a period of national sacrifice and suffering, the reality is that war boosted Britain’s output. Sampling from the ledgers points to the extent to which war unleashed the industrial and engineering innovations of British industry, creating and spreading wealth.

Britain needed capital to ensure it could outlast its enemies. As the world’s leading capital exporter by 1914, the nation imposed increasingly tight measures on investors to ensure capital was used exclusively for war.

While London was home to just over half the capital raised in the first War Loan in 1914, its share had fallen to just under 10% of capital raised in the post-war years. In contrast, the North East, North West and Scotland – home to the mining, engineering and shipbuilding industries – provided 60% of the capital by 1932, up from a quarter of the total raised by the first War Loan.

The concentration of investor occupations also points to profound social changes fostered by war. Men describing themselves as ‘gentleman’ or ‘esquire’ – titles accorded those wealthy enough to live on investment returns – accounted for 55% of retail investors for the first issue of War Loan. By the post-war years, these were 37% of male investors.

In contrast, skilled labourers – blacksmiths, coal miners and railway signalmen among others – were 9.0% of male retail investors by the post-war years, up from 4.9% in the first sample.

Suppliers of war-related goods may not have been the main beneficiaries of newly-created wealth. The sample includes large investments by those supplying consumer goods sought by households made better off by higher wages, steady work and falling unemployment during the war.

During and after the war, these sectors were accused of ‘profiteering’, sparking national indignation. Nearly a quarter of investors in 5% War Loan listing their occupations as ‘manufacturer’ were producing boots and leather goods, a sector singled out during the war for excess profits. Manufacturers in the final sample produced mineral water, worsteds, jam and bread.

My findings show that War Loan was widely held by households likely to have had relatively modest wealth; while the largest concentration of capital remained in the hands of relatively few, larger numbers had a small stake in the fate of the War Loans.

In the post-war years, over half of male retail investors held £500 or less. This may help to explain why efforts to pay for war by taxing wealth as well as income – a debate that echoes today – proved so politically challenging. The rentier class on whom additional taxation would have been levied may have been more of a political construct by 1932 than an actual presence.



Malaria and early African development: evidence from the sickle cell trait

Poster: “Keep out malaria mosquitoes, repair your torn screens”. U.S. Public Health Service, 1941–45.

While malaria historically claimed millions of African lives, it did not hold back the continent’s economic development. That is one of the findings of new research by Emilio Depetris-Chauvin (Pontificia Universidad Católica de Chile) and David Weil (Brown University), published in the Economic Journal.

Their study uses data on the prevalence of the gene that causes sickle cell disease to estimate death rates from malaria for the period before the Second World War. They find that in parts of Africa with high malaria transmission, one in ten children died from malaria or sickle cell disease before reaching adulthood – a death rate more than twice the current burden of malaria in these regions.


According to the World Health Organization, the malaria mortality rate declined by 29% between 2010 and 2015. This was a major public health accomplishment, although with 429,000 annual deaths, the disease remains a terrible scourge.

Countries where malaria is endemic are also, on average, very poor. This correlation has led economists to speculate about whether malaria is a driver of poverty. But addressing that issue is difficult because of a lack of data. Poverty in the tropics has long historical roots, and while there are good data on malaria prevalence in the period since the Second World War, there is no World Malaria Report for 1900, 1800 or 1700.

Biologists only came to understand the nature of malaria in the late nineteenth century. Even today, trained medical personnel have trouble distinguishing between malaria and other diseases without the use of microscopy or diagnostic tests. Accounts from travellers and other historical records provide some evidence of the impact of malaria going back millennia, but these are hardly sufficient to draw firm conclusions (see Akyeampong, 2006; Mabogunje and Richards, 1985).

This study addresses the lack of information on malaria’s impact historically by using genetic data. In the worst afflicted areas, malaria left an imprint on the human genome that can be read today.

Specifically, the researchers look at the prevalence of the gene that causes sickle cell disease. Carrying one copy of this gene provided individuals with a significant level of protection against malaria, but people who carried two copies of the gene died before reaching reproductive age.

Thus, the degree of selective pressure exerted by malaria determined the equilibrium prevalence of the gene in the population. By measuring the prevalence of the gene in modern populations, it is possible to back out estimates of the severity of malaria historically.

In areas of high malaria transmission, 20% of the population carries the sickle cell trait. The researchers estimate that this implies that historically 10-11% of children died from malaria or sickle cell disease before reaching adulthood. Such a death rate is more than twice the current burden of malaria in these regions.
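The arithmetic behind this backing-out step can be sketched with a textbook balanced-polymorphism model. This is a simplification of the study's richer framework: it assumes Hardy-Weinberg genotype frequencies, full protection for carriers, and that children with sickle cell disease did not survive to reproduce.

```python
import math

# Sketch of the equilibrium logic, under the simplifying assumptions above.
# Genotypes: AA (no protection; dies of malaria with probability s in childhood),
# AS (carrier, protected), SS (sickle cell disease, assumed fatal in childhood).

trait_freq = 0.20  # observed share of the population carrying the trait (AS)

# Under Hardy-Weinberg, trait frequency = 2q(1 - q); solve for allele frequency q.
q = (1 - math.sqrt(1 - 2 * trait_freq)) / 2
p = 1 - q

# At equilibrium, selection against AA (malaria) balances selection against SS:
# q_hat = s / (s + 1), hence s = q / (1 - q).
s = q / (1 - q)

malaria_deaths = p**2 * s   # AA children dying of malaria
sickle_deaths = q**2        # SS children dying of sickle cell disease
total = malaria_deaths + sickle_deaths

print(f"allele frequency q ~ {q:.3f}")
print(f"implied childhood malaria mortality among non-carriers ~ {s:.1%}")
print(f"childhood deaths from malaria or sickle cell disease ~ {total:.1%}")
```

With a 20% trait frequency this yields total childhood deaths of roughly 11%, consistent with the 10-11% figure reported above (in this simple model the total in fact collapses to the allele frequency q itself).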

Comparing the most affected areas with those least affected, malaria may have been responsible for a ten percentage point difference in the probability of surviving to adulthood. In areas of high malaria transmission, the researchers estimate that life expectancy at birth was reduced by approximately five years.

Having established the magnitude of malaria’s mortality burden, the researchers then turn to its economic effects. Surprisingly, they find little reason to believe that malaria held back development. A simple life cycle model suggests that the disease was not very important, primarily because the vast majority of deaths that it caused were among the very young, in whom society had invested few resources.

This model-based finding is corroborated by the findings of a statistical examination. Within Africa, areas with higher malaria burden, as evidenced by the prevalence of the sickle cell trait, do not show lower levels of economic development or population density in the colonial era data examined in this study.


To contact the authors: David Weil

Effects of coal-based air pollution on mortality rates: new evidence from nineteenth century Britain

Samuel Griffiths (1873) The Black Country in the 1870s. In Griffiths’ Guide to the iron trade of Great Britain.

Industrialised cities in mid-nineteenth century Britain probably suffered from similar levels of air pollution as urban centres in China and India do today. What’s more, the damage to health caused by the burning of coal was very high, reducing life expectancy by more than 5% in the most polluted cities like Manchester, Sheffield and Birmingham. It was also responsible for a significant proportion of the higher mortality rates in British cities compared with rural parts of the country.

 These are among the findings of new research by Brian Beach (College of William & Mary) and Walker Hanlon (NYU Stern School of Business), which is published in the Economic Journal. Their study shows the potential value of history for providing insights into the long-run consequences of air pollution.

From Beijing to Delhi and Mexico City to Jakarta, cities across the world struggle with high levels of air pollution. To what extent does severe air pollution affect health and broader economic development for these cities? While future academics will almost surely debate this question, assessing the long-run consequences of air pollution for modern cities will not be possible for decades.

But severe air pollution is not a new phenomenon; Britain’s industrial cities of the nineteenth century, for example, also faced very high levels of air pollution. Because of this, researchers argue that history has the potential to provide valuable insights into the long-run consequences of air pollution.

One challenge in studying historical air pollution is that direct pollution measures are largely unavailable before the mid-twentieth century. This study shows how historical pollution levels in England and Wales can be inferred by combining data on the industrial composition of employment in local areas in 1851 with information on the amount of coal used per worker in each industry.

This makes it possible to estimate the amount of coal used in each of 581 districts covering all of England and Wales. Because coal was by far the most important pollutant in Britain in the nineteenth century (as well as much of the twentieth century), this provides a way of approximating local industrial pollution emission levels.
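The construction of this proxy can be illustrated with a minimal sketch: a district's industrial coal intensity is the employment-weighted average of industry-level coal use per worker. The industries and figures below are invented for illustration; they are not the paper's data.

```python
# Hypothetical tons of coal per worker per year, by industry.
COAL_PER_WORKER = {
    "iron and steel": 40.0,
    "textiles": 12.0,
    "agriculture": 0.1,
    "services": 0.5,
}

def district_coal_intensity(employment: dict) -> float:
    """Estimated industrial coal use per worker in a district: each industry's
    coal intensity weighted by its share of local employment."""
    total_workers = sum(employment.values())
    total_coal = sum(
        workers * COAL_PER_WORKER[industry]
        for industry, workers in employment.items()
    )
    return total_coal / total_workers

# Invented 1851-style employment mixes for two contrasting districts.
industrial_district = {"iron and steel": 20000, "textiles": 5000, "services": 10000}
rural_district = {"agriculture": 25000, "services": 5000}

print(district_coal_intensity(industrial_district))  # heavy coal use per worker
print(district_coal_intensity(rural_district))       # close to zero
```

The same weighting applied to 1851 census employment and actual industry coal accounts is what lets the study rank districts by inferred pollution without any direct pollution measurements.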

The results are consistent with what historical sources suggest: the researchers find high levels of coal use in a broad swath of towns stretching from Lancashire and the West Riding down into Staffordshire, as well as in the areas around Newcastle, Cardiff and Birmingham.

By comparing measures of local coal-based pollution to mortality data, the study shows that air pollution was a major contributor to mortality in Britain in the mid-nineteenth century. In the most polluted locations – places like Manchester, Sheffield and Birmingham – the results show that air pollution resulting from industrial coal use reduced life expectancy by more than 5%.

One potential concern is that locations with more industrial coal use could have had higher mortality rates for other reasons. For example, people living in these industrial areas could have been poorer, infectious disease may have been more common or jobs may have been more dangerous.

The researchers deal with this concern by looking at how coal use in some parts of the country affected mortality in other areas that were, given the predominant wind direction, typically downwind. They show that locations which were just downwind of major coal-using areas had higher mortality rates than otherwise similar locations which were just upwind of these areas.
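The logic of this downwind/upwind comparison reduces to a difference in mean mortality between otherwise similar districts on either side of a coal-using centre. The districts and rates below are invented to show the idea; the paper's implementation is far more careful.

```python
# Toy illustration of the downwind identification strategy. Each tuple is
# (district, lies downwind of a major coal-using area?, deaths per 1,000).
# All values are hypothetical.
districts = [
    ("A", True, 26.1), ("B", True, 24.8), ("C", True, 25.5),
    ("D", False, 22.3), ("E", False, 21.9), ("F", False, 23.0),
]

def mean_mortality(downwind: bool) -> float:
    rates = [rate for _, is_down, rate in districts if is_down == downwind]
    return sum(rates) / len(rates)

# If downwind districts are systematically unhealthier than comparable upwind
# ones, the gap is attributable to transported pollution rather than to local
# poverty, disease environment or dangerous jobs.
gap = mean_mortality(True) - mean_mortality(False)
print(f"downwind - upwind mortality gap: {gap:.2f} per 1,000")
```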

These results help to explain why cities in the nineteenth century were much less healthy than more rural areas – the so-called urban mortality penalty. Most existing work argues that the high mortality rates observed in British cities in the nineteenth century were due to the impact of infectious diseases, bad water and unclean food.

The new results show that in fact about one third of the higher mortality rate in cities in the nineteenth century was due to exposure to high levels of air pollution caused by the burning of coal by industry.

In addition to assessing the effects of coal use on mortality, the researchers use these effects to back out very rough estimates of historical particulate pollution levels. Their estimates indicate that by the mid-nineteenth century, industrialised cities in Britain were probably as polluted as industrial cities in places like China and India are today.

These findings shed new light on the impact of air pollution in nineteenth century Britain and lay the groundwork for further research analysing the long-run effects of air pollution in cities.


To contact the authors: Brian Beach; Walker Hanlon

Managing the Economy, Managing the People: narratives of economic life in Britain from Beveridge to Brexit

by Jim Tomlinson (University of Glasgow)


‘It’s the economy, stupid’, like most clichés, both reveals and conceals important truths. The slogan suggests a hugely important truth about the post-1945 politics of advanced democracies such as Britain: that economic issues have been crucial to government strategies and political arguments. What the cliché conceals is the need to examine what is understood by ‘the economy’, a term which has no fixed meaning and has been constantly re-worked over the years. Starting from those two points, this book provides a distinctive new account of British economic life since the 1940s, focussing upon how successive governments, in seeking to manage the economy, have sought simultaneously to ‘manage the people’: to try to manage popular understanding of economic issues.

The first half of the book analyses the development of the major narratives from the 1940s onwards. This covers the notion of ‘austerity’ and its particular meaning in the 1940s; the rise of a narrative of ‘economic decline’ from the late 1950s, and the subsequent attempts to ‘modernize’ the economy; the attempts to ‘roll back the state’ from the 1970s; the impact of ideas of ‘globalization’ in the 1990s; and, finally, the way the crisis of 2008/9 onwards was constructed as a problem of ‘debts and deficits’. The second part focuses on four key issues in attempts to ‘manage the people’: productivity, the balance of payments, inflation and unemployment. It shows how in each case governments sought to get the populace to understand these issues in a particular light, and shaped strategies to that end.

One conclusion of the book is that most representations of the key economic problems of the post-war period were grounded in Britain’s status as an industrial economy, a grounding that de-industrialization has undermined. Unemployment, from its origins in the late-Victorian period, was largely about the malfunctioning of industrial (and male) labour markets. De-industrialization, accompanied by the proliferation of precarious work, including much classified as ‘self-employment’, radically challenges our understanding of this problem, however much it remains the case that for the great bulk of the population selling their labour is key to their economic prosperity.

The concern with productivity was likewise grounded in the industrial sector. But outside the marketed services, in non-marketed provision like education, health and care, the problems of conceptualising, let alone measuring, productivity are immense. In a world where personal services of various kinds are becoming ever more important, traditional notions of productivity need a radical re-think.

Less obviously, the notion of a national rate of inflation, such as the Cost of Living Index and later the RPI, was grounded in attempts to measure the real wages of the industrial working class. With the value of housing a key underpinning for consumption, and the ‘financialization’ of the economy, this traditional notion of inflation, measuring the cost of a basket of consumables against nominal wages, has been undermined. Asset prices, especially housing prices, matter much more to many wage earners, whilst the value of financial assets is also important to increasing numbers of people as the population ages.

Finally, the decline of concern with the balance of payments is linked to the rise in the relative importance of financial flows, making the manufacturing balance or the current account less pertinent. For many years now, Britain’s external payments have relied on the rates of return on overseas assets exceeding those on domestic assets held by foreigners. We are a very long way indeed from 1940s stories of ‘Britain’s bread hangs by Lancashire’s thread’.

De-industrialization has not only undercut the coherence and relevance of the four standard economic policy problems of the post-war years, but has also destroyed the primary audience that most post-war economic propaganda was aimed at: the industrial working class. While other audiences were not entirely neglected, it was the worker (usually the male worker), who was the prime target of the narratives and whose understandings and behaviour were seen as the key to the projected solutions.

A recurrent anxiety of this propaganda was the receptivity of those workers to its messages. This anxiety helps to explain much of the ‘simplified’ language of the propaganda, as well as its patterns of distribution. More fundamentally, it rested upon uncertainties about what kind of arguments a working-class audience would find congenial; there was perennial debate about the efficacy of appeals to individual as opposed to ‘national’ interest. Above all, there was a moral message of distributive justice infusing much of the propaganda, ultimately grounded in the belief that working-class culture had within it ingrained notions of ‘fairness’ that had to be appealed to.

While ethical appeals continued to inform economic propaganda into the twenty-first century, the fragmentation of the old audience accelerated. In addition, given the upward lurch in inequality in the 1980s and the subsequent continuing growth of incomes right at the top of the distribution, appeals to ‘fairness’ have become much more difficult to make credible. Strikingly, concerns about inequality emerged across the political spectrum after the 2007/8 financial crisis, even as the narrative of debts, deficits and austerity drove post-crisis policies that increased inequality. Widespread talk of ‘reducing inequality’, whilst having obvious political appeal, especially after Brexit, would seem to be largely rhetorical.


Managing the Economy, Managing the People: narratives of economic life in Britain from Beveridge to Brexit is published by Oxford University Press, 2017, ISBN 978-0-19-878609-2

To contact the author:

Land reform and agrarian conflict in 1930s Spain

Jordi Domènech (Universidad Carlos III de Madrid) and Francisco Herreros (Institute of Policies and Public Goods, Spanish Higher Scientific Council)

Government intervention in land markets is always fraught with potential problems. Intervention generates clearly demarcated groups of winners and losers as land is the main asset owned by households in predominantly agrarian contexts. Consequently, intervention can lead to large, generally welfare-reducing changes in the behaviour of the main groups affected by reform, and to policies being poorly targeted towards potential beneficiaries.

In this paper (available here), we analyse the impact of tenancy reform in the early 1930s on Spanish land markets. Adapting general laws to local and regional variation in land tenure patterns, and to heterogeneity in rural contracts, was one of the problems of agricultural policy in 1930s Spain. The interest of the Catalan case lies in the adaptation of a centralized tenancy reform, aimed at fixed-rent contracts, to the sharecropping contracts that predominated in Catalan agriculture. This was most typically the case for sharecropping contracts on vineyards, in particular the customary sharecropping contract (rabassa morta), which had been subject to various legal changes in the late 18th and early 19th centuries. The 1930s are generally seen as the culmination of a long period of conflict between the so-called rabassaires (sharecroppers under rabassa morta contracts) and landowners.

The division between landowners and tenants was one of the central cleavages in 20th-century Catalonia. This was so even in an area that had seen substantial industrialization. In the early 1920s, work started on a Catalan law of rural contracts, aimed especially at sharecroppers. A law passed on 21 March 1934 allowed the re-negotiation of existing rural contracts and prohibited the eviction of tenants who had held the same contract for fewer than six years. More importantly, it opened the door to forced sales of land to long-term tenants. These legislative changes posed a threat to the status quo, and the Spanish Constitutional Court ruled the law unconstitutional.

The comparative literature on the impacts of land reform argues that land reform, in this case tenancy reform, can in fact change agrarian structures. When property rights are threatened, landowners react by selling land or interrupting existing tenancy contracts, mechanizing, and hiring labourers. Agrarian structure is therefore endogenous to existing threats to property rights. The extent of insecurity in property rights in 1930s Catalonia can be seen in the wave of litigation over sharecropping contracts: over 30,000 contracts were revised in the courts in late 1931 and 1932, which provoked satirical cartoons (Figure 1).

Figure 1. Revisions and the share of the harvest. Source: L’Esquella de la Torratxa, 2nd August 1932, p. 11.
Translation: The rabassaire question. Peasant: ‘You sweat coming here to claim your share of the harvest; you would sweat far more if you had to grow it yourself.’

The first wave of petitions to revise contracts led overwhelmingly to petitions being nullified by the courts. This was most pronounced in the Spanish Supreme Court, which ruled against the sharecropper in most of the roughly 30,000 petitions for contract revision. Nonetheless, sharecroppers were protected by the Catalan autonomous government. The political context in which the Catalan government operated became even more charged in October 1934. That month, with signs that the Centre-Right government was moving towards more reactionary positions, the Generalitat participated in a rebellion orchestrated by the Spanish Socialist Party (PSOE) and Left Republicans. In the ensuing suspension of civil liberties, landowners had a freer hand to evict unruly peasants: under the new rules set by the new military governor of Catalonia, sharecroppers who did not surrender their harvest could be evicted straight away.

We use the number of completed and initiated tenant evictions from October 1934 to around mid-1935 as the main dependent variable in the paper. Data were collected from a report produced by the main Catalan tenant union, Unió de Rabassaires (Rabassaires’ Union), published in late 1935 to publicize and denounce evictions and attempted evictions of tenants.

Combining the spatial analysis of eviction cases with individual information on evictors and evicted, we can be reasonably confident about several facts concerning evictions and terminated contracts in 1930s Catalonia. Our data show that rabassa morta legacies were not the main determinant of evictions. About 6 per cent of terminated contracts were open-ended rabassa morta contracts (arbitrarily set at 150 years in the graph). About 12 per cent of evictions were linked to contracts longer than 50 years, which were probably oral contracts (since Spanish legislation had set a maximum of 50 years). Figure 2 gives the contract lengths of terminated and threatened contracts.

Figure 2. Histogram of contract lengths. Source: Own elaboration from Unió de Rabassaires, Els desnonaments rústics.

The spatial distribution of evictions is also consistent with the lack of historical legacies of conflict. Evictions were not more common in historical rabassa morta areas, nor were they typical of areas with a larger share of land planted with vines.

Our study provides a substantial revision of claims by unions and historians of very high levels of conflict in the Catalan countryside during the Second Republic. In many cases, there had been a long process of adaptation and fine-tuning of contractual forms to crops and to soil and climatic conditions, which increased the costs of altering existing institutional arrangements.

To contact the authors: