War, shortage and Thailand’s industrialisation, 1932-57

by Panarat Anamwathana (University of Oxford)

This study was awarded the prize for the best conference paper by a new researcher at the Economic History Society’s 2019 annual conference in Belfast.


1954 Bangkok street. Available at Wikimedia Commons.

Thailand fell under Japanese occupation during the Second World War. The small agrarian country relied on imports from the West for consumer and industrial goods, and suffered shortages of everything from clothes to machinery between 1941 and 1945.

After the Japanese surrender, the Thai government learned from its trauma, adapted its economic approach and began domestic production of its own consumer goods – although at the cost of inefficiencies and rent-seeking.

Economic historians have expressed different perspectives on Thailand’s immediate post-war economic development and state-led industrialisation programme. Some, such as Hewison (1989) and Ingram (1971), mention the expansion of manufacturing capacity, despite government inefficiencies. Others, such as Suehiro (1989) and Phongpaichit and Baker (1995), are more critical of state involvement, saying that rent-seeking and corruption hindered any real progress.

Anyone familiar with state-operated enterprises might be suspicious of Thailand’s state-led industrialisation approach. To protect many of the country’s new industries, import tariffs and quotas were introduced. At the same time, a new class of capitalists emerged from an alliance of politicians and entrepreneurs. These people benefitted from favourable concessions, state-sponsored monopolies and lucrative import licences. The question is: did anything come out of all this?

Since Thailand had no industrial census for the period, it is difficult to measure changes in the kingdom’s manufacturing capacity from before the war to after the war. To address this challenge, I have gathered statistical data on three industries: sugar, textiles and gunny bags (which are essential for transporting rice, Thailand’s most important export crop). These goods were three of Thailand’s most important pre-war imports, key to the wellbeing of the population and rationed during the war.

My data come from a variety of primary sources from the National Archives of Thailand, the National Archives at Kew, and the National Archives and Records Administration in Washington, DC. I also read previously unused qualitative sources, such as government reports, correspondence and old newspapers to build a more complete picture of wartime Thailand.

I find that Thailand was able to produce more of its own sugar, textiles and gunny bags after 1945, and continued to substitute for imports as the decade progressed. This was achievable in part because the shortage of goods during the war reinforced the drive to diversify the economy. Government systems and infrastructure established under the Japanese occupation, but hindered by wartime circumstances, could now take advantage of imported machinery and international credit.

Finally, machines and facilities abandoned by the Japanese army could be used by the post-war Thai government and their capitalist allies. I also find that per capita consumption either plateaued or increased during this period, suggesting that Thais were not deprived of these products because of the government’s industrialisation programme.

Corruption and rent-seeking, however, were common, as can easily happen in state-led industrialisation programmes with little transparency, like Thailand’s.

For example, the Sugar Organisation, the most important state-operated enterprise in the industry, played a large role in transporting sugar from both private and government mills to shops. Unfortunately, the organisation was thoroughly corrupt. It embezzled funds, cheated farmers, sold sugar to fictitious agents and distributors, and was lax in inspection and regulation. Although the state did revoke some of the organisation’s privileges, it continued to operate throughout all the scandals.

My study not only contributes to the historiography of Thai economic development, but also engages with studies of various models of economic growth, the efficiency and costs of state-operated enterprises, and the legacies of the Second World War in occupied territories.



Further reading

Hewison, Kevin (1989) Bankers and Bureaucrats: Capital and the Role of the State in Thailand, New Haven.

Ingram, James C (1971) Economic Change in Thailand, 1850-1970, Stanford University Press.

Phongpaichit, Pasuk, and Chris Baker (1995). Thailand: Economy and Politics, Oxford University Press.

Suehiro, Akira (1989) Capital Accumulation in Thailand, Tokyo.

Is bad news ever good for stocks? The importance of time-varying war risk and stock returns

by Gertjan Verdickt (University of Antwerp)

This paper was presented at the EHS Annual Conference 2019 in Belfast.


Brussels Stock Exchange Building (Bourse or Beurs). Available at Wikimedia Commons.

War is arguably one of the most severe events to affect stock markets. Because wars rarely occur, it is difficult to document the effect of an increase in either the threat or the act of war. Going back to history can go a long way towards filling this gap.

In my research, I start by collecting a large sample of articles from the archives of The Economist to create two metrics, Threat and Act. This sample contains 79,568 articles from the period January 1885 to December 1913. To mimic investors and understand the content of news items, I combine textual analysis with a thorough human reading.

First, I document that Threat is a good predictor of actual events. If The Economist writes more about a potential military conflict, the probability of that conflict actually happening in the future is higher.

The other metric, Act, only captures conflicts that are happening right now. This suggests that, in contrast to what other historians find, The Economist did not write about war excessively but chose its war news coverage appropriately.
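A dictionary-based text classification of this kind can be sketched as follows. The keyword lists and article texts below are purely illustrative assumptions, not the paper's actual dictionaries or data:

```python
# Hypothetical keyword lists standing in for the paper's actual dictionaries.
THREAT_TERMS = {"ultimatum", "mobilisation", "tension", "threat of war"}
ACT_TERMS = {"battle", "invasion", "declared war", "bombardment"}

def classify_article(text):
    """Return which metrics ('threat', 'act') an article counts towards."""
    lowered = text.lower()
    hits = set()
    if any(term in lowered for term in THREAT_TERMS):
        hits.add("threat")
    if any(term in lowered for term in ACT_TERMS):
        hits.add("act")
    return hits

def monthly_counts(articles):
    """articles: list of (month, text) pairs -> per-month Threat/Act counts."""
    counts = {}
    for month, text in articles:
        entry = counts.setdefault(month, {"threat": 0, "act": 0})
        for metric in classify_article(text):
            entry[metric] += 1
    return counts
```

In practice such counts would be scaled by the total number of articles per month before being used as a time series; the human reading described in the text would then correct misclassified items.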


Second, I focus on seven countries with stock listings on the Brussels Stock Exchange: Belgium, France, Germany, Italy, Russia, Spain and the Netherlands. These countries are important for Belgium, either through trade or through a large number of stock listings in Brussels.

Additionally, I use information on other European and non-European countries with stock listings in Brussels to test whether war risk could be considered a European or global form of risk.

For the seven countries, I document that firms do not adjust dividend policies when there is an increase in the threat of war, but only when there is an outbreak of war.

Investors, on the other hand, sell their stocks when there is an increase in either the potential or the outbreak of a military conflict. When the threat is not followed by an act, stock prices recover to levels similar to those before.

But when there is an outbreak of war, stock returns are negative up to 12 months after the initial increase. This shows that war risk is priced appropriately in stock markets, but that the outbreak of war is associated with higher uncertainty and welfare costs.

More interestingly, I show that there is a decrease in stock prices for other European countries, but no effect for non-European countries. This suggests that investors take proximity to a war into account. But firms from these countries do not adjust their dividend policy when Threat and Act increase.

Trains of thought: evidence from Sweden on how railways helped ideas to travel

by Eric Melander (University of Warwick)

This paper was presented at the EHS Annual Conference 2019 in Belfast.



Navvies during work at Nybro-Sävsjöströms Järnväg in Sweden. Standing far right is Oskar Lindahl from Mackamåla outside Målerås. Available at Wikimedia Commons.

The role of rail transport in shaping the geography of economic development is widely recognised. My research shows that its role in enabling the spread of political ideas is equally significant.

Using a natural experiment from Swedish history, I find that the increased ability of individuals to travel was a key driver for the spatial diffusion of engagement in grassroots social movements pushing for democratisation.

In nineteenth century Sweden, as in much of Europe, trade unions, leftist parties, temperance movements and non-state churches were important forces for democratisation and social reform. By 1910, 700,000 Swedes were members of at least one such group, out of a total population of around 5.5 million. At the same time, the Swedish rail network had expanded from just 6,000km in 1881 to 14,000km by 1910.

Swedish social historians, such as Sven Lundkvist, have noted that personal visits by agitators and preachers were important channels for the spread of the new ideas. A key example is August Palm, a notable social democrat and labour activist, who made heavy use of the railways during his extensive ‘agitation travels’. And Swedish political economist and economic historian Eli Heckscher has written about the ‘democratising effect’ of travel in this period.

My study is the first to test the hypothesised link between railway expansion and the success of these movements formally, using modern economic and statistical techniques.

By analysing a rich dataset, including historical railway maps, information on passenger and freight traffic, census data and archival information on social movement membership in Sweden, I demonstrate the impact of railway access on the spread and growth of activist organisations. Well-connected towns and cities were more likely to host at least one social movement organisation, and to see more rapid growth in membership numbers.

A key mechanism underlying this result is that railways reduced effective distances between places: the increased ability of individuals to travel and spread their ideas drove the spatial diffusion of movement membership.
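The 'reduced effective distances' mechanism can be illustrated with a toy shortest-path calculation over travel times. All place names and travel times below are hypothetical, for illustration only, and do not come from the study's data:

```python
import heapq

def shortest_time(graph, src, dst):
    """Dijkstra's algorithm over travel times (hours) on an undirected graph."""
    dist = {src: 0.0}
    queue = [(0.0, src)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, hours in graph.get(node, []):
            nd = d + hours
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(queue, (nd, nbr))
    return float("inf")

def add_edge(graph, a, b, hours):
    graph.setdefault(a, []).append((b, hours))
    graph.setdefault(b, []).append((a, hours))

# Toy network of three towns linked by slow road travel (hypothetical times).
network = {}
add_edge(network, "Stockholm", "Nybro", 30.0)
add_edge(network, "Nybro", "Malmo", 20.0)

before = shortest_time(network, "Stockholm", "Malmo")  # 50 hours via road

# A new railway line connects the two endpoints directly and much faster.
add_edge(network, "Stockholm", "Malmo", 12.0)
after = shortest_time(network, "Stockholm", "Malmo")   # 12 hours by rail
```

Computing such effective distances between every pair of places, before and after each railway opening, is one plausible way to quantify how much a new line shrank the space over which agitators could travel.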

The positive impact of rail comes only from increased passenger flows to a town or city – freight volumes had no impact, suggesting that it was the mobility of individuals that spread new ideas, not a broader acceleration of economic activity.

These findings are important because they shed light on the role played by railways, and by communication technology more broadly, in the diffusion of ideas. Recent work on this topic has focused on the role of social media on short-lived bursts of (extreme) collective action. Research by Daron Acemoglu, Tarek Hassan and Ahmed Tahoun, for example, shows that Twitter shaped protest activity during the Arab Spring.

My study shows that technology also matters for more broad-based popular engagement in nascent social movements over much longer time horizons. Identifying the importance of technology for the historical spread of democratic ideas can therefore sharpen our understanding of contemporary political events.

Upward mobility of Nazi party members during the Third Reich

by Matthias Blum and Alan de Bromhead (Queen’s Management School at Queen’s University Belfast)

This paper was presented at the EHS Annual Conference 2019 in Belfast.



Gathering of high-ranking Nazi officials in Berlin. Left to right: Georg von Detten, Heinrich Sahm, August Wilhelm of Prussia, Hermann Goering, Julius Lippert, Karl Ernst and Artur Görlitzer. Available at Wikimedia Commons.

Members of Nazi organisations climbed higher up the social ladder than non-members in the 1930s and 1940s. This was not due to Nazis being awarded higher-status jobs, but instead to already upwardly mobile individuals being attracted to the movement.

We examined a unique dataset of approximately 10,000 World War II German soldiers that contains detailed information on social background, such as occupation and education, as well as other characteristics like religion, criminal record and military service. The dataset also identifies membership of different Nazi organisations, such as the NSDAP, the SA, the SS and the Hitler Youth.

Comparing the social backgrounds of Nazi members and non-members reveals that Nazis were more likely to come from high-status backgrounds and had higher levels of education. Indeed, the odds of being a member of the Nazi party were almost twice as high for someone from a higher-status background than for someone from a low-status one. We also confirm a common finding that Catholics were less likely to be Nazi members.
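The 'twice as high' claim is a statement about an odds ratio, which can be computed directly from a 2x2 table of membership by background. The counts below are invented for illustration and are not the study's data:

```python
# Hypothetical 2x2 table: party membership by social background.
high_status = {"member": 120, "non_member": 180}
low_status  = {"member":  75, "non_member": 225}

def odds(group):
    """Odds of membership = count(member) / count(non-member)."""
    return group["member"] / group["non_member"]

# 120/180 ~ 0.67 odds for high status; 75/225 ~ 0.33 for low status,
# giving an odds ratio of 2.0 -- 'twice as high', as in the text.
odds_ratio = odds(high_status) / odds(low_status)
```

In the study itself such a ratio would come from a logistic regression controlling for other characteristics (religion, education, and so on), but the raw-table version conveys the interpretation.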

When looking at social mobility between generations, Nazi members advance further than non-members. But this appears to be driven by ‘upwardly mobile’ people – those that showed social mobility early on in their lives – subsequently joining the Nazis. This suggests that ‘ambitious’ or ‘driven’ individuals may have been attracted to the Nazi movement.

Although it is impossible to uncover exactly what motivated people to join the Nazis, our findings suggest that many educated and ambitious individuals from the higher end of the social scale were attracted to the movement. Interestingly, this seems to be the case not just for those who joined after the Nazi party came to power in 1933, but also for those who joined when the party was on the fringes of the Weimar political system in the 1920s.

Our study not only helps us to understand how the Nazi party emerged and came to power in the years before World War II, but also gives us an insight into how extremist organisations form and attract members more generally. It reminds us that we need to think beyond pure ideology when it comes to motivations for joining extremist groups and look at economic and social factors too.


For more information on the preliminary findings of the study, please visit: http://www.quceh.org.uk/uploads/1/0/5/5/10558478/wp17-04.pdf

Financing the fight: sovereignty, networks and the French resistance during World War II

by David Foulk (Oriel College, University of Oxford)

This paper was presented at the EHS Annual Conference 2019 in Belfast.


Commander of Free French Forces General Charles de Gaulle seated at his desk in London during the Second World War. Available at Wikimedia Commons.

Under General Charles de Gaulle, the Free French movement represented a different conception of France – free from the defeat that Marshal Philippe Pétain’s armistice and the Vichy regime represented. While metropolitan France had been overrun, subjugated under enemy jackboots, this could not be said for all French overseas territories.

When de Gaulle formed his military movement, in London, there was no indication that those colonies would support his efforts to rally an external resistance movement. But by the end of 1940, some had rallied to his side.

Such actions would fundamentally change the nature of the movement: from a purely martial enterprise into a state-in-waiting. This raised important questions of sovereignty.

These territories were part of the French empire yet were being driven to support a rebel movement, in the hope of liberating France. Who was to support their economy? What part were they to play, both economically and militarily?

Having been separated from metropolitan French institutions, including the Banque de France, these territories began to experience economic difficulties, from replacing worn banknotes to the abrupt severance from their chief export markets.

Under the leadership of ‘experts’, supported by the Bank of England and His Majesty’s Treasury, the Free French financial service found a method to finance its cause: one based on British government advances, transnational donations and colonial exploitation.

Their funds supported covert action in France. The Gaullist equivalent of the Special Operations Executive, known as the Bureau Central de Renseignements et d’Action, parachuted containers filled with weapons, equipment and currency (in francs and dollars) into France.

These groups were created to perform sabotage, disseminate propaganda and establish escape routes for downed Allied airmen and other groups targeted by the invading forces or Vichy’s civil security. Obtaining finance was a perpetual problem.

Jean Moulin, the former prefect of Eure-et-Loir, was appointed as General de Gaulle’s representative to the internal resistance movements. His role was to act as a coordinator for the three main groups – Combat, Franc-Tireur and Libération. This was achieved through a judicious use of finances, organised by his secretary, Daniel Cordier.

Great stock was placed in Moulin’s powers of political persuasion. His mission was a success and the Mouvements unis de la Résistance was established in January 1943. Without the financial backing of the Gaullist movement, this internal network could not have existed. This did, nevertheless, lend credence to Vichy propaganda, which implied that de Gaulle and his movement were under the control of the British government.

American economic support came in the form of Lend-Lease, which supplied the majority of Free French troops with equipment and weapons. It was reimbursed, in part, through reciprocal aid.

This entailed French bases providing housing, offices and workspace to help US troops launch operations, notably in New Caledonia, in the Pacific. A delegation of Free French supporters within the United States acted as a financial conduit for funds sent from other support groups throughout the world.

The predecessor to the CIA, the Office of Strategic Services, financially supported resistance activity from Switzerland through Allen Dulles. This briefly allowed the Americans a means of bypassing Gaullist intelligence services, and further underlines the transnational nature of the financing of the French resistance.

My study uses social network analysis, a digital humanities technique that maps interactions between correspondents, to identify the key figures among the financiers of the resistance.

Their interactions reveal the financial ties that bound the French resistance to those who drove its economic policy. Without international support, military resistance in France would have been inconceivable.
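The core of such a social network analysis can be sketched in a few lines: build a graph from correspondence pairs, then rank figures by how many distinct correspondents they have (degree centrality). The letter pairs below are hypothetical placeholders, not the archival record:

```python
# Hypothetical correspondence pairs (sender, recipient); in the study these
# would be drawn from the archival letters themselves.
letters = [
    ("Moulin", "Cordier"), ("Cordier", "Combat"),
    ("Cordier", "Franc-Tireur"), ("Cordier", "Liberation"),
    ("Moulin", "de Gaulle"),
]

# Build an undirected graph: each person maps to their set of correspondents.
neighbours = {}
for sender, recipient in letters:
    neighbours.setdefault(sender, set()).add(recipient)
    neighbours.setdefault(recipient, set()).add(sender)

# Degree centrality: number of distinct correspondents per figure.
centrality = {name: len(nbrs) for name, nbrs in neighbours.items()}
key_figure = max(centrality, key=centrality.get)
```

Richer measures (betweenness, weighted edges for letter volume) follow the same pattern; the point is that centrality scores turn a pile of correspondence into a ranking of who held the network together.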

Economic exploitation: a comparative case study of the cost of human smuggling

by Alicia Kidd (Wilberforce Institute, University of Hull)

This paper was presented at the EHS Annual Conference 2019 in Belfast.


Border crossing at Gibraltar. Available at Wikimedia Commons.

The terms human smuggling and human trafficking are often confused and used interchangeably. While there is a terminological distinction between the two, it is possible for the lines to be blurred when an individual’s relationship with a smuggler becomes exploitative. My study, presented at the Economic History Society’s 2019 annual conference, uses information gathered from face-to-face interviews to argue that individuals’ economic circumstances play a role in blurring those lines.

My research considers two qualitative case studies of individuals who hired agents to assist them in illegitimately escaping their home countries in the hope of reaching safer living conditions abroad. Both respondents were confronted with dangerous situations in which they faced limited options of either staying in their home countries where they were at risk of death or unfair arrest, or finding a way to escape.

Both chose the latter, but there was no possibility of leaving their countries legally without attracting the attention of the authorities looking for them. This led both respondents to enlist smugglers to help them out of their home countries.

The respondents were from significantly different backgrounds and their economic standing influenced the path that lay ahead for them. The first had a relatively wealthy upbringing, had received a good education and was working in a lucrative job.

He was able to pay a smuggler upfront to assist him in leaving the country to avoid the death penalty he was facing. This was a simple financial exchange in which he paid a set fee in return for a service. His relationship with the smuggler ended after he had crossed the border out of his home country.

The second respondent’s experience was quite different. She was from a poor background and did not have the money to pay the smuggler prior to travel. Instead, she agreed with the agent that, in return for safely removing her from the country, she would work for him on arrival to pay off her debt. But while an agreement had been made that this would be care work, after arriving in the UK she was locked in a basement and sexually exploited as a means to pay off her debt.

My research uses these two case studies to explore the hugely diverse outcomes of an exchange with a smuggler. By assessing the divergent economic positions of the two individuals, the research addresses how the difference between a payment and a debt can lead to the difference between safety and extreme exploitation.

Anthropometric history and the measurement of wellbeing

Bernard Harris (University of Strathclyde)

This paper was presented at the EHS Annual Conference 2019 in Belfast.


Variations in human stature, 1887. Available at Wikimedia Commons.

Interest in the history of human height, and other anthropometric indicators, has increased dramatically over the last four decades. Most of the earliest studies were based on measurements obtained from living subjects, but increasing use has also been made of skeletal evidence (see, for example, Steckel et al, 2019).

The development of the field reflects James Tanner’s conception of height as a ‘mirror of the condition of society’. The growth of children, he wrote, ‘is a wonderfully good gauge of living conditions and the relative prosperity of different groups in a population’ as well as an effective form of health screening (Tanner, 1987).

The use of height as a measure of human welfare can be traced back at least as far as the first half of the nineteenth century. In 1829, the French physician, Louis-René Villermé, argued that ‘human height becomes greater and growth more rapid… as a country is richer…. The circumstances which accompany poverty delay the age at which complete stature is reached and stunt adult height’ (Tanner, 1981).

During the 1980s and 1990s, Roderick Floud (1984), John Komlos (1987) and Richard Steckel (1992) all highlighted the value of height as a measure of human ‘wellbeing’. For Steckel, ‘average height is also conceptually consistent with [Amartya] Sen’s framework of functionings and capabilities though, of course, height registers primarily conditions of health during the growing years as opposed to one’s status with regard to commodities more generally’.

My paper at the Economic History Society’s 2019 annual conference revisits some of these arguments to ask whether studies of height still provide a general guide to the wellbeing of past societies. It starts by looking at the background to the development of the field before considering some possible challenges.

These include debates over the reliability of historical height data, the nature of human growth and the proximate determinants of variations in human stature. The paper also explores the extent to which these variations can also be associated with indicators of future wellbeing.



Floud, R (1984) ‘Measuring the transformation of the European economies: income, health and welfare’, CEPR Discussion Paper No. 33.

Komlos, J (1987) ‘The height and weight of West Point cadets: dietary change in antebellum America’, Journal of Economic History 47: 897-927.

Steckel, RH (1992) ‘Stature and living standards in the United States’, in R Gallman and J Wallis (eds) American economic growth before the Civil War, University of Chicago Press.

Steckel, RH, CS Larsen, CA Roberts and J Baten (eds) (2019) The backbone of Europe: health, diet, work and violence over two millennia, Cambridge University Press.

Tanner, J (1981) A history of the study of human growth, Cambridge University Press.

Tanner, J (1987), ‘Growth as a mirror of the condition of society: secular trends and class distinctions’, Pediatrics International 29: 96-103.

A comparative history of occupational structure and urbanisation across Africa: design, data and preliminary overview

by Gareth Austin (University of Cambridge) and Leigh Shaw-Taylor (University of Cambridge)

This paper was presented at the EHS Annual Conference 2019 in Belfast.


A lone giraffe in Nairobi National Park. Available at Wikimedia Commons.

The general story in the research literature is that, under colonial rule from around the 1890s to around 1960, African economies became structured around exports of primary products. This structure persisted through the unsuccessful early post-colonial policies of import-substituting industrialisation, was entrenched by ‘structural adjustment’ in the 1980s, and has continued through the relatively strong economic growth across the continent since around 1995.

Our research offers a preliminary overview of the AFCHOS project, an international collaboration involving 20 scholars currently preparing 15 national or sub-national case studies. The discussion is organised in two sections.

Section I describes how, by creating country databases as an essential first step, we aim to develop the first overview of changing occupational structures across sub-Saharan Africa, from the moment when the necessary data became available in the country concerned, to the present.

We track the shifts between agriculture, extraction, the secondary sector and services, and explore the trends in specific occupational groups within each of these sectors. We also examine the closely related process of urbanisation.

The core of the enterprise is the construction of datasets that reflect without distortion the specificities of African conditions, are commensurable across the continent, and are also commensurable with the datasets developed by parallel projects on the occupational structures of Eurasia and the Americas.

Section II outlines preliminary findings. It is centred on four graphs, depicting the evolution of the share of the economically active population in each sector, for about 14 countries. We relate these to the indications of the evolution of the size and location of population, and the size and composition of GDP.

The population of sub-Saharan Africa has increased perhaps six times since the influenza pandemic of 1918, and average living standards have not fallen: a remarkable achievement in terms of aggregate economic growth, and one that has not been sufficiently appreciated.

It is also striking that the multiplication of population, enabled by falling mortality rates, was accompanied by rapid urbanisation. There were also improvements in living standards, though modest and uneven.

Agriculture’s share in employment generally fell, especially after 1960. The share of manufacturing evolved quite differently over space and time within Africa, as we will elaborate.

Urbanisation has been accompanied by a general growth of employment in services. Where we have disaggregated the latter, so far, there has been dramatic growth in transport and distributive trades, suggesting increasing integration of national and regional economies – an important step in economic development.

Cotton, industrialisation and a missing piece of the puzzle

by Alka Raman (London School of Economics)

This study was awarded the prize for the best new researcher poster at the EHS Annual Conference 2019 in Belfast. The poster can be viewed here.


Cotton merchant, taken by Francis Frith between 1850 and 1870. Available at Wikimedia Commons.

The first Industrial Revolution has long been seen as the beacon of modernity, heralding unprecedented economic growth and the biggest uplift of living standards in human history. Its prominence among themes in economic history is such that it dwarfs all others in comparison – obscuring the fact that the British cotton industry, the nucleus of industrialisation, was not the world’s first cotton manufacturing industry serving a global demand for cotton goods.

Handmade cotton fabrics were exported from India to the rest of the world as early as the twelfth century. Indeed, every textbook on economic history, when charting the growth of the British cotton industry, precedes its achievements with a dutiful narration of the introduction of cotton goods into England by the English East India Company in 1699 and the ‘frenzy’ for these cottons in the domestic and overseas markets.

But a passing reference to imitations quickly gives way to an impressive series of mechanisations and illustrious British inventors associated with them. Any connection to the preceding handmade Indian product is effectively lost.

Consequently, a crucial piece of the puzzle – how the seat of cotton manufacturing went from the Indian subcontinent to the heart of England – has remained inadequately explained. Learning from pre-existing products has been mentioned, but what this learning contained, how it may have been transferred and with what kind of outcomes are concepts that have been under-explored.

Hence the question at the heart of my research: did the pre-existing, handmade and globally demanded Indian cottons influence the growth and technological trajectory of the nascent British cotton industry?

Central to my thesis is the idea that the pre-industrial Indian cotton textiles contained the material knowledge required for their successful imitation and reproduction. These handmade Indian cottons embodied the cloth quality, print, design and product finish that the machine-made goods sought to imitate. Did learning from these pre-existing market-approved products contribute to the growth of early British cotton manufacturing?

My research identifies learning from the benchmark product, as well as competition with it, as two simultaneous stimuli shaping the British cotton industry during its initial phase. In terms of methodology, the thesis tests these two stimuli against historical textual and material evidence.

The writings of manufacturers, traders and historians/commentators of the period show that both manufacturers and innovators recognised that there was a knowledge problem or a ‘skills gap’: British spinners could not spin cotton warp to match Indian hand-spun warp’s quality. Entrepreneurs identified matching the quality of Indian hand-spun warp as a key motivation for innovation. Their language of quality comparisons with reference to Indian cottons is crucial and highlights comparative quality-related learning from Indian cotton goods.

Does the material evidence corroborate this textual finding? To establish if cloth quality improved over time, I study the material evidence (surviving cotton textiles from the period) under a digital microscope and thread counter to chart the quality of these fabrics over the key decades of mechanisation. I use thread count to establish the comparative quality of the machine-made cotton fabrics vis-à-vis the handmade Indian cottons.
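The thread-count comparison amounts to simple arithmetic: sum the warp and weft threads per inch and compare machine-made fabrics against the handmade benchmark. The figures below are invented for illustration and are not the study's microscope measurements:

```python
def thread_count(warp_per_inch, weft_per_inch):
    """Total threads per square inch, a standard proxy for cloth fineness."""
    return warp_per_inch + weft_per_inch

# Hypothetical counts: a fine handmade Indian calico versus British cloth
# at two points in the mechanisation period.
indian_benchmark = thread_count(100, 90)
british_early = thread_count(50, 48)    # early mixed linen-warp 'cotton'
british_late = thread_count(96, 92)     # after decades of mechanisation

gap_early = indian_benchmark - british_early
gap_late = indian_benchmark - british_late
```

Charting such gaps decade by decade is how the narrowing of quality between machine-made and handmade cloth can be made visible.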

My findings show that early British ‘cottons’ were, in reality, mixed fabrics using linen warp and cotton weft. In addition, the results show a marked increase in cloth quality between 1746 and 1820.

Assessed together, the textual and material evidence demonstrate that mechanisation in the early British cotton industry was geared towards overcoming specific sequential quality-related bottlenecks, associated first with the ability to make the all-cotton cloth, followed by the ability to make the fine all-cotton cloth.

Imitation of benchmark Indian cottons steered the growth of the British cotton industry on a specific path of technological evolution – a trajectory that was shaped by the quest to match the quality of the handmade Indian cotton textiles.

The aftermath of sovereign debt crises

by Rui Esteves (Graduate Institute of International and Development Studies), Seán Kenny (Lund University) and Jason Lennard (National Institute of Economic and Social Research)

This paper was presented at the EHS Annual Conference 2019 in Belfast.



Financial Crisis. Available on Pixabay.


The memory of recent crises, such as the Argentinean default of 2001 or the Greek near-misses between 2010 and 2015, suggests that defaults are costly and best avoided.

This was surely one of the key arguments that led the Syriza-led Greek government to back down from its uncompromising demands for debt relief. The fear of provoking the mother of all recessions by defaulting and exiting the euro focused the minds of politicians and paved the way for the third Greek bailout in 2015.

Likewise, everyone remembers the scenes of economic and political chaos after the Argentinean default of December 2001.

But countries do not usually stop paying their debts on a whim – defaults can be forced on them by large recessions, which sap their ability to collect taxes and repay their debts. Economists call these events ‘endogenous’ because the recessions are both a cause and consequence of defaults. It is therefore unclear whether defaults have any real penalty over and above the recessions that cause them in the first place.

This has led to disagreement in the research literature between authors finding large and persistent negative effects (Arteta and Hale, 2008; Furceri and Zdzienicka, 2012; Esteves and Jalles, 2016) and others who do not find any costs (Levy Yeyati and Panizza, 2011).

In our new study, we solve this empirical challenge by using a narrative approach to identify the causes of defaults since the mid-nineteenth century. Rather than relying on complicated statistical methods, we read contemporary reports from creditor organisations and financial newspapers.

Based on these sources, we classify each default as either endogenous (caused by economic shocks) or exogenous (caused by other factors, such as contagion or wars). The narrative approach has been used extensively in other contexts, such as identifying the effects of fiscal policy (Romer and Romer, 2010; Ramey, 2011), monetary policy (Romer and Romer, 2004; Lennard, 2018) and banking crises (Jalil, 2015).

Our analysis suggests that some defaults are indeed caused by weak economies. For example, The Economist reported that ‘no commercial community has ever passed through a worse crisis than that of Uruguay’ prior to its default in 1876.

Others, however, are seemingly caused by more exogenous factors. On the Brazilian default in 1937, the Financial Times noted that there was ‘no sufficient economic justification for a suspension of existing payments’, citing the new dictator’s unwillingness to pay as the ultimate cause.

We then use the evidence from plausibly exogenous defaults, combined with state-of-the-art empirical methods, to cleanly settle the question of how defaults affect the economy. Our preliminary results show a statistically and economically significant reduction in output in the aftermath of sovereign debt crises.




Arteta, C. and Hale, G., ‘Sovereign debt crises and credit to the private sector’, Journal of International Economics, 74 (2008), pp. 53-69.

Esteves, R. and Jalles, J., ‘Like father like sons? The cost of sovereign defaults in reduced credit to the private sector’, Journal of Money, Credit and Banking, 48 (2016), pp. 1515-45.

Furceri, D. and Zdzienicka, A., ‘How costly are debt crises?’, Journal of International Money and Finance, 31 (2012), pp. 726-42.

Jalil, A., ‘A new history of banking panics in the United States, 1825-1929: Construction and implications’, American Economic Journal: Macroeconomics, 7 (2015), pp. 295-330.

Lennard, J., ‘Did monetary policy matter? Narrative evidence from the classical gold standard’, Explorations in Economic History, 68 (2018), pp. 16-36.

Levy Yeyati, E. and Panizza, U., ‘The elusive costs of sovereign defaults’, Journal of Development Economics, 94 (2011), pp. 95-105.

Ramey, V. A., ‘Identifying government spending shocks: It’s all in the timing’, Quarterly Journal of Economics, 126 (2011), pp. 1-50.

Romer, C. D. and Romer, D. H., ‘A new measure of monetary shocks: Derivation and implications’, American Economic Review, 94 (2004), pp. 1055-84.

Romer, C. D. and Romer, D. H., ‘The macroeconomic effects of tax changes: Estimates based on a new measure of fiscal shocks’, American Economic Review, 100 (2010), pp. 763-801.