Squeezing blood from a stone: eighteenth century debtors’ prisons worked

by Alex Wakelam (University of Cambridge)


Wood Street Compter, 1793. Image extracted from page 384 of volume 1 of Old and New London, Illustrated, by Walter Thornbury. Available at Wikimedia Commons. 

While it is often assumed that debtors’ prisons were illogical and ineffective, my research demonstrates that they were extremely effective economically for creditors, though they could ruin the lives of debtors.

The debtors’ prison is a frequent historical bogeyman, a Dickensian symptom of the illogical cruelty of the past that disappeared with enlightened capitalism. Since imprisoning someone who could not afford to pay their debts, keeping them away from work and family, seems futile, it is assumed that creditors did so out of petty revenge.

But debtors’ prisons were a feature of English life from 1283, and though their power was curbed in 1869, debtors were still being imprisoned in the 1920s. The reason they persisted, as my research shows, is that, for creditors, they worked well.

The majority of imprisoned debtors in the eighteenth century were released relatively quickly having paid their creditors. This revelation is timely when events in America demonstrate how easily these prisons can return.

As today, most eighteenth-century purchases were made on credit, owing to delays in the payment of wages, the limited supply of coinage, and cultural preferences for buying goods on credit. But credit was extended on the basis of a range of factors, including personal reputation, social rank and moral status. Informal oral contracts could frequently be made with little sense of an individual’s actual financial status, particularly if they were a gentleman or aristocrat. As contracts were not secured on goods and court processes were slow, it was difficult to seize property to recover debts when creditors required money.

Creditors were able to imprison debtors without trial in this period until they paid what they owed or died. The registers of a London debtors’ prison, the Wood Street Compter (1741-1815), reveal that creditors had good reason to do so. Most of the 10,156 debtors contained in the registers left prison relatively quickly – 91% were released in under a year, while almost a third were released in less than 100 days.

In addition, 84% were ‘discharged’ by their creditors, indicating that either the prisoner had paid their debts or a new contract had been agreed. Imprisonment forced debtors to find a way to pay or at least to renegotiate with creditors.

Prisoners were not the poor, but usually middle-class people in small amounts of debt. One of the largest groups was made up of shopkeepers (about 20% of prisoners), though male and female prisoners came from across society, with gentlemen, cheesemongers, lawyers, wigmakers and professors rubbing shoulders.

Most used their time to coordinate the selling of goods to raise money, or borrowed yet more from family and friends. Many others called in their own debts by having their debtors imprisoned as well.

As prisons were relatively open, some debtors worked off their debts. John Grano, a trumpeter who worked for Handel, imprisoned in the 1720s, taught music lessons from his cell. Others sold liquor or food to fellow prisoners or continued as best they could at their trade in the prison yard. Those with a literary mind, such as Daniel Defoe, wrote their way out.

Though credit works on different terms today, that coercive imprisonment is effective at securing repayment remains true. There have been a number of US states operating what amount to debtors’ prisons in recent years where the poor, fined by the state usually for traffic violations, are held until they pay what they owe.

Attorney General Jeff Sessions even retracted an Obama-era memo in December aimed at abolishing the practice. While eighteenth-century prisons worked effectively for creditors, they could ruin the lives of debtors, who were forced to sell anything they could to pay their dues and escape the unsanitary hole in which they were being kept without trial. My research shows it is false to assume that, because these prisons did not work, they will not return.


Is committing to a free trade policy enough? Evidence from colonial Africa

by Federico Tadei (Department of Economic History, University of Barcelona)


French map of Africa from 1898, showing colonial claims. Originally published as “Carte Generale de l’Afrique”. Available at Wikimedia Commons.

Recent Brexit negotiations have led to intense debate on the type of trade agreements that should be put in place between the UK and the European Union. According to Policy Exchange’s February 2018 report, the UK should unilaterally commit to free trade. The assumption underlying this argument is that the removal of tariffs has the potential to reduce consumer prices due to greater competition and lower protection of domestic industries, which would promote innovation and increase productivity.

But the removal of tariffs and protectionist policies might not be sufficient to implement free trade fully. My research on trade from colonial Africa suggests that a legal commitment to free trade is not nearly enough.

Specifically, it appears that during the colonial period the British formally relied on free trade, encouraging competition between trading firms, while the French used their political power to establish trade monopsonies and acquire African goods at prices lower than in world markets.

Yet the situation on the ground might have been quite different from what formal policies envisaged. Did the British colonies actually enjoy free trade? Did producers in Africa who lived under British rule receive higher prices than those living under the French?

To answer these questions, I measure the degree of competitiveness of trade under the two colonial powers by computing profit margins for trading companies that bought goods from the African coast and resold them in Europe.

To do so, I use data on African export prices and European import prices for a variety of agricultural commodities exported from British and French colonies between 1898 and 1939 and estimated trade costs from Africa to Europe. The rationale behind this methodology is simple: if the colonisers relied on free trade, profit margins of trading companies should be close to zero.
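As a rough numerical illustration of this methodology (all prices and trade costs below are invented, not taken from the actual dataset), the profit margin of a trading company might be computed as follows:

```python
# Hypothetical illustration of the profit-margin methodology.
# None of the figures below come from the underlying dataset.

def profit_margin(african_export_price, european_import_price, trade_cost):
    """Profit per unit as a share of the European import price.

    Under free trade, margins net of trade costs should be close to zero;
    a large positive margin suggests monopsony power on the African coast.
    """
    profit = european_import_price - african_export_price - trade_cost
    return profit / european_import_price

# Invented prices (per ton) for a single commodity:
competitive = profit_margin(african_export_price=28.0,
                            european_import_price=32.0,
                            trade_cost=4.0)
monopsony = profit_margin(african_export_price=22.0,
                          european_import_price=32.0,
                          trade_cost=4.0)
print(round(competitive, 3))  # 0.0   -> consistent with free trade
print(round(monopsony, 3))    # 0.188 -> suggests monopsony power
```

A margin statistically indistinguishable from zero, as in the first case, is what competitive trade would leave once shipping and insurance costs are netted out.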

[Figures 1-3: profit margins of trading companies in the British and French colonies]

On average, profit margins in the British colonies were lower than in the French colonies, suggesting a higher reliance on free trade in the British Empire (see Figure 1). But if we compare the two colonial powers within the same region (West or East Africa) (Figures 2 and 3), it appears that the actual extent of free trade depended more on conditions in the colonies than on the formal policies of the colonial power.

Profit margins were statistically indistinguishable from zero in British East Africa, suggesting free trade, but they were large (10-15%) in West African colonies under both the French and the British, suggesting the presence of monopsony power.

These results suggest that, in spite of formal policies, other factors were at play in determining the actual implementation of free trade in Africa. In the Western colonies, the longer history of trade and higher level of commercialisation reduced the operational costs of trading companies. At the same time, most agricultural production was carried out by small African farmers, who had little political power and limited ability to oppose de facto trade monopsonies.

Conversely, in East Africa, production was often controlled by European settlers who had a much larger political influence over the metropolitan government, increasing the cost of establishing trade monopsonies and allowing better implementation of colonial free trade policy.

Overall, despite formal policies, the ability of trading firms in West Africa to eliminate competition was costly in terms of economic growth. African producers received lower prices than they would have in a competitive market and consumers paid more for imported goods. Formal commitment to free trade policies might not be sufficient to reap the full benefits of free trade.

How the Bank of England managed the financial crisis of 1847

by Kilian Rieder (University of Oxford)

New Branch Bank of England, Manchester, antique print, 1847. Available at <https://www.antiquemapsandprints.com/lancs-new-branch-bank-of-england-manchester-antique-print-1847-101568-p.asp>

What drives a central bank’s decision to grant or refuse liquidity provision during a financial crisis? How does the central bank manage counterparty risk during such periods of high demand for liquidity, when time constraints make it hard to process all relevant information? How does a central bank juggle the provision of large amounts of liquidity with its monetary policy obligations?

All of these questions were live issues for the Bank of England during the financial crisis of 1847, just as they were in 2007. My research uses archival data to shed light on these questions by looking at the Bank’s discount window policies in the crisis year of 1847.

The Bank had to manage the 1847 financial crisis while constrained by a legal monetary policy provision of the Bank Charter Act of 1844, which required it to back any expansion of its note issue with gold. The 1847 crisis is often cited as the last episode of financial distress during which the Bank rationed central bank liquidity before fully assuming its role as a lender of last resort (Bignon et al, 2012).

We find that the Bank did not engage in any kind of simple threshold rationing, but rather monitored and managed its private sector asset holdings in ways similar to those central banks have developed since the financial crisis of 2007. In another echo of the recent crisis, the Bank of England also required an indemnity from the UK government in 1847, which allowed the Bank to supply more liquidity than it was legally permitted. This indemnity became part of the ‘reaction function’ in future financial crises.

Most importantly, the year 1847 witnessed the introduction of a sophisticated discount ledger system at the Bank. The Bank used the ledger system to record systematically its day-to-day transactions with key counterparties. Discount loan applicants submitted bills in parcels, sometimes containing a hundred or more, which the Bank would have to analyse collectively ‘on the fly’.

The Bank would reject those it didn’t like and then discount the remainder, typically charging a single interest rate. Subsequently, the parcels were ‘unpacked’ into individual bills in the separate customer ‘with and upon’ ledgers, where they were classified under the name of their discounter and acceptor alongside several other characteristics at the bill level (drawer, place of origin, maturity, amount, etc.). By analysing these bills and their characteristics, we are better able to understand the Bank’s discount window policies.

We first find evidence that during crisis weeks the Bank was more likely to reject demands for credit from bill brokers – the money market mutual funds of their time – while favouring a small group of regular large discounters. Equally, firms associated with the commercial crisis and the corn price speculation in 1847 (many of which subsequently failed) were less likely to obtain central bank credit. The Bank was discerning about whom it lent to and the discount window was not entirely ‘frosted’ as suggested by Capie (2001).

But our findings support Capie’s main hypothesis that the decision whether to accept or reject a bill depended largely on individual bill characteristics. The Bank appeared to use a set of rules to decide on this, which it applied consistently in both crisis weeks and non-crisis weeks. Most ‘collateral characteristics’ – inter alia, the quality of the names endorsing a bill – were highly significant factors driving the Bank’s decision to reject.

This finding supports the idea that the Bank needed to be active in monitoring key counterparties in the financial system well before formal methods of supervision in the twentieth century, echoing results obtained by Flandreau and Ugolini (2011) for the later 1866 crisis.


Financial neoliberalism: British insurance and the revolution in the management of uncertainty

by Thomas Gould (University of Bristol)


Margaret Thatcher on a visit to Salford, 1982. Available at Wikimedia Commons.

What has been the relationship between the growth of finance and ‘neoliberalism’ in post-war Britain? My research shows that the drive towards popular capitalism and a property-owning democracy was not directly created by Thatcherism, which qualifies popular narratives about the impact of government reforms such as deregulation and privatisation.

Instead, away from the battlegrounds of mainstream economics and politics, a silent ‘neoliberal revolution’ developed deep within the financial industry before Thatcher came to power.

For example, between 1967 and 1980, the number of personalised life insurance policies directly linked to asset values increased from 81,000 to 3.5 million. This development marked a sea change in the way that society managed financial risk and uncertainty.

It had little to do with mainstream politics, and it was so powerful that by 1990 there were over 12 million of these unit-linked policies in force, showing that Thatcherite reforms merely accelerated the pace of change for developments that were already underway.

A cornerstone of traditional insurance, the objective of collective security, was superseded by the interests of individual fairness. The burden of financial risk was increasingly allocated to individual policyholders and the management of financial risk to the markets.

Together, unitised insurance policies and mathematical finance re-engineered the landscape of British capitalism by undermining the scientific foundations and appeal of traditional forms of protective insurance, such as industrial life insurance policies, annuities and defined benefit pension schemes.

Vast concentrations of personal wealth accumulated in institutional funds. The conduct and behaviour of firms became more diverse and complex as the science behind financial risk management was revolutionised. There were four key contours of change:

  • First, collective provision was increasingly superseded by considerations of individual equity.
  • Second, financial analysis and treatment of assets assumed greater importance than the management of liabilities.
  • Third, insurance and protection were increasingly displaced by savings and investment media.
  • Finally, traditional actuarial science was gradually displaced by a paradigm of financial economics.


Financial neoliberalism – the increased role and responsibility of financial markets and financial theories in the provision of economic security – redesigned the management of uncertainty and risk in insurance by changing the relationships between experts, individuals and the regulator within an increasingly sophisticated and competitive financial environment.

Risk-taking financial behaviour became an exigency. The presumption that financial uncertainty could, and should, be managed through financial markets gained saliency. The financial world, and its future, was increasingly understood through the lenses of advanced computing, mathematics and statistics.

Financial neoliberalism dramatically changed the ways in which the financial industry and government engaged with uncertainty; and it influenced the increasingly risk-based techniques, and forms of knowledge, through which they sought to manage and control that future.

Political philosophy may be thought to have represented the main attack on collectivism and the welfare state. Yet, removed from mainstream political discourse, the journals of the actuarial profession show how financial economics gradually displaced actuarial science as the principal scientific paradigm for managing financial uncertainty.

Furthermore, data compiled from the Association of British Insurers show that the attack on principles of collectivism was already underway in the late 1960s and early 1970s, as individuals increasingly acquired these personalised insurance policies.

Thus, the practice of unitising the management of risk gradually merged with a new paradigm of financial economics that scientifically legitimised investment and savings rather than mutual protection and risk pooling. In this sense, many of the Thatcher government’s reforms geared towards promoting popular capitalism and property ownership simply pushed at an open door.

Institutional choice in the governance of the early Atlantic sugar trade: diasporas, markets and courts

by Daniel Strum (University of São Paulo)

This article is published in The Economic History Review and is available to EHS members.


Figure 1. Cartographic chart of the Atlantic Ocean (c. 1600). Source: Biblioteca Nazionale Centrale di Firenze, Florence, Italy. Port. 27.  By kind permission of the Ministero per i Beni e le Attivitá Culturali della Repubblica Italiana.
Reproduction of this image by any means is strictly prohibited.

In the age of sailboats, how could traders be confident that the parties with whom they were considering working on the other side of the ocean would not act opportunistically? Commercial agents overseas spared merchants time and the hazards of travel and allowed them to diversify their investments; but agents might also cheat or renege on or neglect their commitments.

My research about the merchants of Jewish origin plying the sugar trade linking Brazil, Portugal and the Netherlands demonstrates that the same merchants chose different feasible mechanisms (institutions) to curb opportunism in different types of transactions. Its main contribution is to establish a clear pattern linking the attributes of these transactions to those of the mechanisms chosen to enforce them. It also shows how these mechanisms interrelated.

Around 1600, Europe experienced rapidly growing urban populations and dependence on trade for supplies of basic products, while overseas possessions contributed to a surging output of marketable commodities, including sugar. Brazil was turned into the first large-scale plantation economy and became the world’s main sugar producer, with Amsterdam emerging as its main distribution and refining centre. Most of the Brazilian sugar trade was intermediated by merchants in Portugal, and traders of Jewish origin scattered along this route played a prominent role in it.

The Brazilian sugar trade required institutions with low costs in agency services and contract enforcement because it was a significantly competitive market. Its political, legal, and administrative framework raised relatively few obstacles to market entrants, and trade in a semi-luxury commodity necessitated low start-up costs.

Sources reveal that merchants of Jewish origin engaged mostly individuals of other backgrounds in transactions in which agents had little latitude, performed simple tasks over short periods, and managed small sums (see table 1). Insiders were not left out of these transactions, but the background of agents was not determinant.

The research shows that these transactions were primarily enforced by an informal mechanism that linked one’s expected income to one’s professional reputation. Bad conduct led to marginalization, while good behaviour opened up further opportunities from the same and other principals. This mechanism functioned among all traders active in these interconnected marketplaces, despite their differing backgrounds.

The professional reputation mechanism worked because a standardization of basic mercantile practices produced a shared understanding of how trade should be conducted. At the same time, the structure of the marketplaces, together with patterns of transportation and correspondence, increased the speed, frequency, volume, and diversity of the information flow within and between them. This information system facilitated the detection of both good and bad conduct and a relatively rapid response to news of it.


Figure 2. Sugar crate being weighed at the Palace Square in Lisbon. Source: Dirk Stoop – Terreiro do Paço no século XVII, 1662. Painting. Museu da Cidade, Lisboa, Portugal. MC.PIN.261. © Museu da Cidade – Câmara Municipal de Lisboa.

The professional reputation mechanism worked better on transactions involving small sums and fewer, simpler, and shorter tasks. Misconduct in these tasks was easier to detect and expose amid an extensive and heterogeneous network; and if the agent cheated, the small sums assigned were not enough to live on while forsaking trade.


Table 1. Backgrounds of agents in complex and simple arrangements

Type of transaction   Outsiders   Probable outsiders   Insiders   Probable insiders   Relatives
Complex                    2.6%                 4.9%      69.9%                2.1%       20.6%
Simple                    20.0%                70.0%         0%               10.0%          0%

Source: original article in the Economic History Review.


On the other hand, merchants of Jewish origin preferred to engage members of their diaspora in complex, larger, and longer transactions (see table 1). A reputation mechanism within the diaspora was more effective in governing transactions that were difficult to follow. Although enforcement within the diaspora benefitted from the general information system, the diaspora’s social structure generated more information, more rapidly, about the conduct of its members. In each centre, insiders knew each other, and marriages and socialization within the group prevailed. Insiders usually had personal acquaintances, and often relatives, in other centres as well. They were conscious of their common history and fragile status. This social structure also provided greater economic and social incentives for honesty and diligence than the professional mechanism, making the internal mechanism preferable in transactions involving larger sums and wider latitude.

Finally, the research shows that the legal system was able to impose sanctions across wide distances and political units. Yet owing to courts’ slowness and costliness, merchants resorted to litigation only after nonjudicial mechanisms failed. Furthermore, courts could not punish inattention that did not breach legal, customary, or contractual specifications, nor could courts reward accomplishment.

Litigation supplemented the professional mechanism because the latter’s incentives were not homogeneous across all marketplaces and diasporas. Courts also reinforced the diaspora mechanism by limiting the future income an agent could expect to gain from misappropriating large sums from one or many principals. Finally, the professional mechanism supplemented the diaspora mechanism by limiting alternative agency relations with outsiders for insiders who had engaged in misconduct.

Because merchants were capable of matching transactions with the most appropriate governing mechanisms, they were able to diversify their transactions, expand the market for agents, better allocate agents to tasks, and stimulate competition among them. The resulting decrease in agency costs was critical in a significantly competitive market such as the sugar trade. Institutional choice thus supported and reinforced—rather than caused—expansion of exchange.

How Swiss banks influenced capital regulation

by Simon Amrein (European University Institute)




The regulation of capital has become a cornerstone of banking legislation in almost every country around the world. The last financial crisis has revived interest in the topic.

Various expert groups have identified low capitalisation in banking as a weakness of the financial system. Historically, banks in Switzerland significantly influenced the regulation of capital, leading to lower capital requirements. This allowed them to grow rapidly and contributed to the leveraging of the banking sector.

As a proportion of total assets, equity capital has undergone a major change since the nineteenth century: whereas the balance sheets of US banks in 1850 consisted of 40% equity capital, the figure had dropped to about 7% by 2000. Similar declines can be observed in other countries, such as Germany, Switzerland and the UK. During the last financial crisis, the equity capital to total assets ratio (capital/assets ratio) of large international banks dropped even lower, in some cases to below 3%.

The evolution of capital/assets ratios in banking is historically well documented. But there has been relatively little research on how capital was regulated over time and the role of regulators, supervisors and banks in developing the regulatory framework.


When a quarter of the banking market fails to comply with regulation

Analysis of banking legislation in Switzerland from 1934 to 1991 shows that capital regulation was eased through lower capital requirements and broader definitions of capital. Banks themselves were highly involved in shaping the design of the regulation within which they operated.

A new dataset provides insights into the so-called capital coverage ratio, which compares the actual capital of banks with the capital required by regulation. By 1963, the three largest Swiss banks no longer met the statutory capital requirements. Measured by total assets, these three banks represented about a quarter of Switzerland’s banking market.

Archival material shows that the banks entered into a series of negotiations with Switzerland’s banking supervisor, the Federal Banking Commission. The regulation of capital was changed several times between the 1960s and the 1990s.

Besides lowering the capital ratios, banks also lobbied for the extensive use of undisclosed reserves and subordinated debt as part of their regulatory capital. The regulatory changes coincided with significant improvements in the capital coverage ratio, showing that the banks’ lobbying was successful.
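A minimal numerical sketch of how a broader definition of capital improves the capital coverage ratio (all figures below are invented for illustration, not drawn from the dataset):

```python
# Invented figures to illustrate the capital coverage ratio
# (actual eligible capital divided by regulatory required capital).

def coverage_ratio(eligible_capital, required_capital):
    """Below 1.0 means the bank fails to meet the statutory requirement."""
    return eligible_capital / required_capital

share_capital = 600.0   # disclosed equity, in millions
other_items = 150.0     # undisclosed reserves + subordinated debt
required = 700.0        # capital required under the regulation

# Narrow capital definition: only disclosed equity counts.
narrow = coverage_ratio(share_capital, required)
# Broader definition, as lobbied for: the other items count too.
broad = coverage_ratio(share_capital + other_items, required)

print(round(narrow, 2))  # 0.86 -> non-compliant
print(round(broad, 2))   # 1.07 -> compliant, without raising new equity
```

The point of the sketch is that a bank can move from non-compliance to compliance purely through a change in what counts as capital, with no new equity issued.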


Regulatory changes enabled the growth of Swiss banking

Switzerland became one of the world’s leading financial centres during the 1960s. Domestic banks thrived, and the balance sheets of the big Swiss banks – of which UBS and Credit Suisse still exist – grew by up to 20% per year.

Without changes in capital regulation, the balance sheets of Swiss banks would have been up to 35% smaller. Therefore, the evolution of the big Swiss banks into global financial players would have been severely hampered.

This research provides insights into regulatory and supervisory practice and shows that banking regulation has to be viewed in a historical context.

Is bad news ever good for stocks? The importance of time-varying war risk and stock returns

by Gertjan Verdickt (University of Antwerp)

This paper was presented at the EHS Annual Conference 2019 in Belfast.


Brussels Stock Exchange Building (Bourse or Beurs). Available at Wikimedia Commons.

War is arguably one of the most severe events to affect stock markets. Because wars occur rarely, it is difficult to document the effect of an increase in the threat, or the act, of war. Going back into history can go a long way towards filling this gap.

In my research, I start by collecting a large sample of articles from the archives of The Economist to create two metrics, Threat and Act. The sample contains 79,568 articles from January 1885 to December 1913. To mimic investors and understand the content of news items, I rely on textual analysis combined with thorough human reading.
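A highly simplified sketch of how articles might be tagged for the two metrics (the keyword lists below are invented for illustration; the study itself pairs textual analysis with careful human reading):

```python
# Toy keyword tagger for war-news articles. The keyword lists are
# invented; they do not reproduce the paper's actual classification.

THREAT_WORDS = {"ultimatum", "mobilisation", "tension"}
ACT_WORDS = {"battle", "invasion", "bombardment"}

def classify(article_text):
    """Return the set of metrics ('Threat', 'Act') an article feeds into."""
    text = article_text.lower()
    labels = set()
    if any(word in text for word in THREAT_WORDS):
        labels.add("Threat")
    if any(word in text for word in ACT_WORDS):
        labels.add("Act")
    return labels

def monthly_share(articles, label):
    """Share of a month's articles tagged with the given label."""
    return sum(label in classify(a) for a in articles) / len(articles)

sample = [
    "The ultimatum heightened tension between the two powers.",
    "A battle followed the invasion across the border.",
    "Harvest reports were favourable this season.",
]
print(monthly_share(sample, "Threat"))  # one article in three
```

Aggregating such monthly shares over time yields series that can be tested as predictors of actual conflict, as the next paragraphs describe.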

First, I document that Threat is a good predictor for actual events. If The Economist writes more about a potential military conflict, the probability of that conflict actually happening in the future is higher.

The other metric, Act, only captures conflicts that are happening at the time of writing. This suggests that, in contrast to what other historians have found, The Economist did not write about war excessively but chose its war news coverage appropriately.


Second, I focus on seven countries with stock listings on the Brussels Stock Exchange: Belgium, France, Germany, Italy, Russia, Spain and the Netherlands. These countries were important for Belgium, either through imports and exports or through a large number of stock listings in Brussels.

Additionally, I use information on other European and non-European countries with stock listings in Brussels to test whether war risk could be considered a European or global form of risk.

For the seven countries, I document that firms do not adjust dividend policies when there is an increase in the threat of war, but only when there is an outbreak of war.

Investors, on the other hand, sell their stocks when there is an increase in either the threat or the outbreak of a military conflict. When a threat is not followed by an act, stock prices recover to levels similar to those before.

But when there is an outbreak of war, stock returns are negative up to 12 months after the initial increase. This shows that war risk is priced appropriately in stock markets, but that the outbreak of war is associated with higher uncertainty and welfare costs.

More interestingly, I show that there is a decrease in stock prices for other European countries, but no effect for non-European countries. This suggests that investors attach importance to proximity to a war. But firms from these countries do not adjust their dividend policy when Threat and Act increase.

Delusions of competence: the near-death of Lloyd’s of London 1980-2002

by Robin Pearson (University of Hull) 
This paper was presented at the EHS Annual Conference 2019 in Belfast. 

Rapid structural change resulting from system collapse seems to be a less common phenomenon in insurance than in the history of other financial services. One notable exception is the crisis that rocked Lloyd’s of London, the world’s oldest continuous insurance market, in the late twentieth century.

Hitherto, explanations for the crisis have focused on catastrophic losses and problems of internal governance. My study argues that while these factors were important, they may not have resulted in institutional collapse had it not been for multiple delusions of competence among the various parties involved. 

Lloyd’s was a self-governing market that comprised investors – known as names – who put up their personal assets to back the insurance written on their behalf, and accepted unlimited individual liability for losses. Names were organised into syndicates led by an underwriter and a managing agency. Business could only be brought to syndicates by brokers licensed by Lloyd’s. Large broking firms owned most of the managing agencies and thereby controlled the syndicates, giving rise to serious conflicts of interest.  

In 1970, Lloyd’s resolved to expand capacity by lowering property qualifications for new names. As a result, the membership exploded from 6,000 to over 32,000 by 1988. Many new names were less well-heeled than their predecessors and largely ignorant of the insurance business. Despite a series of scandals involving underwriters siphoning off syndicate funds for their own personal use, the number of entrants kept rising thanks to double-digit investment returns, the tax advantages of membership, and aggressive recruiting.

While capacity was increasing, underwriters competed vigorously to write long-tail liability and catastrophe business in the form of excess loss (XL) reinsurance. Under these contracts, the reinsurer agreed to indemnify the reinsured in the event of the latter sustaining a loss in excess of a pre-determined figure. The reinsurer in turn usually retroceded (laid off) some of the amount reinsured to another insurer. 
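The payout of a single excess-of-loss layer follows a simple formula; the sketch below uses invented retention and limit figures (real LMX contracts stacked many such layers and were far more complex):

```python
# Excess-of-loss (XL) recovery for a single layer. The retention and
# limit are hypothetical; real LMX programmes layered many contracts.

def xl_recovery(loss, retention, limit):
    """Reinsurer pays the slice of the loss above the retention,
    capped at the layer's limit."""
    return max(0.0, min(loss, retention + limit) - retention)

# A '5 excess of 10' layer pays losses between 10m and 15m:
print(xl_recovery(loss=8.0, retention=10.0, limit=5.0))   # 0.0  (below retention)
print(xl_recovery(loss=12.0, retention=10.0, limit=5.0))  # 2.0  (partial recovery)
print(xl_recovery(loss=40.0, retention=10.0, limit=5.0))  # 5.0  (capped at limit)
```

When such layers are retroceded among syndicates in the same market, as the next paragraph describes, the same loss can trigger recoveries repeatedly without the underlying risk ever leaving the market.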

Many Lloyd’s underwriters went into this market despite having little experience of the business. Some syndicates doing XL reinsurance retroceded to other XL syndicates, so that instead of the risks being dispersed, they circulated around the same market, becoming increasingly opaque and concentrated in a few syndicates. This became the infamous London Market Excess of Loss (LMX) spiral. 

By 1990, over one quarter of business at Lloyd’s was XL reinsurance. The spiral offered brokers, underwriters and managing agents the opportunity to earn commission and fees on every reinsurance and retrocession written. 

It also enabled underwriters to arbitrage the differential between the premiums they charged for the original insurance and the lower premiums they paid for reinsurance and retrocessions. A later inquiry also showed that those writing at the top of the spiral accepted, out of ignorance or carelessness, premium rates that were far too low for the higher layers, in the belief that these were virtually risk-free.

Unscrupulous underwriters could also offload the worst risks onto ‘dustbin’ syndicates of outsider names, while picking the best risks to be reinsured with so-called ‘baby’ syndicates of insiders. Poor information recording made it difficult to track the risks insured in the LMX spiral. 

Lloyd’s membership peaked in 1988, which also marked the first of five years of unprecedented losses. Long-tail risks on liability insurance generated many of the losses, as did a series of storms, earthquakes, hurricanes, oil industry disasters and the Gulf war. Asbestosis and industrial pollution claims in the United States poured in, some from policies dating as far back as the 1930s.

The tsunami of claims overwhelmed Lloyd’s. Groups of names resisted calls and sued on the grounds that Lloyd’s market supervision had failed. Most political opinion moved towards accepting the need for fundamental reform, despite a fierce rearguard action from traditionalists.  

In 1993, for the first time in its history, Lloyd’s permitted the entry of corporate investors with limited liability, and these soon accounted for 80% of market capacity. The number of individual names collapsed. A vehicle was created – Equitas – to reinsure all liabilities incurred prior to 1993, funded by a levy on members. 

In 1996, Lloyd’s achieved a £3.1 billion settlement with its litigants. In 1998, the new Labour government announced that Lloyd’s would be independently regulated by the Financial Services Authority.

Studies of decision-making under uncertainty and the fallacies of experts are helpful in explaining the behaviour at Lloyd’s revealed by the crisis, which included arrogance, elitism, greed, corruption and stubborn resistance to reform in defence of vested interests. Politically entrenched ideas about the virtues of self-regulation, and an exaggerated faith in the ability of insider experts to know what was best for the institution, also played a role.

The practice of syndicate underwriters ‘following’ the premium rate set by a recognised ‘lead’ underwriter reinforced behavioural traits such as herding, the desire to avoid being an outlier in one’s predictions; ‘cognitive dissonance’, the inability to know the limits of one’s expertise; overconfidence and optimistic bias. 

The combined effect of these behaviours on XL underwriting at Lloyd’s was a heightened tendency to ignore ‘black swans’, the unknown or unimagined events that can deliver catastrophic losses. There are obvious parallels with the behaviour of investors in the market for sub-prime mortgage default risk, the collapse of which brought about the global financial crisis of 2007/08. 


The aftermath of sovereign debt crises

by Rui Esteves (Graduate Institute of International and Development Studies), Seán Kenny (Lund University) and Jason Lennard (National Institute of Economic and Social Research)

This paper was presented at the EHS Annual Conference 2019 in Belfast.



Financial Crisis. Available on Pixabay.


The memory of recent crises, such as the Argentinean default of 2001 or the Greek near-misses between 2010 and 2015, suggests that defaults are costly and best avoided at all costs.

This was surely one of the key arguments that led the Syriza-led Greek government to back down from its uncompromising demands for debt relief. The fear of provoking the mother of all recessions by defaulting and exiting the euro focused the minds of politicians and paved the way for the third Greek bailout in 2015.

Likewise, everyone remembers the scenes of economic and political chaos after the Argentinean default of December 2001.

But countries do not usually stop paying their debts on a whim – defaults can be forced on them by large recessions, which sap their ability to collect taxes and repay their debts. Economists call these events ‘endogenous’ because the recessions are both a cause and consequence of defaults. It is therefore unclear whether defaults have any real penalty over and above the recessions that cause them in the first place.

This has led to disagreement in the research literature between authors finding large and persistent negative effects (Arteta and Hale, 2008; Furceri and Zdzienicka, 2012; Esteves and Jalles, 2016) and others who do not find any costs (Levy Yeyati and Panizza, 2011).

In our new study, we solve this empirical challenge by using a narrative approach to identify the causes of defaults since the mid-nineteenth century. Rather than relying on complicated statistical methods, we read contemporary reports from creditor organisations and financial newspapers.

Based on these sources, we classify each default as either endogenous (caused by economic shocks) or exogenous (caused by other factors, such as contagion or wars). The narrative approach has been used extensively in other contexts, such as identifying the effects of fiscal policy (Romer and Romer, 2010; Ramey, 2011), monetary policy (Romer and Romer, 2004; Lennard 2018) and banking crises (Jalil, 2015).

Our analysis suggests that some defaults are indeed caused by weak economies. For example, The Economist reported that ‘no commercial community has ever passed through a worse crisis than that of Uruguay’ prior to its default in 1876.

Others, however, are seemingly caused by more exogenous factors. On the Brazilian default in 1937, the Financial Times noted that there was ‘no sufficient economic justification for a suspension of existing payments’, citing the new dictator’s unwillingness to pay as the ultimate cause.

We then use the evidence from plausibly exogenous defaults and state-of-the-art empirical methods to settle cleanly the question of how defaults affect the economy. Our preliminary results show that there is a statistically and economically significant reduction in output in the aftermath of sovereign debt crises.




Arteta, C. and Hale, G., ‘Sovereign debt crises and credit to the private sector’, Journal of International Economics, 74 (2008), pp. 53-69.

Esteves, R. and Jalles, J., ‘Like father like sons? The cost of sovereign defaults in reduced credit to the private sector’, Journal of Money, Credit and Banking, 48 (2016), pp. 1515-45.

Furceri, D. and Zdzienicka, A., ‘How costly are debt crises?’, Journal of International Money and Finance, 31 (2012), pp. 726-42.

Jalil, A., ‘A new history of banking panics in the United States, 1825–1929: Construction and implications’, American Economic Journal: Macroeconomics, 7 (2015), pp. 295-330.

Lennard, J., ‘Did monetary policy matter? Narrative evidence from the classical gold standard’, Explorations in Economic History, 68 (2018), pp. 16-36.

Levy Yeyati, E. and Panizza, U., ‘The elusive costs of sovereign defaults’, Journal of Development Economics, 94 (2011), pp. 95-105.

Ramey, V. A., ‘Identifying government spending shocks: It’s all in the timing’, Quarterly Journal of Economics, 126 (2011), pp. 1-50.

Romer, C. D. and Romer, D. H., ‘A new measure of monetary shocks: Derivation and implications’, American Economic Review, 94 (2004), pp. 1055-84.

Romer, C. D. and Romer, D. H., ‘The macroeconomic effects of tax changes: Estimates based on a new measure of fiscal shocks’, American Economic Review, 100 (2010), pp. 763-801.

Unequal pay in Victorian Britain

by Chris Minns (LSE) and Emma Griffin (UEA)


Thames embankment, London, England. Available at Wikimedia Commons.

Women made a vital contribution to the labour force in Victorian Britain. Census evidence suggests that close to 40% of women in Britain were employed in the second half of the nineteenth century, which is roughly twice the rate found for the United States at the same time. This implies that the labour market earnings of women made a substantial contribution to the fortunes of many working-class households. 

But how did the industrial economy of mid-Victorian Britain treat women who sought work? It is well-known that women experienced large-scale occupational segregation with women excluded entirely from many professions and industries. Less well known, however, is how the pay of women evolved after 1850, particularly in relation to their male counterparts.  

In a new study, to be presented at the Economic History Society’s 2019 annual conference, we draw on the reports of wages and salaries paid between 1850 and 1890 prepared by the Board of Trade. In total these sources contain over 9,000 wage quotations for male workers in industry, and well over 1,000 similar quotations for female workers.

We use this information to compute the gender pay gap in Britain between 1850 and 1890, and to examine the structure of the disadvantages experienced by women at this time.

Overall, we find that between 1850 and 1890, women in British industry earned a little more than 40% of male earnings in industry. The gender pay gap closes by only a few percentage points over the period we study, and it appears to be at least as large in the second half of the nineteenth century as that found by others for the first part of the Industrial Revolution between 1780 and 1850.
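The relationship between an earnings ratio and a pay gap can be made explicit with a small sketch. The wage figures below are invented for illustration and are not drawn from the Board of Trade returns:

```python
# Hypothetical weekly wage quotations (shillings), chosen so that the
# female/male earnings ratio comes out near the ~40% reported for 1850-90.
male_wages = [24.0, 30.0, 28.0]
female_wages = [10.0, 12.5, 11.0]

mean_male = sum(male_wages) / len(male_wages)
mean_female = sum(female_wages) / len(female_wages)

ratio = mean_female / mean_male   # female earnings as a share of male earnings
pay_gap = 1 - ratio               # the gender pay gap is the complement of the ratio
print(f"earnings ratio: {ratio:.0%}, pay gap: {pay_gap:.0%}")
```

So an earnings ratio of just over 40% corresponds to a pay gap of just under 60%, which is why a matched-quotation gap of 51% counts as only modestly smaller.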

While part of the explanation for the large pay gap is the exclusion of women from the best paid industries and trades, our preliminary work suggests that differences in the composition of employment between men and women can only account for a small fraction of the gender gap. 

When comparing matched wage quotations for men and women in the same location, industry, occupation and year, the gender pay gap is only modestly smaller, at 51%. Consistent with this finding, we do not find evidence of substantial gender pay gap differences between regions or industries that were major employers of women. 

What are the main implications of these findings? 

First, it appears that the dynamics of gender pay in late nineteenth century Britain were strikingly different from those in the United States. The gender pay gap in UK industry at the end of the nineteenth century was about 15 percentage points larger than in American manufacturing, which saw a more noticeable narrowing over the century. These transatlantic differences in the relative price of women’s labour may have implications for the patterns of industrial development seen in Britain versus the United States.

Second, the fairly uniform gender pay gap across British industry, despite notable differences in skill and strength requirements between occupations, speaks to a pattern of broad-based labour market segmentation that worked to suppress women’s wages well before the spread of the internal labour markets and long-term contracts thought to formalise different pay structures for men and women.