by Jamus Jerome Lim (ESSEC Business School and Center for Analytical Finance)
In 2017, the bilateral trade deficit between China and the United States amounted to $375 billion, a staggering amount just shy of what the latter incurred against the rest of the world combined. And not only is this deficit large, it has been remarkably persistent: the chronic imbalance emerged in earnest in 1989, and has persisted for the better part of three decades. Some have even pointed to such imbalances as a contributing factor to the global financial crisis of 2008.
While such massive, chronic imbalances may strike one as artefacts of a modern, hyperglobalised world economy, nothing could be further from the truth. For example, recent economic history records large, persistent imbalances between the United States and Britain during the former’s earlier stages of development. Such imbalances also characterised the rise of Japan following the Second World War.
In recent research, we show that external imbalances between two major economic powers – an established leader, and a rising follower – were also observed over three earlier periods in economic history. These were the deficits borne by the Roman empire vis-à-vis pre-Gupta India circa 1CE; the borrowing by the Abbasid caliphate from Carolingian Frankia in the early ninth century; and the imbalances between West European kingdoms and the Byzantine empire that emerged around the 1300s.
Although data paucity implies that definitive claims on current account deficits are all but impossible, it is possible to rely on indirect sources of evidence to infer the likely presence of imbalances. One such source consists of trade-related documents from the time as well as pottery finds, which ascertain not just the existence but also the size of exchange relationships.
For example, using such records, we demonstrate that Baghdad – the capital of the Abbasid Caliphate – received furs and slaves from the comparative economic backwater that was the Carolingian empire, in exchange for goods such as spices, dates and olive oil. This imbalance may have lasted as long as several centuries.
A second source of evidence comes from numismatic records, especially coin hoards. Hoards of Roman gold aurei and silver denarii have been discovered, for example, in India, with coinage dating from as early as the reign of Augustus through until at least that of Marcus Aurelius, a span of well over a century and a half. Rome relied on such specie exports to fund, among other expenditures, continued military adventurism during the second century.
Our final source of evidence relies on fiscal records. Given the close relationship between external and fiscal balances – all else equal, greater government borrowing gives rise to a larger external deficit – chronic budgetary shortfalls generally give rise to rising imbalances.
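The ‘close relationship’ between external and fiscal balances invoked here is the standard national accounting identity, written below in modern notation (this is a textbook formulation, not one drawn from the historical sources):

```latex
% CA: current account balance; S_p: private saving; I: investment;
% T: tax revenue; G: government spending.
\[
  CA = (S_p - I) + (T - G)
\]
```

Holding private saving and investment fixed, a larger fiscal deficit \(G - T\) translates one-for-one into a larger external deficit, which is why chronic budgetary shortfalls can serve as indirect evidence of chronic imbalances.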
This was very much the case in Byzantium prior to its decline: around the turn of the previous millennium, the Empire’s saving and reserves were in significant surplus, lending credence to the notion that the flow of products went from East to West. The recipients of such goods? The kingdoms of Western Europe, paid for with silver.
If you look up as you walk along the streets of British towns and cities, you will see the proud and sometimes colourful traces of nineteenth century savings banks. But evidence of the importance of savings banks to working- and middle-class savers is harder to locate in economic history research.
English and Welsh savings banks operated on a ‘savings only’ model that funded interest payments to savers by purchasing government bonds and, in doing so, placed themselves outside the history of productive financialisation (Horne, 1947). This is a matter of regret, because whatever minor role trustee savings banks played in the productive economy, there is little doubt that they helped to financialise segments of society previously detached from such activities.
The research that Stuart Henderson (Ulster University) and I presented at the EHS 2019 annual conference looks in detail at the financial activity of depositors in one savings bank – the Limehouse Savings Bank, situated in the East End of London.
Savings bank ledgers are a rich source of social history data in addition to the financial, especially in socially diverse larger cities. The apostils of clerks reveal amusement at the names chosen for local clubs (for example, the Royal Order of the Jolly Cocks merits an exclamation mark) or a note as to love gone wrong (for example, a woman who returns the passbook of a lover from whom she has not heard for two years).
We also want to look beyond the aggregate deposit figures for Limehouse recorded in the government reports to discover how individuals used the bank over the period 1830-76.
As a start, we have recorded the account transactions for each of the 195 new accounts opened in 1830, from the first deposit to the last withdrawal – a total of 3,598 transactions. Using the account header information, we have also compiled the personal details of the account holder – such as gender, occupation and place of residence. We use the header profile to trace individual savers in the historical record in order to establish their age and any notable life events, such as marriage and the birth of children.
Apart from 12 accounts, which were registered to individuals who gave addresses other than East End parishes, all the 1830 savers were registered at addresses within a four-mile by one-mile strip of urban development, which also enabled us to record the residential clustering of savers.
Summary statistics enable us to establish the differences between the categories of savers across several different indicators of transaction activity.
Perhaps unsurprisingly, the men in our 1830 sample tended to make larger deposits and larger withdrawals than the women, with the difference in magnitude masked somewhat by large transactions undertaken by widows. Widows in our sample tended to have a relatively large opening balance and a higher number of withdrawals, suggesting that their accounts functioned more as a ‘draw down’ fund (Perriton and Maltby, 2015). Men also tended to make more transactions than women.
We also see a significant portion of accounts where activity was very limited. The median number of deposits across our 195 accounts was just two, suggesting that a large proportion of accounts acted as something of a (very) temporary financial warehouse. Minors and servants tended to have smaller transactions, but appear to have accumulated more – relatively speaking – than others.
But our interest in the savers goes beyond summary statistics. We know that very few accounts were managed in the way that the sponsors of savings bank legislation intended; the low median of deposits is testament to that.
The basic information in the ledger headers for each account provides a starting point for thinking about when in the life-cycle savings was more successful. Even with the compulsory registration of births, deaths and marriages after 1837 and census data after 1841, the ability to trace an individual saver is not guaranteed.
With so few data points, it is easy to lose individuals at the periphery of the professional and skilled working classes, even in a relatively well documented city like London. Yet the ability to build individual case studies of savers is important to our understanding of savings banks in terms of establishing who were the ‘successful’ savers, and also when – relative to the overall life-cycle of the saver – accounts were held.
Our research presents ten case study accounts from our larger sample to challenge the proposition in social history research on household finances that savings increased when teenage and young adult children were contributing wages to the household. We also look at the evidence for any savings in anticipation of significant life events such as marriage or childbirth. The evidence is weak on both counts.
The distribution of age at account opening among the ten case studies is varied: under 20 years old (3), 21-29 (2), 30-39 (2), 40-49 (0) and 50-59 (3). The three cases of accounts opened after the age of 50 relate to a widow and two married couples, who all had children aged 10-25. But the majority of the accounts we examined were opened by younger adults with young children and growing families.
There is no obvious case for suggesting that savings were possible because expenses could be offset against the wages of teenage or young adult children. Nor can we see any obvious anticipatory or responsive saving for life events in the case studies.
One of our sample account holders did open her account soon after being widowed, but another widow opened her account seven years after the death of her husband. Two men opened accounts when their children were very young, but not in anticipation of their arrival. The only evidence we have in the case studies for changed behaviour as a result of a life event is in the case of marriage – where all account activity ceased for one of our men in the first years of his union.
The mixed quantitative and biographical approach that we use in our study of the Limehouse Savings Bank points to a promising alternative direction for historical savings bank research – one that reconnects savings bank history with the wider history of retail banking and allows for a much richer interplay between social history and financial history.
By looking at the patterns of use by the Limehouse account holders, it is possible to see the ways in which working families and individuals interacted with a standard product and standard service offering, sometimes adding layers of complexity in order to create a different banking product, or using the accounts to budget within a short-term cycle rather than saving for a significant purchase or event.
Horne, HO (1947) A History of Savings Banks, Oxford University Press.
Perriton, L, and J Maltby (2015) ‘Working-class Households and Savings in England, 1850-1880’, Enterprise and Society 16(2): 413-45.
While it is often assumed that debtors’ prisons were illogical and ineffective, my research demonstrates that they were economically very effective for creditors, though they could ruin the lives of debtors.
The debtors’ prison is a frequent historical bogeyman, a Dickensian symptom of the illogical cruelty of the past that disappeared with enlightened capitalism. Since imprisoning someone who could not afford to pay their debts – keeping them away from work and family – seems futile, it is assumed that creditors did so to satisfy petty revenge.
But debtors’ prisons were a feature of most of English history from 1283, and though their power was curbed in 1869, there were still debtors imprisoned in the 1920s. The reason they persisted, as my research shows, is that, for creditors, they worked well.
The majority of imprisoned debtors in the eighteenth century were released relatively quickly having paid their creditors. This revelation is timely when events in America demonstrate how easily these prisons can return.
As today, most eighteenth century purchases were done on credit due to the delay in wages, limited supply of coinage, and cultural preferences for buying goods on credit. But credit was based on a range of factors including personal reputation, social rank and moral status. Informal oral contracts could frequently be made with little sense of an individual’s actual financial status, particularly if they were a gentleman or aristocrat. As contracts were not based on goods and court processes were slow, it was difficult to seize property to recover debts when creditors required money.
Creditors were able to imprison debtors without trial in this period until they paid what they owed or died. The registers of a London Debtors’ Prison, the Woodstreet Compter (1741-1815), reveal that creditors had good reasons to do so. Most of the 10,156 debtors contained in the registers left prison relatively quickly – 91% were released in under a year while almost a third were released in less than 100 days.
In addition, 84% were ‘discharged’ by their creditors, indicating that either the prisoner had paid their debts or a new contract had been agreed. Imprisonment forced debtors to find a way to pay or at least to renegotiate with creditors.
Prisoners were not the poor, but usually middle-class people owing small amounts of debt. One of the largest groups was made up of shopkeepers (about 20% of prisoners), though male and female prisoners came from across society, with gentlemen, cheesemongers, lawyers, wigmakers and professors rubbing shoulders.
Most used their time to coordinate the selling of goods to raise money, or borrowed yet more from family and friends. Many others called in their own debts by having their debtors imprisoned as well.
As prisons were relatively open, some debtors worked off their debts. John Grano, a trumpeter who worked for Handel, imprisoned in the 1720s, taught music lessons from his cell. Others sold liquor or food to fellow prisoners or continued as best they could at their trade in the prison yard. Those with a literary mind, such as Daniel Defoe, wrote their way out.
Though credit works on different terms today, that coercive imprisonment is effective at securing repayment remains true. There have been a number of US states operating what amount to debtors’ prisons in recent years where the poor, fined by the state usually for traffic violations, are held until they pay what they owe.
Attorney General Jeff Sessions even retracted an Obama-era memo in December aimed at abolishing the practice. While eighteenth century prisons worked effectively for creditors, they could ruin the lives of debtors, who were forced to sell anything they could to pay their dues and escape the unsanitary hole in which they were being kept without trial. My research shows that it is false to assume that these prisons did not work and therefore will not return.
by Federico Tadei (Department of Economic History, University of Barcelona)
Recent Brexit negotiations have led to intense debate on the type of trade agreements that should be put in place between the UK and the European Union. According to Policy Exchange’s February 2018 report, the UK should unilaterally commit to free trade. The assumption underlying this argument is that the removal of tariffs has the potential to reduce consumer prices due to greater competition and lower protection of domestic industries, which would promote innovation and increase productivity.
But the removal of tariffs and protectionist policies might not be sufficient to implement free trade fully. My research on trade from colonial Africa suggests that a legal commitment to free trade is not nearly enough.
Specifically, it appears that during the colonial period the British formally relied on free trade, encouraging competition between trading firms, while the French used their political power to establish trade monopsonies and acquire African goods at prices below those in world markets.
Yet the situation on the ground might have been quite different than what formal policies envisaged. Did the British colonies actually enjoy free trade? Did producers in Africa who lived under British rule receive higher prices than those living under the French?
To answer these questions, I measure the degree of competitiveness of trade under the two colonial powers by computing profit margins for trading companies that bought goods from the African coast and resold them in Europe.
To do so, I use data on African export prices and European import prices for a variety of agricultural commodities exported from British and French colonies between 1898 and 1939 and estimated trade costs from Africa to Europe. The rationale behind this methodology is simple: if the colonisers relied on free trade, profit margins of trading companies should be close to zero.
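The logic of this calculation can be sketched as follows; the function and figures are illustrative (and whether the margin is scaled by the import or the export price is an assumption here, not taken from the paper):

```python
def profit_margin(import_price: float, export_price: float, trade_cost: float) -> float:
    """Per-unit profit of a trading company, as a share of the European import price.

    import_price: price the commodity fetched in Europe
    export_price: price paid to producers on the African coast
    trade_cost:   estimated per-unit cost of shipping, insurance and handling
    """
    return (import_price - export_price - trade_cost) / import_price

# Under free trade, competition should push the margin towards zero:
competitive = profit_margin(import_price=100.0, export_price=70.0, trade_cost=30.0)

# A monopsonist can depress the price paid to African producers,
# leaving a positive margin even after covering trade costs:
monopsony = profit_margin(import_price=100.0, export_price=55.0, trade_cost=30.0)
```

A margin statistically indistinguishable from zero signals competition, while persistently positive margins signal monopsony power over producers.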
On average, profit margins in the British colonies were lower than in the French colonies, suggesting a higher reliance on free trade in the British Empire (see Figure 1). But if we compare the two colonial powers within the same region (West or East Africa) (Figures 2 and 3), it appears that the actual extent of free trade depended more on conditions in the colonies than on the formal policies of the colonial power.
Profit margins were statistically indistinguishable from zero in British East Africa, suggesting free trade, but they were large (10-15%) in West African colonies under both the French and the British, suggesting the presence of monopsony power.
These results suggest that, in spite of formal policies, other factors were at play in determining the actual implementation of free trade in Africa. In the Western colonies, the longer history of trade and higher level of commercialisation reduced the operational costs of trading companies. At the same time, most agricultural production was carried out by small African farmers, with little political power or ability to oppose de facto trade monopsonies.
Conversely, in East Africa, production was often controlled by European settlers who had a much larger political influence over the metropolitan government, increasing the cost of establishing trade monopsonies and allowing better implementation of colonial free trade policy.
Overall, despite formal policies, the ability of trading firms in West Africa to eliminate competition was costly in terms of economic growth. African producers received lower prices than they would have in a competitive market and consumers paid more for imported goods. Formal commitment to free trade policies might not be sufficient to reap the full benefits of free trade.
What drives a central bank’s decision to grant or refuse liquidity provision during a financial crisis? How does the central bank manage counterparty risk during such periods of high demand for liquidity, when time constraints make it hard to process all relevant information? How does a central bank juggle the provision of large amounts of liquidity with its monetary policy obligations?
All of these questions were live issues for the Bank of England during the financial crisis of 1847 just as they would be in 2007. My research uses archival data to shed light on these questions by looking at the Bank’s discount window policies in the crisis year of 1847.
The Bank had to manage the 1847 financial crisis while constrained by a provision of the Bank Charter Act of 1844 requiring it to back any expansion of its note issue with gold. The 1847 crisis is often cited as the last episode of financial distress during which the Bank rationed central bank liquidity before fully assuming its role as a lender of last resort (Bignon et al, 2012).
We find that the Bank did not engage in any kind of simple threshold rationing but rather monitored and managed its private sector asset holdings in ways similar to those central banks have developed since the financial crisis of 2007. In another echo of the recent crisis, the Bank of England also required an indemnity from the UK government in 1847, allowing the Bank to supply more liquidity than it was legally allowed. This indemnity became part of the ‘reaction function’ in future financial crises.
Most importantly, the year 1847 witnessed the introduction of a sophisticated discount ledger system at the Bank. The Bank used the ledger system to record systematically its day-to-day transactions with key counterparties. Discount loan applicants submitted bills in parcels, sometimes containing a hundred or more, which the Bank would have to analyse collectively ‘on the fly’.
The Bank would reject those it didn’t like and then discount the remainder, typically charging a single interest rate. Subsequently, the parcels were ‘unpacked’ into individual bills in the separate customer ‘with and upon’ ledgers, where they were classified under the name of their discounter and acceptor alongside several other characteristics at the bill level (drawer, place of origin, maturity, amount, etc.). By analysing these bills and their characteristics, we are better able to understand the Bank’s discount window policies.
We first find evidence that during crisis weeks the Bank was more likely to reject demands for credit from bill brokers – the money market mutual funds of their time – while favouring a small group of regular large discounters. Equally, firms associated with the commercial crisis and the corn price speculation in 1847 (many of which subsequently failed) were less likely to obtain central bank credit. The Bank was discerning about whom it lent to and the discount window was not entirely ‘frosted’ as suggested by Capie (2001).
But our findings support Capie’s main hypothesis that the decision whether to accept or reject a bill depended largely on individual bill characteristics. The Bank appeared to use a set of rules to decide on this, which it applied consistently in both crisis weeks and non-crisis weeks. Most ‘collateral characteristics’ – inter alia, the quality of the names endorsing a bill – were highly significant factors driving the Bank’s decision to reject.
This finding supports the idea that the Bank needed to be active in monitoring key counterparties in the financial system well before formal methods of supervision in the twentieth century, echoing results obtained by Flandreau and Ugolini (2011) for the later 1866 crisis.
What has been the relationship between the growth of finance and ‘neoliberalism’ in post-war Britain? My research shows that the drive towards popular capitalism and a property-owning democracy was not directly created by Thatcherism, which qualifies popular narratives about the impact of government reforms such as deregulation and privatisation.
Instead, away from the battlegrounds of mainstream economics and politics, a silent ‘neoliberal revolution’ developed deep within the financial industry before Thatcher came to power.
For example, between 1967 and 1980, the number of personalised life insurance policies directly linked to asset values increased from 81,000 to 3.5 million. This development marked a sea change in the way that society managed financial risk and uncertainty.
It had little to do with mainstream politics, and it was so powerful that by 1990 there were over 12 million of these unit-linked policies in force, showing that Thatcherite reforms merely accelerated the pace of change for developments that were already underway.
A cornerstone of traditional insurance, the objective of collective security, was superseded by the interests of individual fairness. The burden of financial risk was increasingly allocated to individual policyholders and the management of financial risk to the markets.
Together, unitised insurance policies and mathematical finance re-engineered the landscape of British capitalism by undermining the scientific foundations and appeal of traditional forms of protective insurance, such as industrial life insurance policies, annuities and defined benefit pension schemes.
Vast concentrations of personal wealth accumulated in institutional funds. The conduct and behaviour of firms became more diverse and complex as the science behind financial risk management was revolutionised. There were four key contours of change:
First, collective provision was increasingly superseded by considerations of individual equity.
Second, financial analysis and treatment of assets assumed greater importance than the management of liabilities.
Third, insurance and protection were increasingly displaced by savings and investment media.
Finally, traditional actuarial science was gradually supplanted by a paradigm of financial economics.
Financial neoliberalism – the increased role and responsibility of financial markets and financial theories in the provision of economic security – redesigned the management of uncertainty and risk in insurance by changing the relationships between experts, individuals and the regulator within an increasingly sophisticated and competitive financial environment.
Risk-taking financial behaviour became an exigency. The presumption that financial uncertainty could, and should, be managed through financial markets gained saliency. The financial world, and its future, was increasingly understood through the lenses of advanced computing, mathematics and statistics.
Financial neoliberalism dramatically changed the ways in which the financial industry and government engaged with uncertainty; and it influenced the increasingly risk-based techniques, and forms of knowledge, through which they sought to manage and control that future.
Political philosophy may be thought to have represented the main attack on collectivism and the welfare state. Yet, removed from mainstream political discourse, the journals of the actuarial profession show how financial economics gradually displaced actuarial science as the principal scientific paradigm for managing financial uncertainty.
Furthermore, data compiled from the Association of British Insurers show that the attack on principles of collectivism was already underway in the late 1960s and early 1970s as individuals increasingly acquired these personalised insurance policies.
Thus, the practice of unitising the management of risk gradually merged with a new paradigm of financial economics that scientifically legitimised investment and savings rather than mutual protection and risk pooling. In this sense, many of the Thatcher government’s reforms geared towards promoting popular capitalism and property ownership simply pushed at an open door.
In the age of sailboats, how could traders be confident that the parties with whom they were considering working on the other side of the ocean would not act opportunistically? Commercial agents overseas spared merchants time and the hazards of travel and allowed them to diversify their investments; but agents might also cheat or renege on or neglect their commitments.
My research about the merchants of Jewish origin plying the sugar trade linking Brazil, Portugal and the Netherlands demonstrates that the same merchants chose different feasible mechanisms (institutions) to curb opportunism in different types of transactions. Its main contribution is to establish a clear pattern linking the attributes of these transactions to those of the mechanisms chosen to enforce them. It also shows how these mechanisms interrelated.
Around 1600, Europe experienced rapidly growing urban populations and dependence on trade for supplies of basic products, while overseas possessions contributed to a surging output of marketable commodities, including sugar. Brazil was turned into the first large-scale plantation economy and became the world’s main sugar producer, with Amsterdam emerging as its main distribution and refining centre. Most of the Brazilian sugar trade was intermediated by merchants in Portugal, and traders of Jewish origin scattered along this trade route played a prominent role in the sugar trade. The Brazilian sugar trade required institutions with low costs in agency services and contract enforcement because it was a significantly competitive market. Its political, legal, and administrative framework raised relatively few obstacles to market entrants, and trade in a semi-luxury commodity necessitated low start-up costs.
Sources reveal that merchants of Jewish origin engaged mostly individuals of other backgrounds in transactions in which agents had little latitude, performed simple tasks over short periods, and managed small sums (see table 1). Insiders were not left out of these transactions, but the background of agents was not determinant.

The research shows that these transactions were primarily enforced by an informal mechanism that linked one’s expected income to one’s professional reputation. Bad conduct led to marginalization, while good behaviour brought more opportunities from the same and other principals. This mechanism functioned among all the traders active in these interconnected marketplaces, despite their differing backgrounds. It worked because a standardization of basic mercantile practices produced a shared understanding of how trade should be conducted. At the same time, the marketplaces’ structure, together with patterns of transportation and correspondence, increased the speed, frequency, volume and diversity of the information flow within and between these marketplaces. This information system facilitated both the detection of good and bad conduct and a relatively rapid response to news about it.
The professional reputation mechanism worked better for transactions involving small sums and fewer, simpler and shorter tasks. Misconduct in these tasks was easier to detect and expose amid an extensive and heterogeneous network; and if the agent cheated, the small sums assigned were not enough to live on while forsaking trade.
Table 1. Backgrounds of agents in complex and simple arrangements (table not reproduced)
Source: original article in the Economic History Review.
On the other hand, merchants of Jewish origin preferred to engage members of their diaspora in complex, larger and longer transactions (see table 1). A reputation mechanism within the diaspora was more effective in governing transactions that were difficult to follow. Although enforcement within the diaspora benefitted from the general information system, the diaspora’s social structure generated more information, more rapidly, about the conduct of its members. In each centre, insiders knew each other, and marriages and socialization within the group prevailed. Insiders usually had personal acquaintances, and often relatives, in other centres as well. They were conscious of their common history and fragile status. Such a social structure also provided greater economic and social incentives for honesty and diligence than the professional mechanism, making the internal mechanism preferable in transactions involving larger sums and wider latitude.
Finally, the research shows that the legal system was able to impose sanctions across wide distances and political units. Yet owing to courts’ slowness and costliness, merchants resorted to litigation only after nonjudicial mechanisms failed. Furthermore, courts could not punish inattention that did not breach legal, customary, or contractual specifications, nor could courts reward accomplishment.
Litigation had to supplement the professional mechanism because the latter’s incentives were not homogeneous across all marketplaces and diasporas. Courts also reinforced the diaspora mechanism by limiting the future income an agent expected to gain from misappropriating large sums from one or many principals. Finally, the professional mechanism supplemented the diaspora mechanism by limiting alternative agency relations with outsiders for insiders who had engaged in misconduct.
Because merchants were capable of matching transactions with the most appropriate governing mechanisms, they were able to diversify their transactions, expand the market for agents, better allocate agents to tasks, and stimulate competition among them. The resulting decrease in agency costs was critical in a market as competitive as the sugar trade. Institutional choice thus supported and reinforced – rather than caused – the expansion of exchange.
The regulation of capital has become a cornerstone of banking legislation in almost every country around the world. The last financial crisis has revived interest in the topic.
Various expert groups have identified low capitalisation in banking as a weakness of the financial system. Historically, banks in Switzerland significantly influenced the regulation of capital, leading to lower capital requirements. This allowed them to grow rapidly and contributed to the leveraging of the banking sector.
Proportional to total assets, equity capital has undergone a major change since the nineteenth century: whereas the balance sheets of US banks in 1850 consisted of 40% equity capital, the figure dropped to about 7% in 2000. Similar declines can be observed in other countries, such as Germany, Switzerland and the UK. During the last financial crisis, the equity capital to total assets ratio (capital/assets ratio) of large international banks dropped even lower, in some cases to below 3%.
The evolution of capital/assets ratios in banking is historically well documented. But there has been relatively little research on how capital was regulated over time and the role of regulators, supervisors and banks in developing the regulatory framework.
When a quarter of the banking market fails to comply with regulation
Analysis of banking legislation in Switzerland from 1934 to 1991 shows that capital requirements were repeatedly eased, both through lower required ratios and through broader definitions of what counted as capital. Banks themselves were highly involved in shaping the design of the regulation within which they operated.
A new dataset provides insights into the so-called capital coverage ratio, which compares the actual capital of banks with the capital required by regulation. By 1963, the three largest Swiss banks no longer met the statutory capital requirements. Measured in total assets, the three banks represented about a quarter of Switzerland’s banking market.
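The capital coverage ratio described above can be computed as actual capital divided by required capital, with a value below one signalling non-compliance. A minimal sketch, using purely hypothetical figures rather than numbers from the dataset:

```python
# Minimal sketch of the capital coverage ratio: actual capital divided by
# the capital required under regulation. All figures are hypothetical.

def capital_coverage_ratio(actual_capital: float, required_capital: float) -> float:
    """Return actual capital as a multiple of required capital.

    A value below 1.0 means the bank fails to meet the statutory requirement.
    """
    return actual_capital / required_capital

# Hypothetical bank: CHF 100m of eligible capital against CHF 120m required.
ratio = capital_coverage_ratio(100.0, 120.0)
print(f"coverage ratio: {ratio:.2f}")  # below 1.0, i.e. non-compliant
```

Broadening the definition of eligible capital (for example, counting undisclosed reserves) raises the numerator and hence the ratio, which is how the regulatory changes discussed below could improve coverage without any new equity being raised.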
Archival material shows that the banks entered into a series of negotiations with Switzerland’s banking supervisor, the Federal Banking Commission. The regulation of capital was changed several times between the 1960s and the 1990s.
Besides lowering the capital ratios, banks also lobbied for the extensive use of undisclosed reserves and subordinated debt as part of their regulatory capital. The regulatory changes coincide with significant improvements of the capital coverage ratio, showing that the banks’ lobbying was successful.
Regulatory changes enabled the growth of Swiss banking
Switzerland became one of the globally leading financial centres during the 1960s. Domestic banks thrived, and the balance sheets of the big Swiss banks – of which UBS and Credit Suisse still exist – grew by up to 20% per year.
Without changes in capital regulation, the balance sheets of Swiss banks would have been up to 35% smaller. Therefore, the evolution of the big Swiss banks into global financial players would have been severely hampered.
This research provides insights into regulatory and supervisory practice and shows that banking regulation has to be viewed in a historical context.
This paper was presented at the EHS Annual Conference 2019 in Belfast.
War is arguably one of the most severe events to affect stock markets. But because wars occur rarely, it is difficult to document the effects of an increase in either the threat of war or its outbreak. Going back in history can go a long way towards filling this gap.
In my research, I start by collecting a large sample of articles from the archives of The Economist to create the metrics, Threat and Act. This sample contains 79,568 articles from the period January 1885 to December 1913. To mimic investors and understand the content of news items, I rely on a textual analysis with a thorough human reading.
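The classification of articles into threat-related and act-related coverage could be sketched as a keyword-matching step. This is a deliberately simplified illustration, not the study’s actual method (which combines textual analysis with thorough human reading), and the keyword lists are assumptions:

```python
# Simplified sketch of classifying war coverage into 'threat' vs 'act'.
# The keyword lists below are illustrative assumptions, not the study's lexicon.

THREAT_TERMS = {"mobilisation", "ultimatum", "tension", "threat of war"}
ACT_TERMS = {"battle", "invasion", "declared war", "bombardment"}

def classify(article: str) -> str:
    """Label an article as 'act', 'threat', or 'other' by keyword matching."""
    text = article.lower()
    if any(term in text for term in ACT_TERMS):
        return "act"      # a conflict currently under way
    if any(term in text for term in THREAT_TERMS):
        return "threat"   # a potential conflict, not yet fought
    return "other"

print(classify("The ultimatum heightened tension between the powers."))  # threat
print(classify("Troops crossed the border and battle commenced."))       # act
```

Aggregating such labels by month would yield time series analogous to the Threat and Act metrics, though human reading is needed to catch context that keywords miss.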
First, I document that Threat is a good predictor for actual events. If The Economist writes more about a potential military conflict, the probability of that conflict actually happening in the future is higher.
The other metric, Act, only captures conflicts that are happening right now. This suggests that, in contrast to what other historians find, The Economist did not write about war excessively but chose its war news coverage appropriately.
Second, I focus on seven countries with stock listings on the Brussels Stock Exchange: Belgium, France, Germany, Italy, Russia, Spain and the Netherlands. These countries are important for Belgium, either as trading partners or because of their large number of stock listings in Brussels.
Additionally, I use information on other European and non-European countries with stock listings in Brussels to test whether war risk could be considered a European or global form of risk.
For the seven countries, I document that firms do not adjust dividend policies when there is an increase in the threat of war, but only when there is an outbreak of war.
Investors, on the other hand, sell their stocks when there is an increase in the potential and outbreak of a military conflict. When the threat is not followed by an act, stock prices recover to similar levels as before.
But when there is an outbreak of war, stock returns are negative up to 12 months after the initial increase. This shows that war risk is priced appropriately in stock markets, but that the outbreak of war is associated with higher uncertainty and welfare costs.
More interestingly, I show that there is a decrease in stock prices for other European countries, but no effect for non-European countries. This suggests that investors take the proximity of a war into account. But firms from these countries do not adjust their dividend policy when threat and act increase.
by Robin Pearson (University of Hull)

This paper was presented at the EHS Annual Conference 2019 in Belfast.
Rapid structural change resulting from system collapse seems to be a less common phenomenon in insurance than in the history of other financial services. One notable exception is the crisis that rocked Lloyd’s of London, the world’s oldest continuous insurance market, in the late twentieth century.
Hitherto, explanations for the crisis have focused on catastrophic losses and problems of internal governance. My study argues that while these factors were important, they may not have resulted in institutional collapse had it not been for multiple delusions of competence among the various parties involved.
Lloyd’s was a self-governing market that comprised investors – known as ‘names’ – who put up their personal assets to back the insurance written on their behalf, and accepted unlimited individual liability for losses. Names were organised into syndicates led by an underwriter and a managing agency. Business could only be brought to syndicates by brokers licensed by Lloyd’s. Large broking firms owned most of the managing agencies and thereby controlled the syndicates, giving rise to serious conflicts of interest.
In 1970, Lloyd’s resolved to expand capacity by lowering property qualifications for new names. As a result, the membership exploded from 6,000 to over 32,000 by 1988. Many new names were less well-heeled than their predecessors and largely ignorant of the insurance business. Despite a series of scandals involving underwriters siphoning off syndicate funds for their own personal use, the number of entrants kept rising thanks to double-digit investment returns, the tax advantages of membership, and aggressive recruiting.
While capacity was increasing, underwriters competed vigorously to write ‘long-tail’ liability and catastrophe business in the form of excess loss (XL) reinsurance. Under these contracts, the reinsurer agreed to indemnify the reinsured in the event of the latter sustaining a loss in excess of a pre-determined figure. The reinsurer in turn usually ‘retroceded’ (laid off) some of the amount reinsured to another insurer.
Many Lloyd’s underwriters went into this market despite having little experience of the business. Some syndicates doing XL reinsurance retroceded to other XL syndicates, so that instead of the risks being dispersed, they circulated around the same market, becoming increasingly opaque and concentrated in a few syndicates. This became the infamous London Market Excess of Loss (LMX) spiral.
By 1990, over one quarter of business at Lloyd’s was XL reinsurance. The spiral offered brokers, underwriters and managing agents the opportunity to earn commission and fees on every reinsurance and retrocession written.
It also enabled underwriters to arbitrage the differential between the premiums they charged for the original insurance, and the lower premiums they paid for reinsurance and retrocessions. A later inquiry also showed that those writing at the top of the spiral accepted, out of ignorance or carelessness, premium rates that were far too low for the higher layers, in the belief that these were virtually risk-free.
Unscrupulous underwriters could also offload the worst risks onto ‘dustbin’ syndicates of outsider names, while picking the best risks to be reinsured with so-called ‘baby’ syndicates of insiders. Poor information recording made it difficult to track the risks insured in the LMX spiral.
Lloyd’s membership peaked in 1988, which also marked the first of five years of unprecedented losses. ‘Long-tail’ risks on liability insurance generated many of the losses, as well as a series of storms, earthquakes, hurricanes, oil industry disasters and the Gulf war. Asbestosis and industrial pollution claims in the United States poured in, some from policies dating as far back as the 1930s.
The tsunami of claims overwhelmed Lloyd’s. Groups of names resisted calls and sued on the grounds that Lloyd’s market supervision had failed. Most political opinion moved towards accepting the need for fundamental reform, despite a fierce rearguard action from traditionalists.
In 1993, for the first time in its history, Lloyd’s permitted the entry of corporate investors with limited liability, and these soon accounted for 80% of market capacity. The number of individual names collapsed. A vehicle was created – Equitas – to reinsure all liabilities incurred prior to 1993, funded by a levy on members.
In 1996, Lloyd’s achieved a £3.1 billion settlement with its litigants. In 1998, the new Labour government announced that Lloyd’s would be independently regulated by the Financial Services Authority.
Studies of decision-making under uncertainty and the fallacies of experts are helpful in explaining behaviour at Lloyd’s revealed by the crisis, which included arrogance, elitism, greed, corruption and stubborn resistance to reform in defence of vested interests. Politically entrenched ideas about the virtues of self-regulation, and an exaggerated faith in the ability of insider experts to know what was best for the institution, also played a role.
The practice of syndicate underwriters ‘following’ the premium rate set by a recognised ‘lead’ underwriter reinforced behavioural traits such as ‘herding’, the desire to avoid being an outlier in one’s predictions; ‘cognitive dissonance’, the inability to know the limits of one’s expertise; overconfidence and optimistic bias.
The combined effect of these behaviours on XL underwriting at Lloyd’s was a heightened tendency to ignore ‘black swans’, the unknown or unimagined events that can deliver catastrophic losses. There are obvious parallels with the behaviour of investors in the market for sub-prime mortgage default risk, the collapse of which brought about the global financial crisis of 2007/08.