Welcome to The Long Run

On behalf of the Economic History Society (EHS), it is a pleasure to welcome you to The Long Run, the EHS blog.

This blog aims to encourage discussion of economic and social history, broadly defined. We live in a time of major social and economic change, and research in the social sciences increasingly shows that a historical, long-term approach to current issues is key to understanding our times.

We welcome any contribution or suggestion – please contact us at ehs.thelongrun@gmail.com

 

Lessons for the euro from Italian and German monetary unification in the nineteenth century

by Roger Vicquéry (London School of Economics)

Special euro coin issued in 2012 to celebrate the 150th anniversary of the monetary unification of Italy. From Numismatica Pacchiega, available at <https://www.numismaticapacchiega.it/5-euro-annivesario-unificazione/>

Is the euro area sustainable in its current membership form? My research provides new lessons from past examples of monetary integration, looking at the monetary unification of Italy and Germany in the second half of the nineteenth century.

 

Currency areas’ optimal membership has recently been at the forefront of the policy debate, as the original choice to let peripheral countries join the euro has been widely blamed for the common currency’s existential crisis. Academic work on ‘optimum currency areas’ (OCA) traditionally warned against the risk of adopting a ‘one size fits all’ monetary policy for regions with differing business cycles.

Krugman (1993) even argued that monetary unification might increase its own costs over time, as regions are encouraged to specialise and thus become more different from one another. But those concerns were dismissed by Frankel and Rose’s (1998) influential ‘OCA endogeneity’ theory: once regions with ex-ante diverging paths join a common currency, their business cycles will progressively synchronise ex-post.

My findings question the consensus view in favour of ‘OCA endogeneity’ and raise the issue of the adverse effects of monetary integration on regional inequality. I argue that the Italian monetary unification played a role in the emergence of the regional divide between Italy’s Northern and Southern regions by the turn of the twentieth century.

I find that pre-unification Italian regions experienced largely asymmetric shocks, pointing to high economic costs stemming from the 1862 Italian monetary unification. While money markets in Northern Italy were synchronised with the core of the European monetary system, Southern Italian regions tended to move together with the European periphery.
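To make the notion of shock synchronisation concrete, here is a minimal Python sketch of one standard proxy: the correlation of short-term interest rate changes across regional money markets. The city names and figures are invented purely for illustration; this shows the general idea, not the paper’s exact specification.

```python
import pandas as pd

# Hypothetical monthly short-term rates for three money markets
# (made-up numbers for illustration; the paper's data are archival).
rates = pd.DataFrame({
    "Milan":  [4.0, 4.1, 4.3, 4.2, 4.5, 4.4],
    "Naples": [5.0, 4.8, 5.1, 5.3, 5.0, 5.2],
    "Paris":  [3.9, 4.0, 4.2, 4.1, 4.4, 4.3],
})

# Proxy "shocks" by month-on-month rate changes, and synchronisation
# by the pairwise correlation of those changes.
shocks = rates.diff().dropna()
print(shocks.corr())
# With these invented numbers, Milan tracks Paris closely while
# Naples does not - the qualitative pattern described above.
```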

The Italian unification is an exception in this respect, as I show that other major monetary arrangements in this period, particularly the German monetary union but also the Latin Monetary Convention and the Gold Standard, occurred among regions experiencing high shock synchronisation.

Contrary to what ‘OCA endogeneity’ would imply, shock asymmetry among Italian regions actually increased following monetary unification. I estimate that pairs of Italian provinces that came to be integrated following unification became, over four decades, up to 15% more dissimilar to one another in their economic structure compared to pairs of provinces that already belonged to the same monetary union. This means that, in line with Krugman’s pessimistic take on currency areas, economic integration in itself increased the likelihood of asymmetric shocks.
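The paper’s exact dissimilarity measure is not spelled out here, but a common way to quantify differences in economic structure is a Krugman-style specialisation index: the sum of absolute differences in two regions’ sectoral shares. A minimal sketch, with hypothetical employment shares:

```python
def structural_dissimilarity(shares_a, shares_b):
    """Krugman-style index: sum of absolute differences in sectoral
    shares. 0 means identical structure; 2 means completely disjoint."""
    return sum(abs(a - b) for a, b in zip(shares_a, shares_b))

# Hypothetical employment shares over (agriculture, industry, services)
north = [0.40, 0.35, 0.25]
south = [0.65, 0.15, 0.20]
print(structural_dissimilarity(north, south))  # approximately 0.5
```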

In this respect, the global grain crisis of the 1880s, which disproportionately affected the agricultural South while Italy pursued a restrictive monetary policy, might have laid the foundations for the Italian ‘Southern Question’. As Krugman pointed out, asymmetric shocks in a currency area with low transaction costs can lead to a permanent loss of regional income, as prices are unable to adjust fast enough to prevent factors of production from permanently leaving the affected region.

The policy implications of this research are twofold.

First, the results caution against the prevalent view that cyclical symmetry within a currency area is bound to improve by itself over time. In particular, the role of specialisation and factor mobility in driving cyclical divergence needs to be reassessed. As the euro area moves towards more integration, additional specialisation of its regions could further magnify – by increasing the likelihood of asymmetric shocks – the challenges that the European Central Bank’s ‘one size fits all’ policy poses for the periphery.

Second, the Italian experience of monetary unification underlines how the sustainability of currency areas is chiefly a matter of political will rather than economic costs. Although the Italian monetary union was sub-optimal from the start, and to a large extent remained so, it has survived unscathed for a century and a half. While the OCA framework is a good predictor of currency areas’ membership and economic performance, their sustainability is likely to be a matter of political integration.

London fog: a century of pollution and mortality, 1866-1965

by Walker Hanlon (UCLA)

Photogravure by Donald Macleish from Wonderful London by St John Adcock, 1927. Available at <https://www.flickr.com/photos/norfolkodyssey/23695833473>

For more than a century, London struggled with some of the worst air pollution on earth. But how much did air pollution affect health in London? How did these effects change as the city developed? Can London’s long experience teach us lessons that are relevant for modern cities, from Beijing to New Delhi, that are currently struggling with their own air pollution problems?

To answer these questions, I study the effects of air pollution in London across a full century, from 1866 to 1965. Using new data, I show that air pollution was a major contributor to mortality in London over this period – accounting for at least one in every 200 deaths.

As London developed, the impact of air pollution changed. In the nineteenth century, Londoners suffered from a range of infectious diseases, including respiratory diseases like measles and tuberculosis. I show that being exposed to high levels of air pollution made these diseases deadlier, while the presence of these diseases made air pollution more harmful. As a result, when public health and medical improvements reduced the prevalence of these infectious diseases, they also lowered the mortality cost of pollution exposure.

This finding has implications for modern developing countries. It tells us that air pollution is likely to be more deadly in the developing world, but also that investments that improve health in other ways can lower the health costs of pollution exposure.

An important challenge in studying air pollution in the past is that direct pollution measures were not collected in a consistent way until the mid-twentieth century. To overcome this challenge, this study takes advantage of London’s famous fog events, which trapped pollution in the city and substantially increased exposure levels.

While some famous fog events are well known – such as the Great Fog of 1952 or the Cattle Show Fog of 1873, which killed the Queen’s prize bull – London experienced hundreds of lesser-known events over the century I study. By reading weather reports from the Greenwich Observatory covering more than 26,000 days, I identified every day on which heavy fog occurred.

To study how these fog events affected health, I collected detailed new mortality data describing deaths in London at the weekly level. Digitised from original sources, and covering over 350,000 observations, this new data set opens the door to a more detailed analysis of London’s mortality experience than has previously been possible.

These new mortality data allow me to analyse the effects of air pollution from a variety of angles. I provide new evidence on how the effects of air pollution varied across age groups, how those effects evolved over time, and how pollution interacted with infectious diseases and other causes of death. This enriches our understanding of London’s history while opening up a range of new possibilities for studying the impact of air pollution over the long run.

Cash Converter: The Liquidity of the Victorian Capital Market

by John Turner (Queen’s University Centre for Economic History)

Liquidity is the ease with which an asset such as a share or a bond can be converted into cash. It is important for financial systems because it enables investors to liquidate and diversify their assets at a low cost. Without liquid markets, portfolio diversification becomes very costly for the investor. As a result, firms and governments must pay a premium to induce investors to buy their bonds and shares. Liquid capital markets also spur firms and entrepreneurs to invest in long-run projects, which increases productivity and economic growth.

From an historical perspective, share liquidity in the UK played a major role in the widespread adoption of the company form in the second half of the nineteenth century. Famously, as I discuss in a recent book chapter published in the Research Handbook on the History of Corporate and Company Law, political and legal opposition to share liquidity held up the development of the company form in the UK.

However, given the economic and historical importance of liquidity, very little has been written on the liquidity of UK capital markets before 1913. Ron Alquist (2010) and Matthieu Chavaz and Marc Flandreau (2017) examine the liquidity risk and premia of various sovereign bonds which were traded on the London Stock Exchange during the late Victorian and early Edwardian eras. Along with Graeme Acheson (2008), I document the thinness of the market for bank shares in the nineteenth century, using the share trading records of a small number of banks.

In a major study, Gareth Campbell (Queen’s University Belfast), Qing Ye (Xi’an Jiaotong-Liverpool University) and I have recently attempted to understand more about the liquidity of the Victorian capital market. To this end, we have just published a paper in the Economic History Review which looks at the liquidity of the London share and bond markets from 1825 to 1870. The London capital market experienced considerable growth in this era. The liberalisation of incorporation law and Parliament’s liberalism in granting company status to railways and other public-good providers resulted in growth in the number of business enterprises having their shares and bonds traded on stock exchanges. In addition, from the 1850s onwards, an increasing number of foreign countries and companies raised bond finance on the London market.

How do we measure the liquidity of the market for bonds and stocks in the 1825-70 era? Using end-of-month stock price data from a stockbroker list called the Course of the Exchange and end-of-month bond prices from newspaper sources, we calculate, for each security, the number of months in the year in which it had a zero return, divided by the number of months it was listed in that year. Because zero returns are indicative of illiquidity (i.e., that a security has not been traded), one minus this illiquidity ratio gives us a liquidity measure for each security in our sample. We calculate overall market liquidity for shares and bonds by taking averages. Figure 1 displays market liquidity for bonds and stocks for the period 1825-70.
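In code, the measure is simple to express. A minimal sketch, assuming a security’s monthly returns are held in a pandas Series with NaN for months it was not listed (the data layout is an assumption for illustration, not the authors’ own code):

```python
import pandas as pd

def annual_liquidity(monthly_returns: pd.Series) -> float:
    """One minus the share of listed months with a zero return.
    Zero returns proxy for months in which the security did not trade."""
    listed = monthly_returns.dropna()      # drop months when not listed
    illiquidity = (listed == 0).mean()     # zero-return ratio
    return 1.0 - illiquidity

# Example: listed for 10 of 12 months, 4 of them with zero returns.
r = pd.Series([0.0, 0.02, 0.0, None, 0.01, 0.0,
               0.0, 0.03, None, 0.01, -0.02, 0.005])
print(annual_liquidity(r))  # 0.6
```

Averaging this measure across all securities in a given year gives the market-level series shown in Figure 1.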

Figure 1. Stock and bond liquidity on the London Stock Exchange, 1825-1870. Source: Campbell, Turner and Ye (2018, p. 829)

Figure 1 reveals that bond market liquidity was relatively high throughout this period but shows no strong trend over time. By way of contrast, there was a strong secular increase in stock liquidity from 1830 to 1870. This increase may have stimulated greater participation in the stock market by ordinary citizens. It may also have affected the growth and deepening of the overall stock market and resulted in higher economic growth.

We examine the cross-sectional differences in liquidity between stocks in order to understand the main determinants of stock liquidity in this era. Our main finding in this regard is that firm size and the number of issued shares were major correlates of liquidity, which suggests that larger firms and firms with a greater number of shares were more frequently traded. Our study also reveals that unusual features which were believed to impede liquidity, such as extended liability, uncalled capital or high share denominations, had little effect on stock liquidity.

We also examine whether asset illiquidity was priced by investors, resulting in higher costs of capital for firms and governments. We find little evidence that the illiquidity of stock or bonds was priced, suggesting that investors at the time did not put much emphasis on liquidity in their valuations. Indeed, this is consistent with J. B. Jefferys (1938), who argued that what mattered to investors during this era was not share liquidity, but the dividend or coupon they received.

In conclusion, the vast majority of stocks and bonds in this early capital market were illiquid. It is remarkable, however, that despite this illiquidity, the UK capital market grew substantially between 1825 and 1870. There was also an increase in investor participation, with investing becoming progressively democratised in this era.

 

To contact the author: j.turner@qub.ac.uk
Twitter: @profjohnturner

 

Bibliography:

Acheson, G.G., and Turner, J.D. “The Secondary Market for Bank Shares in Nineteenth-Century Britain.” Financial History Review 15, no. 2 (October 2008): 123–51. doi:10.1017/S0968565008000139.

Alquist, R. “How Important Is Liquidity Risk for Sovereign Bond Risk Premia? Evidence from the London Stock Exchange.” Journal of International Economics 82, no. 2 (November 1, 2010): 219–29. doi:10.1016/j.jinteco.2010.07.007.

Campbell, G., Turner, J.D., and Ye, Q. “The Liquidity of the London Capital Markets, 1825–70.” The Economic History Review 71, no. 3 (August 2018): 823–52. doi:10.1111/ehr.12530.

Chavaz, M., and Flandreau, M. “‘High & Dry’: The Liquidity and Credit of Colonial and Foreign Government Debt and the London Stock Exchange (1880–1910).” The Journal of Economic History 77, no. 3 (September 2017): 653–91. doi:10.1017/S0022050717000730.

Jefferys, J.B. Trends in Business Organisation in Great Britain Since 1856: With Special Reference to the Financial Structure of Companies, the Mechanism of Investment and the Relations Between the Shareholder and the Company. University of London, 1938.

Global Trade and the Transformation of Consumer Cultures

by Beverly Lemire (University of Alberta)

The Society has arranged with CUP that a 20% discount is available on this book, valid until the 11th October 2018. The discount page is: www.cambridge.org/ehs20

 

Our ancestors knew the comfort of a pipe, though some may have preferred the functionality of cigarettes, an alternative to the rituals of nursing tobacco embers. Historic periods are defined by habits and fashions, which manifest economic and political systems, legal and illegal. These are the focus of my recent book. New networks of exchange, cross-cultural contact and material translation defined the period c. 1500-1820, and tobacco is one thematic focus. I trace how global societies domesticated a Native American herb and Native American forms of tobacco. Its spread distinguishes this period, when the Americas were fully integrated into global systems, from all others. Native American knowledge, lands and communities then faced determined intervention from all quarters. This crop became commoditized within decades, eluding censure to become an essential component of sociability, whether in Japan or Southeast Asia, on the West Coast of Africa or in the courts of Europe. [Figure 1]

Figure 1. Malayan and his wife in Batavia, with pipe.

 

Tobacco is a denominator of the early global era, grown in almost every context by 1600 and incorporated into diverse cultural and material modes. Importantly, its capacity to ease fatigue was quickly noted by military and imperial administrations, and it was soon used to discipline or encourage essential labour. A sacred herb was transposed into a worldly good. Modes of coercive consumption were notable in the western slave trade, as well as on plantations. Tobacco also served disciplinary roles among workers essential to the movement of cargoes; deep-sea long-distance sailors and riverine paddlers in the North American fur trade were vulnerable to exploitation on account of their dependence on tobacco during long stints of back-breaking labour.

Early global trade built on established commercial patterns – most importantly the textile trade, including the long-standing exchange of fabric for fur. The fabric/fur dynamic linked northern and southern Eurasia and north Africa, a pattern of elite and non-elite consumption that surged after the late 1500s, especially with the establishment of the Qing dynasty in China (1636-1912), with its deep cultural preference for furs. Equally important, deepening trade on the northeast coast of North America formalized Indigenous Americans’ appetite for cloth, willingly bartered for furs. The fabric/fur exchange preceded and continued alongside western colonization in the Americas. Meanwhile, on both sides of the Bering Strait and along the northwest coast of America, Indigenous communities were pulled more fully into the Qing economic orbit, with its boundless demand for peltry. Russian imperial expansion also served this commerce. The ecologies touched by this capacious trade extended worldwide, memorialized in surviving Qing fur garments and in the secondhand beaver hats traded for slaves in West Africa.

I routinely incorporate object study in my analysis, an essential way to assess the dynamism of consumer practice. I trawled museum collections as commonly as archives and libraries, and there found essential evidence of globalized fads and fashions. The strategies of one Qing-era man are revealed in his seemingly mink-lined robe: navigating Chinese sumptuary laws while attempting to demonstrate fashion on a budget, he used the costly fur only where it was visible, with sheepskin lining all the hidden areas. His concern for thrift is laid bare, along with his love of style.

Elsewhere in the book, I trace responses to early globalism through translations and interpretations of early global Asian designs, in needlework. The movement of people, as well as vast cargoes, stimulated these expressive fashions, ones that required minimal investment and gave voice to the widest range of women and men. The flow of Asian patterned goods and (often forced) relocation of Asian embroiderers to Europe began this tale – both increased the clamour for floral-patterned wares. This analysis culminates in North America with the turn from geometric to floral patterning among Indigenous embroiderers. They, too, responded to the influx of Asian floriated things. Europeans were intermediaries in this stage of the global process.

Human desires and shifting tastes are recurring themes, expressed in efforts to acquire new goods through various entrepreneurial channels. ‘Industriousness’ was manifest among women of many ethnicities through petty market-oriented trade, as well as waged employment, often at the margins of formal commerce. Industriousness, legal and extralegal, large and small, flourished in conjunction with large-scale enterprise. Extralegal activities irritated administrators, however, who wanted only regulated and measurable business. Nonetheless, extralegal activities were ubiquitous in every imperial realm and an important vein of entrepreneurship. My case studies in extralegal ventures include the traffic in tropical shells in Kirkcudbright, Scotland; the lucrative smuggling of European wool cloth to Qing China, a new mode among urban cognoscenti; and the harvesting of peppercorns from a Kentish beach, illustrating the importance of shipwrecks in redistributing cargoes to coastal communities everywhere. [Figure 2] Coastal peoples were schooled in the materials of globalism cast up by the tides, though some authorities might call them criminal. Ultimately, the shifting materials of daily life marked this dynamic history.

Figure 2. Shipwreck of the DEGRAVE, East Indiaman. The Adventures of Robert Drury, During Fifteen Years Captivity on the Island of Madagascar … (London: W. Meadows, 1807). Library of Congress, Digital Prints and Photographs, Washington, D.C.

 

To contact the author: Lemire@ualberta.ca

Small Bills and Petty Finance: co-creating the history of the Old Poor Law

by Alannah Tomkins (Keele University) 

Alannah Tomkins and Professor Tim Hitchcock (University of Sussex) won an AHRC award to investigate ‘Small Bills and Petty Finance: co-creating the history of the Old Poor Law’. It is a three-year project running from January 2018. The application was for £728K, which has been raised, through indexing, to £740K. The project website can be found at: thepoorlaw.org.

 

Twice in my career I’ve been surprised by a brick – or more precisely by bricks, hurtling into my research agenda. In the first instance I found myself supervising a PhD student working on the historic use of brick as a building material in Staffordshire (from the sixteenth to the eighteenth centuries). The second time, the bricks snagged my interest independently.

The AHRC-funded project ‘Small bills and petty finance’ did not set out to look for bricks. Instead it promises to explore a little-used source for local history, the receipts and ‘vouchers’ gathered by parish authorities as they relieved or punished the poor, to write multiple biographies of the tradesmen and others who serviced the poor law. A parish workhouse, for example, exerted a considerable influence over a local economy when it routinely (and reliably) paid for foodstuffs, clothing, fuel and other necessaries. This influence or profit-motive has not been studied in any detail for the poor law before 1834, and vouchers’ innovative content is matched by an exciting methodology. The AHRC project calls on the time and expertise of archival volunteers to unfold and record the contents of thousands of vouchers surviving in the three target counties of Cumbria, East Sussex and Staffordshire. So where do the bricks come in?

The project started life in Staffordshire as a pilot in advance of AHRC funding. The volunteers met at the Stafford archives and started by calendaring the contents of vouchers for the market town of Uttoxeter, near the Staffordshire/Derbyshire border. And the Uttoxeter workhouse did not confine itself to accommodating and feeding the poor. Instead in the 1820s it managed two going concerns: a workhouse garden producing vegetables for use and sale, and a parish brickyard. Many parishes under the poor law embedded make-work schemes in their management of the resident poor, but no others that I’m aware of channelled pauper labour into the manufacture of bricks.


The workhouse and brickyard were located just to the north of the town of Uttoxeter, in an area known as The Heath. The land was subsequently used to build the Uttoxeter Union workhouse in 1837-8 (after the reform of the poor law in 1834), so no signs of the brickyard remain in the twenty-first century. It was, however, one of several such yards identified at The Heath in the tithe map for Uttoxeter of 1842, and probably made use of a fixed kiln rather than a temporary clamp. This can be deduced from the parish’s sale of both bricks and tiles to brickyard customers: tiles were more refined products than bricks and required more control over the firing process, whereas clamp firings were more difficult to regulate. The yard provided periodic employment to the adult male poor of the Uttoxeter workhouse, in accordance with the seasonal pattern imposed on all brick manufacture at the time. Firings typically began in March or April each year and continued until September or October, depending on the weather.

This is important because the variety of vouchers relating to the parish brickyard allows us to understand something of its place in the town’s economy, both as a producer and as a consumer of other products and services. Brickyards needed coal, so it is no surprise that one of the major expenses for the support of the yard lay in bringing coal to the town from elsewhere via the canal. The Uttoxeter canal wharf was also at The Heath, and access to transport by water may explain the development of a number of brickyards in its proximity. The yard also required wood and other raw materials in addition to clay, and specific products to protect the bricks after cutting but before firing. The parish bought quantities of Archangel mats, rough woven pieces that could be used like a modern protective fleece to guard against frost damage. We surmise that Uttoxeter used the mats to cover both the bricks and any tender plants in the workhouse garden.


Similarly, the bricks were sold chiefly to local purchasers, including members of the parish vestry. Some men who were owed money by the parish for their work as suppliers allowed the debt to be offset by bricks. Finally, the employment of workhouse men as brickyard labourers gives us, when combined with some genealogical research, a rare glimpse of the place of workhouse work in the life-cycle of the adult poor. More than one man employed at the yard in the 1820s and 1830s went on to independence as a lodging-house keeper in the town by the time of the 1841 census.

As I say, I’ve been surprised by brick. I had no idea that such a mundane product would prove so engaging. All this goes to show that it’s not the stolidity of the brick but its deployment that matters, historically speaking.

 

To contact the author: a.e.tomkins@keele.ac.uk


Wages of sin: slavery and the banks, 1830-50

by Aaron Graham (University College London)

 

From the cartoon ‘Slave Emancipation; Or, John Bull Gulled Out Of Twenty Millions’ by C.J. Grant. In Richard Pound, ‘C.J. Grant’s Political Drama: a radical satirist rediscovered’ (UCL, 1998). Available at <https://www.ucl.ac.uk/lbs/project/logo/>

In 1834, the British Empire emancipated its slaves. This should have quickly triggered a major shift away from plantation labour and towards a free society where ex-slaves would bargain for better wages and force the planters to adopt new business models or go under. But the planters and plantation system survived, even if slavery did not. What went wrong?

This research follows the £20 million paid in compensation by the British government in 1834 (equivalent to about £20 billion today). This money was paid not to the slaves, but to the former slave-owners for the loss of their human property.

Thanks to the Legacies of British Slave-ownership project at University College London, we now know who received the money and how much. But until this study, we knew very little about how the former slave-owners used this money, or what effect this had on colonial societies in the West Indies or South Africa as they confronted the demands of this new world.

The study suggests why so little changed. It shows that slave-owners in places such as Jamaica, Guyana, South Africa and Mauritius used the money they received not just to pay off their debts, but also to set up new banks, which created credit by issuing bank notes and then supplied the planters with cash and credit.

Planters used the credit to improve their plantations and the cash to pay wages to their new free labourers, who therefore lacked the power to bargain for better conditions. Because the banks allowed them to accommodate the social and economic pressures that would otherwise have forced them to reassess their business models and find new approaches that did not rely on the unremitting exploitation of black labour, planters could resist the demands for broader economic and social change.

Tracking the ebb and flow of money shows that in Jamaica, for example, about 200 planters chose in 1836 to subscribe half the £450,000 they had received in compensation to the new Bank of Jamaica. By 1839, the bank had issued almost £300,000 in notes, enabling planters across the island to meet their workers’ wages without otherwise altering the plantation system.

When the Planters’ Bank was founded in 1839, it issued a further £100,000. ‘We congratulate the country on the prospects of a local institution of this kind’, the Jamaica Despatch commented in May 1839, ‘ … designed to aid and relieve those who are labouring under difficulties peculiar to the Jamaican planter at the present time’.

In other cases, the money even allowed farmers to expand the system of exploitation. In the Cape of Good Hope, the Eastern Province Bank at Grahamstown raised £26,000 with money from slavery compensation but provided the British settlers with £170,000 in short-term loans, helping them to dispossess native peoples of their land and use them as cheap labour to raise wool for Britain’s textile factories.

‘With united influence and energy’, the bank told its shareholders in 1840, ‘the bank must become useful, as well to the residents at Grahamstown and our rapidly thriving agriculturists as prosperous itself’.

This study shows for the first time why planters could carry on after 1834 with business as usual. The new banks created after 1834 helped planters throughout the British Empire to evade the major social and economic changes that abolitionists had wanted and which their opponents had feared.

By investing their slavery compensation money in banks that then offered cash and credit, the planters could prolong and even expand their place in economies and societies built on the plantation system and the exploitation of black labour.

 

To contact the author: aaron.graham@ucl.ac.uk

 

The UK’s unpaid war debts to the United States, 1917-1980

by David James Gill (University of Nottingham)

Trenches in World War I. From <www.express.co.uk>

We all think we know the consequences of the Great War – from the millions of dead to the rise of Nazism – but the story of the UK’s war debts to the United States remains largely untold.

In 1934, the British government defaulted on these loans, leaving unpaid debts exceeding $4 billion. The UK decided to cease repayment 18 months after France had defaulted on its war debts, making one full and two token repayments prior to Congressional approval of the Johnson Act, which prohibited further partial contributions.

Economists and political scientists typically attribute such hesitation to concerns about economic reprisals or the costs of future borrowing. Historians have instead stressed that delay reflected either a desire to protect transatlantic relations or a naive hope for outright cancellation.

Archival research reveals that the British cabinet’s principal concern was that many states owing money to the UK might use its default on war loans as an excuse to cease repayment on their own debts. In addition, ministers feared that refusal to pay would profoundly shock a large section of public opinion, thereby undermining the popularity of the National government. Eighteen months of continued repayment therefore provided the British government with more time to manage these risks.

The consequences of the UK’s default have attracted curiously limited attention. Economists and political scientists tend to assume dire political costs for incumbent governments, as well as significant short-term economic shocks to external borrowing, international trade and the domestic economy. None of these consequences applied to the National government or the UK in the years that followed.

Most historians consider these unpaid war debts to be largely irrelevant to the course of domestic and international politics within five years. Yet archival research reveals that they continued to play an important role in British and American policy-making for at least four more decades.

During the 1940s, the issue of the UK’s default arose on several occasions, most clearly during negotiations concerning Lend-Lease and the Anglo-American loan, fuelling Congressional resistance that limited the size and duration of American financial support.

Successive American administrations also struggled to resist growing Congressional pressure to use these unpaid debts as a diplomatic tool to address growing balance of payment deficits from the 1950s to the 1970s. In addition, British default presented a formidable legal obstacle for the UK’s return to the New York bond market in the late 1970s, threatening to undermine the efficient refinancing of the government’s recent loans from the International Monetary Fund.

The consequences of the UK’s default on its First World War debts to the United States were therefore longer lasting and more significant to policy-making on both sides of the Atlantic than widely assumed.

 

Judges and the death penalty in Nazi Germany: New research evidence on judicial discretion in authoritarian states

The German People’s Court. Available at https://www.foreignaffairs.com/reviews/review-essay/good-germans

Do judicial courts in authoritarian regimes act as puppets for the interests of a repressive state – or do judges act with greater independence? How much do judges draw on their political and ideological affiliations when imposing the death sentence?

A study of Nazi Germany’s notorious People’s Court, recently published in the Economic Journal, reveals direct empirical evidence of how judges in one of the world’s most politicised courts were influenced in their life-and-death decisions.

The research provides important empirical evidence that the political and ideological affiliations of judges do come into play – a finding that has applications for modern authoritarian regimes and also for democracies that administer the death penalty.

The research team – Dr Wayne Geerling (University of Arizona), Prof Gary Magee, Prof Russell Smyth, and Dr Vinod Mishra (Monash Business School) – explore the factors influencing the likelihood of imposing the death sentence in Nazi Germany for crimes against the state – treason and high treason.

The authors examine data compiled from official records of individuals charged with treason and high treason who appeared before the People’s Courts up to the end of the Second World War.

Established by the Nazis in 1934 to hear cases of serious political offences, the People’s Courts have been vilified as ‘blood tribunals’ in which judges meted out pre-determined sentences.

But in recent years, while not contending that the People’s Court judgments were impartial or that its judges were not subservient to the wishes of the regime, a more nuanced assessment has emerged.

For the first time, the new study presents direct empirical evidence of the reasons behind the use of judicial discretion and why some judges appeared more willing to implement the will of the state than others.

The researchers find that judges with a deeper ideological commitment to Nazi values – typified by being members of the Alte Kampfer (‘Old Fighters’ or early members of the Nazi party) – were indeed more likely to impose the death penalty than those who did not share it.

These judges were more likely to hand down death penalties to members of the most organised opposition groups, those involved in violent resistance against the state and ‘defendants with characteristics repellent to core Nazi beliefs’:

‘The Alte Kampfer were thus more likely to sentence devout Roman Catholics (24.7 percentage points), defendants with partial Jewish ancestry (34.8 percentage points), juveniles (23.4 percentage points), the unemployed (4.9 percentage points) and foreigners (42.3 percentage points) to death.’

Judges who came of age during two distinct historical periods – the Revolution of 1918-19 and the hyperinflation of June 1921 to January 1924 – which may have shaped their views of Nazism, were also more likely to impose the death sentence.

Alte Kampfer members whose hometown or suburb lay near a centre of the Revolution of 1918-19 were more likely to sentence a defendant to death.

Previous economic research on sentencing in capital cases has focused mainly on gender and racial disparities, typically in the United States. But our understanding of what determines whether courts in modern authoritarian regimes impose the death penalty is scant. By studying a politicised court in an historically important authoritarian state, the authors of the new study shed light on sentencing in authoritarian states more generally.

The findings are important because they provide insights into the practical realities of judicial empowerment by providing rare empirical evidence on how the exercise of judicial discretion in authoritarian states is reflected in sentencing outcomes.

To contact the authors:
Russell Smyth (russell.smyth@monash.edu)

BAD LOCATIONS: Many French towns have been trapped in obsolete places for centuries

John Speed’s map of Beaumaris (1610). Available on Wikimedia Commons.

Only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. That is one of the findings of research by Guy Michaels (London School of Economics) and Ferdinand Rauch (University of Oxford), which uses the contrasting experiences of British and French cities after the fall of the Roman Empire as a natural experiment to explore the impact of history on economic geography – and what leads cities to get stuck in undesirable locations, a big issue for modern urban planners.

The study, published in the February 2018 issue of the Economic Journal, notes that in France, post-Roman urban life became a shadow of its former self, but in Britain it completely disappeared. As a result, medieval towns in France were much more likely to be located near Roman towns than their British counterparts. But many of these places were obsolete because the best locations in Roman times weren’t the same as in the Middle Ages, when access to water transport was key.

The world is rapidly urbanising, but some of its growing cities seem to be misplaced. Their locations are hampered by poor access to world markets, shortages of water or vulnerability to flooding, earthquakes, volcanoes and other natural disasters. This outcome – cities stuck in the wrong places – has potentially dire economic and social consequences.

When thinking about policy responses, it is worth looking at the past to see how historical events can leave cities trapped in locations that are far from ideal. The new study does that by comparing the evolution of two initially similar urban networks following a historical calamity that wiped out one, while leaving the other largely intact.

The setting for the analysis of urban persistence is north-western Europe, where the authors trace the effects of the collapse of the Western Roman Empire more than 1,500 years ago through to the present day. Around the dawn of the first millennium, Rome conquered, and subsequently urbanised, areas including those that make up present-day France and Britain (as far north as Hadrian’s Wall). Under the Romans, towns in the two places developed similarly in terms of their institutions, organisation and size.

But around the middle of the fourth century, their fates diverged. Roman Britain suffered invasions, usurpations and reprisals against its elite. Around 410CE, when Rome itself was first sacked, Roman Britain’s last remaining legions, which had maintained order and security, departed permanently. Consequently, Roman Britain’s political, social and economic order collapsed. Between 450CE and 600CE, its towns no longer functioned.

Although some Roman towns in France also suffered when the Western Roman Empire fell, many of them survived and were taken over by Franks. So while the urban network in Britain effectively ended with the fall of the Western Roman Empire, there was much more urban continuity in France.

The divergent paths of these two urban networks make it possible to study the spatial consequences of the ‘resetting’ of an urban network, as towns across Western Europe re-emerged and grew during the Middle Ages. During the High Middle Ages, both Britain and France were again ruled by a common elite (Norman rather than Roman) and had access to similar production technologies. Both features make it possible to compare the effects of the collapse of the Roman Empire on the evolution of town locations.

Following the asymmetric calamity and subsequent re-emergence of towns in Britain and France, one of three scenarios can be imagined:

  • First, if locational fundamentals, such as coastlines, mountains and rivers, consistently favour a fixed set of places, then those locations would be home to both surviving and re-emerging towns. In this case, there would be high persistence of locations from the Roman era onwards in both British and French urban networks.
  • Second, if locational fundamentals or their value change over time (for example, if coastal access becomes more important) and if these fundamentals affect productivity more than the concentration of human activity, then both urban networks would similarly shift towards locations with improved fundamentals. In this case, there would be less persistence of locations in both British and French urban networks relative to the Roman era.
  • Third, if locational fundamentals or their value change, but these fundamentals affect productivity less than the concentration of human activity, then there would be ‘path-dependence’ in the location of towns. The British urban network, which was reset, would shift away from Roman-era locations towards places that are more suited to the changing economic conditions. But French towns would tend to remain in their original Roman locations.

The authors’ empirical investigation finds support for the third scenario, where town locations are path-dependent. Medieval towns in France were much more likely to be located near Roman towns than their British counterparts.

These differences in urban persistence are still visible today; for example, only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. This finding suggests that the urban network in Britain shifted towards newly advantageous locations between the Roman and medieval eras, while towns in France remained in locations that may have become obsolete.

But did it really matter for future economic development that medieval French towns remained in Roman-era locations? To shed light on this question, the researchers focus on a particular dimension of each town’s location: its accessibility to transport networks.

During Roman times, roads connected major towns, facilitating movements of the occupying army. But during the Middle Ages, technical improvements in water transport made coastal access more important. This technological change meant that having coastal access mattered more for medieval towns in Britain and France than for Roman ones.

The study finds that during the Middle Ages, towns in Britain were roughly two and a half times more likely to have coastal access – either directly or via a navigable river – than during the Roman era. In contrast, in France, there was little change in the urban network’s coastal access.

The researchers also show that having coastal access did matter for towns’ subsequent population growth, which is a key indicator of their economic viability. Specifically, they find that towns with coastal access grew faster between 1200 and 1700, and for towns with poor coastal access, access to canals was associated with faster population growth. The investments in the costly building and maintenance of these canals provide further evidence of the value of access to water transport networks.

The conclusion is that many French towns were stuck in the wrong places for centuries, since their locations were designed for the demands of Roman times and not those of the Middle Ages. They could not take full advantage of the improved transport technologies because they had poor coastal access.

Taken together, these findings show that urban networks may reconfigure around locational fundamentals that become more valuable over time. But this reconfiguration is not inevitable, and towns and cities may remain trapped in bad locations over many centuries and even millennia. This spatial misallocation of economic activity over hundreds of years has almost certainly induced considerable economic costs.

‘Our findings suggest lessons for today’s policy-makers’, the authors conclude. ‘The conclusion that cities may be misplaced still matters as the world’s population becomes ever more concentrated in urban areas. For example, parts of Africa, including some of its cities, are hampered by poor access to world markets due to their landlocked position and poor land transport infrastructure. Our research suggests that path-dependence in city locations can still have significant costs.’

‘Resetting the Urban Network: 117-2012’ by Guy Michaels and Ferdinand Rauch was published in the February 2018 issue of the Economic Journal.

To contact the authors:
Guy Michaels (G.Michaels@lse.ac.uk)
Ferdinand Rauch (ferdinand.rauch@economics.ox.ac.uk)

THE ‘WITCH CRAZE’ OF 16th & 17th CENTURY EUROPE: Economists uncover religious competition as driving force of witch hunts

“The Pendle Witches”. Available at https://www.theanneboleynfiles.com/witchcraft-in-tudor-and-stuart-times/

Economists Peter Leeson (George Mason University) and Jacob Russ (Bloom Intelligence) have uncovered new evidence to resolve the longstanding puzzle posed by the ‘witch craze’ that ravaged Europe in the sixteenth and seventeenth centuries and resulted in the trial and execution of tens of thousands for the dubious crime of witchcraft.

 

In research forthcoming in the Economic Journal, Leeson and Russ argue that the witch craze resulted from competition between Catholicism and Protestantism in post-Reformation Christendom. For the first time in history, the Reformation presented large numbers of Christians with a religious choice: stick with the old Church or switch to the new one. And when churchgoers have religious choice, churches must compete.

In an effort to woo the faithful, competing confessions advertised their superior ability to protect citizens against worldly manifestations of Satan’s evil by prosecuting suspected witches. Similar to how Republicans and Democrats focus campaign activity in political battlegrounds during US elections to attract the loyalty of undecided voters, Catholic and Protestant officials focused witch trial activity in religious battlegrounds during the Reformation and Counter-Reformation to attract the loyalty of undecided Christians.

Analysing new data on more than 40,000 suspected witches whose trials span Europe over more than half a millennium, Leeson and Russ find that when and where confessional competition, as measured by confessional warfare, was more intense, witch trial activity was more intense too. Furthermore, factors such as bad weather, formerly thought to be key drivers of the witch craze, were not in fact important.

The new data reveal that the witch craze took off only after the Protestant Reformation in 1517, following the new faith’s rapid spread. The craze reached its zenith between around 1555 and 1650, years co-extensive with peak competition for Christian consumers, evidenced by the Catholic Counter-Reformation, during which Catholic officials aggressively pushed back against Protestant successes in converting Christians throughout much of Europe.

Then, around 1650, the witch craze began its precipitous decline, with prosecutions for witchcraft virtually vanishing by 1700.

What happened in the middle of the seventeenth century to bring the witch craze to a halt? The Peace of Westphalia, a treaty signed in 1648, ended decades of European religious warfare and much of the confessional competition that motivated it by creating permanent territorial monopolies for Catholics and Protestants – regions of exclusive control, in which one confession was protected from the competition of the other.

The new analysis suggests that the witch craze should also have been focused geographically, most intense where Catholic-Protestant rivalry was strongest and weakest where it was not. And indeed it was: Germany alone, which was ground zero for the Reformation, laid claim to nearly 40% of all witchcraft prosecutions in Europe.

In contrast, Spain, Italy, Portugal and Ireland – each of which remained a Catholic stronghold after the Reformation and never saw serious competition from Protestantism – collectively accounted for just 6% of Europeans tried for witchcraft.

Religion, it is often said, works in unexpected ways. The new study suggests that the same can be said of competition between religions.

 

To contact the authors: Peter Leeson (PLeeson@GMU.edu)