The UK’s unpaid war debts to the United States, 1917-1980

by David James Gill (University of Nottingham)

Trenches in World War I. From <www.express.co.uk>

We all think we know the consequences of the Great War – from the millions of dead to the rise of Nazism – but the story of the UK’s war debts to the United States remains largely untold.

In 1934, the British government defaulted on these loans, leaving unpaid debts exceeding $4 billion. The UK ceased repayment 18 months after France had defaulted on its own war debts, having made one full and two token payments before Congressional approval of the Johnson Act, which prohibited further partial payments.

Economists and political scientists typically attribute such hesitation to concerns about economic reprisals or the costs of future borrowing. Historians have instead stressed that delay reflected either a desire to protect transatlantic relations or a naive hope for outright cancellation.

Archival research reveals that the British cabinet’s principal concern was that many states owing money to the UK might use its default on war loans as an excuse to cease repayment on their own debts. In addition, ministers feared that refusal to pay would profoundly shock a large section of public opinion, thereby undermining the popularity of the National government. Eighteen months of continued repayment therefore provided the British government with more time to manage these risks.

The consequences of the UK’s default have attracted curiously limited attention. Economists and political scientists tend to assume dire political costs to incumbent governments as well as significant short-term economic shocks in terms of external borrowing, international trade, and the domestic economy. None of these consequences applied to the National government or to the UK in the years that followed.

Most historians consider these unpaid war debts to have become largely irrelevant to the course of domestic and international politics within five years of default. Yet archival research reveals that they continued to play an important role in British and American policy-making for at least four more decades.

During the 1940s, the issue of the UK’s default arose on several occasions, most clearly during negotiations concerning Lend-Lease and the Anglo-American loan, fuelling Congressional resistance that limited the size and duration of American financial support.

Successive American administrations also struggled to resist growing Congressional pressure to use these unpaid debts as a diplomatic tool to address widening balance of payments deficits from the 1950s to the 1970s. In addition, the British default presented a formidable legal obstacle to the UK’s return to the New York bond market in the late 1970s, threatening to undermine the efficient refinancing of the government’s recent loans from the International Monetary Fund.

The consequences of the UK’s default on its First World War debts to the United States were therefore longer lasting and more significant to policy-making on both sides of the Atlantic than widely assumed.

 

Judges and the death penalty in Nazi Germany: New research evidence on judicial discretion in authoritarian states

The German People’s Court. Available at https://www.foreignaffairs.com/reviews/review-essay/good-germans

Do judicial courts in authoritarian regimes act as puppets for the interests of a repressive state – or do judges act with greater independence? How much do judges draw on their political and ideological affiliations when imposing the death sentence?

A study of Nazi Germany’s notorious People’s Court, recently published in the Economic Journal, provides direct empirical evidence of how the judges of one of the world’s most politicised courts were influenced in their life-and-death decisions.

The research provides important empirical evidence that the political and ideological affiliations of judges do come into play – a finding that has applications for modern authoritarian regimes and also for democracies that administer the death penalty.

The research team – Dr Wayne Geerling (University of Arizona), Prof Gary Magee, Prof Russell Smyth, and Dr Vinod Mishra (Monash Business School) – explore the factors influencing the likelihood of imposing the death sentence in Nazi Germany for crimes against the state – treason and high treason.

The authors examine data compiled from official records of individuals charged with treason and high treason who appeared before the People’s Court up to the end of the Second World War.

Established by the Nazis in 1934 to hear cases of serious political offences, the People’s Court has been vilified as a ‘blood tribunal’ in which judges meted out pre-determined sentences.

But in recent years a more nuanced assessment has emerged, one that does not contend that the People’s Court’s judgments were impartial or that its judges were not subservient to the wishes of the regime.

For the first time, the new study presents direct empirical evidence of the reasons behind the use of judicial discretion and why some judges appeared more willing to implement the will of the state than others.

The researchers find that judges with a deeper ideological commitment to Nazi values – typified by membership of the Alte Kampfer (‘Old Fighters’, early members of the Nazi party) – were indeed more likely to impose the death penalty than judges who did not share that commitment.

These judges were more likely to hand down death penalties to members of the most organised opposition groups, those involved in violent resistance against the state and ‘defendants with characteristics repellent to core Nazi beliefs’:

‘The Alte Kampfer were thus more likely to sentence devout Roman Catholics (24.7 percentage points), defendants with partial Jewish ancestry (34.8 percentage points), juveniles (23.4 percentage points), the unemployed (4.9 percentage points) and foreigners (42.3 percentage points) to death.’
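
The gaps quoted above are differences, in percentage points, in the likelihood that a death sentence was handed down when the presiding judge was an Alte Kampfer member rather than not. As a purely illustrative sketch of how such gaps can be tabulated from trial-level records, the Python snippet below computes raw death-sentence rates by defendant group and judge type; the column names and toy data are hypothetical, and the study’s published figures come from a regression framework with controls rather than from raw rate differences like these.

```python
import pandas as pd

# Toy trial-level records: one row per defendant. All column names and values
# here are hypothetical, purely to illustrate the calculation.
trials = pd.DataFrame({
    "judge_alte_kampfer": [1, 1, 0, 0, 1, 0, 1, 0],
    "defendant_group": ["catholic", "foreigner", "catholic", "juvenile",
                        "juvenile", "foreigner", "catholic", "catholic"],
    "death_sentence": [1, 1, 0, 0, 1, 0, 1, 0],
})

# Death-sentence rate for each defendant group, split by judge type.
rates = (
    trials.groupby(["defendant_group", "judge_alte_kampfer"])["death_sentence"]
    .mean()
    .unstack("judge_alte_kampfer")
)

# Gap, in percentage points, between Alte Kampfer judges (column 1) and other judges (column 0).
rates["gap_pp"] = (rates[1] - rates[0]) * 100
print(rates)
```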

Judges who came of age during two formative historical periods (the Revolution of 1918-19 and the hyperinflation of June 1921 to January 1924), experiences that may have shaped their views on Nazism, were also more likely to impose the death sentence.

Alte Kampfer members whose hometown or suburb lay near a centre of the Revolution of 1918-19 were likewise more likely to sentence a defendant to death.

Previous economic research on sentencing in capital cases has focused mainly on gender and racial disparities, typically in the United States. But understanding of what determines whether courts in modern authoritarian regimes impose the death penalty is scant. By studying a politicised court in an historically important authoritarian state, the authors of the new study shed light on sentencing in authoritarian states more generally.

The findings are important because they provide insight into the practical realities of judicial empowerment, offering rare empirical evidence on how the exercise of judicial discretion in authoritarian states is reflected in sentencing outcomes.

To contact the authors:
Russell Smyth (russell.smyth@monash.edu)

BAD LOCATIONS: Many French towns have been trapped in obsolete places for centuries

John Speed (1610), 17th century map of Beaumaris. Available on Wiki Commons.

Only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. That is one of the findings of research by Guy Michaels (London School of Economics) and Ferdinand Rauch (University of Oxford), which uses the contrasting experiences of British and French cities after the fall of the Roman Empire as a natural experiment to explore the impact of history on economic geography – and what leads cities to get stuck in undesirable locations, a big issue for modern urban planners.

The study, published in the February 2018 issue of the Economic Journal, notes that in France, post-Roman urban life became a shadow of its former self, but in Britain it completely disappeared. As a result, medieval towns in France were much more likely to be located near Roman towns than their British counterparts. But many of these places were obsolete because the best locations in Roman times weren’t the same as in the Middle Ages, when access to water transport was key.

The world is rapidly urbanising, but some of its growing cities seem to be misplaced. Their locations are hampered by poor access to world markets, shortages of water or vulnerability to flooding, earthquakes, volcanoes and other natural disasters. This outcome – cities stuck in the wrong places – has potentially dire economic and social consequences.

When thinking about policy responses, it is worth looking at the past to see how historical events can leave cities trapped in locations that are far from ideal. The new study does that by comparing the evolution of two initially similar urban networks following a historical calamity that wiped out one, while leaving the other largely intact.

The setting for the analysis of urban persistence is north-western Europe, where the authors trace the effects of the collapse of the Western Roman Empire more than 1,500 years ago through to the present day. Around the dawn of the first millennium, Rome conquered, and subsequently urbanised, areas including those that make up present day France and Britain (as far north as Hadrian’s Wall). Under the Romans, towns in the two places developed similarly in terms of their institutions, organisation and size.

But around the middle of the fourth century, their fates diverged. Roman Britain suffered invasions, usurpations and reprisals against its elite. Around 410 CE, when Rome itself was first sacked, Roman Britain’s last remaining legions, which had maintained order and security, departed permanently. Consequently, Roman Britain’s political, social and economic order collapsed. Between 450 CE and 600 CE, its towns no longer functioned.

Although some Roman towns in France also suffered when the Western Roman Empire fell, many of them survived and were taken over by Franks. So while the urban network in Britain effectively ended with the fall of the Western Roman Empire, there was much more urban continuity in France.

The divergent paths of these two urban networks make it possible to study the spatial consequences of the ‘resetting’ of an urban network, as towns across Western Europe re-emerged and grew during the Middle Ages. During the High Middle Ages, both Britain and France were again ruled by a common elite (Norman rather than Roman) and had access to similar production technologies. Both features make it possible to compare the effects of the collapse of the Roman Empire on the evolution of town locations.

Following the asymmetric calamity and subsequent re-emergence of towns in Britain and France, one of three scenarios can be imagined:

  • First, if locational fundamentals, such as coastlines, mountains and rivers, consistently favour a fixed set of places, then those locations would be home to both surviving and re-emerging towns. In this case, there would be high persistence of locations from the Roman era onwards in both British and French urban networks.
  • Second, if locational fundamentals or their value change over time (for example, if coastal access becomes more important) and if these fundamentals affect productivity more than the concentration of human activity, then both urban networks would similarly shift towards locations with improved fundamentals. In this case, there would be less persistence of locations in both British and French urban networks relative to the Roman era.
  • Third, if locational fundamentals or their value change, but these fundamentals affect productivity less than the concentration of human activity, then there would be ‘path-dependence’ in the location of towns. The British urban network, which was reset, would shift away from Roman-era locations towards places that are more suited to the changing economic conditions. But French towns would tend to remain in their original Roman locations.

The authors’ empirical investigation finds support for the third scenario, where town locations are path-dependent. Medieval towns in France were much more likely to be located near Roman towns than their British counterparts.

These differences in urban persistence are still visible today; for example, only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. This finding suggests that the urban network in Britain shifted towards newly advantageous locations between the Roman and medieval eras, while towns in France remained in locations that may have become obsolete.

But did it really matter for future economic development that medieval French towns remained in Roman-era locations? To shed light on this question, the researchers focus on a particular dimension of each town’s location: its accessibility to transport networks.

During Roman times, roads connected major towns, facilitating movements of the occupying army. But during the Middle Ages, technical improvements in water transport made coastal access more important. This technological change meant that having coastal access mattered more for medieval towns in Britain and France than for Roman ones.

The study finds that during the Middle Ages, towns in Britain were roughly two and a half times more likely to have coastal access – either directly or via a navigable river – than during the Roman era. In contrast, in France, there was little change in the urban network’s coastal access.

The researchers also show that having coastal access did matter for towns’ subsequent population growth, which is a key indicator of their economic viability. Specifically, they find that towns with coastal access grew faster between 1200 and 1700, and for towns with poor coastal access, access to canals was associated with faster population growth. The investments in the costly building and maintenance of these canals provide further evidence of the value of access to water transport networks.

The conclusion is that many French towns were stuck in the wrong places for centuries, since their locations were designed for the demands of Roman times and not those of the Middle Ages. They could not take full advantage of the improved transport technologies because they had poor coastal access.

Taken together, these findings show that urban networks may reconfigure around locational fundamentals that become more valuable over time. But this reconfiguration is not inevitable, and towns and cities may remain trapped in bad locations over many centuries and even millennia. This spatial misallocation of economic activity over hundreds of years has almost certainly induced considerable economic costs.

‘Our findings suggest lessons for today’s policy-makers,’ the authors conclude. ‘The conclusion that cities may be misplaced still matters as the world’s population becomes ever more concentrated in urban areas. For example, parts of Africa, including some of its cities, are hampered by poor access to world markets due to their landlocked position and poor land transport infrastructure. Our research suggests that path-dependence in city locations can still have significant costs.’

‘Resetting the Urban Network: 117-2012’ by Guy Michaels and Ferdinand Rauch was published in the February 2018 issue of the Economic Journal.

To contact the authors:
Guy Michaels (G.Michaels@lse.ac.uk)
Ferdinand Rauch (ferdinand.rauch@economics.ox.ac.uk)

THE ‘WITCH CRAZE’ OF 16th & 17th CENTURY EUROPE: Economists uncover religious competition as driving force of witch hunts

“The Pendle Witches”. Available at https://www.theanneboleynfiles.com/witchcraft-in-tudor-and-stuart-times/

Economists Peter Leeson (George Mason University) and Jacob Russ (Bloom Intelligence) have uncovered new evidence to resolve the longstanding puzzle posed by the ‘witch craze’ that ravaged Europe in the sixteenth and seventeenth centuries and resulted in the trial and execution of tens of thousands for the dubious crime of witchcraft.

 

In research forthcoming in the Economic Journal, Leeson and Russ argue that the witch craze resulted from competition between Catholicism and Protestantism in post-Reformation Christendom. For the first time in history, the Reformation presented large numbers of Christians with a religious choice: stick with the old Church or switch to the new one. And when churchgoers have religious choice, churches must compete.

In an effort to woo the faithful, competing confessions advertised their superior ability to protect citizens against worldly manifestations of Satan’s evil by prosecuting suspected witches. Similar to how Republicans and Democrats focus campaign activity in political battlegrounds during US elections to attract the loyalty of undecided voters, Catholic and Protestant officials focused witch trial activity in religious battlegrounds during the Reformation and Counter-Reformation to attract the loyalty of undecided Christians.

Analysing new data on more than 40,000 suspected witches whose trials span Europe over more than half a millennium, Leeson and Russ find that when and where confessional competition, as measured by confessional warfare, was more intense, witch trial activity was more intense too. Furthermore, factors such as bad weather, formerly thought to be key drivers of the witch craze, were not in fact important.

The new data reveal that the witch craze took off only after the Protestant Reformation in 1517, following the new faith’s rapid spread. The craze reached its zenith between around 1555 and 1650, years co-extensive with peak competition for Christian consumers, evidenced by the Catholic Counter-Reformation, during which Catholic officials aggressively pushed back against Protestant successes in converting Christians throughout much of Europe.

Then, around 1650, the witch craze began its precipitous decline, with prosecutions for witchcraft virtually vanishing by 1700.

What happened in the middle of the seventeenth century to bring the witch craze to a halt? The Peace of Westphalia, a treaty concluded in 1648. It ended decades of European religious warfare and much of the confessional competition that motivated it, by creating permanent territorial monopolies for Catholics and Protestants: regions of exclusive control in which one confession was protected from the competition of the other.

The new analysis also implies that the witch craze should have been concentrated geographically: most intense where Catholic-Protestant rivalry was strongest, and weakest where it was not. And indeed it was: Germany alone, which was ground zero for the Reformation, laid claim to nearly 40% of all witchcraft prosecutions in Europe.

In contrast, Spain, Italy, Portugal and Ireland – each of which remained a Catholic stronghold after the Reformation and never saw serious competition from Protestantism – collectively accounted for just 6% of Europeans tried for witchcraft.

Religion, it is often said, works in unexpected ways. The new study suggests that the same can be said of competition between religions.

 

To contact the authors:  Peter Leeson (PLeeson@GMU.edu)

Decimalising the pound: a victory for the gentlemanly City against the forces of modernity?

by Andy Cook (University of Huddersfield)

 

An 1813 guinea.

Some media commentators have identified the decimalisation of the UK’s currency in 1971 as the start of a submerging of British identity. For example, writing in the Daily Mail, Dominic Sandbrook characterises it as ‘marking the end of a proud history of defiant insularity and the beginning of the creeping Europeanisation of Britain’s institutions.’

This research, based on Cabinet papers, Bank of England archives, Parliamentary records and other sources, reveals that this interpretation is spurious: it reflects modern preoccupations with the arguments that dominated the Brexit debate rather than the actual motivations of key players at the time.

The research examines arguments made by the proponents of alternative systems based on either decimalising the pound, or creating a new unit worth the equivalent of 10 shillings. South Africa, Australia and New Zealand had all recently adopted a 10-shilling unit, and this system was favoured by a wide range of interest groups in the UK, representing consumers, retailers, small and large businesses, and media commentators.

Virtually a lone voice in lobbying for retention of the pound was the City of London, and its arguments, articulated by the Bank of England, were based on a traditional attachment to the international status of sterling. These arguments were accepted both by the Committee of Enquiry on Decimal Currency, which reported in 1963, and, in 1966, by a Labour government headed by Harold Wilson, who shared the City’s emotional attachment to the pound.

Yet by 1960, the UK faced the imminent prospect of being virtually the only country retaining non-decimal coinage. Most key economic players agreed that decimalisation was necessary, and the only significant bone of contention was the choice of system.

Most informed opinion favoured a new major unit equivalent to 10 shillings, as reflected in the evidence given by retailers and other businesses to the Committee of Enquiry on Decimal Currency, and in the formation of a Decimal Action Committee by the Consumers Association to press for such a system.

The City, represented by the Bank of England, was implacably opposed to such a system, arguing that the pound’s international prestige was crucial to underpinning the position of the City as a leading financial centre. This assertion was not evidence-based, and internal Bank documents acknowledge that their argument was ‘to some extent based on sentiment’.

This sentiment was shared by Harold Wilson, whose government announced the decision to introduce decimal currency based on the pound in 1966. Five years earlier, he had made an emotional plea to keep the pound, arguing that ‘the world will lose something if the pound disappears from the markets of the world’.

Far from being the end of ‘defiant insularity’, the decision to retain the pound – the highest-value basic currency unit of any major economy – rather than adopt a new unit closer in value to the US dollar or to the even lower-value European currencies, reflected the desire of the City and the government to maintain a distinctive symbol of Britishness, overcoming opposition from interests with more practical concerns.

British perceptions of German post-war industrial relations

By Colin Chamberlain (University of Cambridge)

Some 10,000 steel workers demonstrate in Stuttgart, 11th January 1962. Picture alliance/AP Images, available at <https://www.gewerkschaftsgeschichte.de/1953-schwerpunkt-tarifpolitik.html>

‘Almost idyllic’ – this was the view of one British commentator on the state of post-war industrial relations in West Germany. No one could say the same about British industrial relations. Here, industrial conflict grew inexorably from year to year, forcing governments to expend ever more effort on preserving industrial peace.

Deeply frustrated, successive governments alternated between appeasing trade unionists and threatening them with new legal sanctions in an effort to improve their behaviour, thereby avoiding tackling the fundamental issue of their institutional structure. If the British had only studied the German ‘model’ of industrial relations more closely, they would have understood better the reforms that needed to be made.

The poor state of industrial relations was a major, if not the major, factor holding back Britain’s economic growth, which was regularly less than half the German rate, not to speak of the chronic inflation and balance of payments problems that only made matters worse. So how come the British did not take a deeper look at the successful model of German industrial relations and learn any lessons?

Ironically, the British were in control of Germany at the time the trade union movement was re-establishing itself after the war. The Trades Union Congress and the British labour movement offered much goodwill and help to the Germans in their task.

But German trade unionists had very different ideas to the British trade unions on how to go about organising their industrial relations, ideas that the British were to ignore consistently over the post-war period. These included:

    • In Britain, there were hundreds of trade unions; in Germany, only 16 were re-established after the war, each representing one or more industries, thereby avoiding the demarcation disputes so common in Britain.
    • Terms and conditions were negotiated on this industry basis by strong, well-funded trade unions, which welcomed the fact that their two- or three-year collective agreements were legally enforceable in Germany’s system of industrial courts.
    • Trade unions were not involved in workplace grievances and disputes. These were left to employees and managers, meeting together in Germany’s highly successful works councils, to resolve informally, alongside consultation on working practices and company reorganisations. As a result, German companies did not seek to lay off staff at any fall in demand, as British companies did, but rather to retrain and reallocate them.

British trade unions pleaded that their very untidy institutional structure with hundreds of competing trade unions was what their members actually wanted and should therefore be outside any government interference. The trade unions jealously guarded their privileges and especially rejected any idea of industry-based unions, legally enforceable collective agreements and works councils.

A heavyweight Royal Commission was appointed, but after three years’ deliberation, it came up with little more than the status quo. It was reluctant to study any ideas emanating from Germany.

While the success of industrial relations in Germany was widely recognised in Britain, there was little understanding of why this was so, or indeed much interest in finding out. The British were deeply conservative about the ‘institutional shape’ of industrial relations and feared putting forward any radical German ideas. Britain was therefore at a big disadvantage when it came to creating modern trade unions operating in a modern state.

So, what was the economic price of the failure to sort out the institutional structure of the British trade unions?

Transatlantic Slavery and Abolition: a Pan-European Affair

By Felix Brahm (German Historical Institute London) and Eve Rosenhaft (University of Liverpool)

Slavery Hinterland: Transatlantic Slavery and Continental Europe, 1680–1850 is published by Boydell Press for the Economic History Society’s series ‘People, Markets, Goods: Economies and Societies in History’. SAVE 25% when you order direct from the publisher – offer ends on 28th June 2018. See below for details.

 

The history of transatlantic slavery is one of the most active and fruitful fields of international historical research, and an important lesson of the latest work on maritime countries like Britain and France is that the profits of slavery, and indeed of abolition, ‘trickled down’ to very wide sections of the population and to places well away from the principal slave-trading ports. Recently, historians have started to look beyond the familiar Atlantic axis and to apply the same paradigm to the European hinterlands of the triangular trade. That is, they have sought its traces and impacts in territories that were not directly involved (or were relatively minor participants) in the traffic in Africans: the German-speaking countries, Scandinavia, Italy and Central Europe. And they are finding that the slave trade, the plantation economies that it fed, the consequences of its abolition, and not least the questions of moral and political principle that it threw up were very much a part of the texture of society right across Europe.

In material terms, it is clear that the manufacture of trade goods – the wares with which Europeans paid African traders for the enslaved men, women and children whom they then shipped to the Americas – was an important element of many regional economies. Firearms, iron bars and ironware travelled from Denmark and the Baltic to Western Europe’s slaving ports. Glass beads were exported from Bohemia (the Czech lands), and the higher quality Venetian products attracted Liverpool merchants to set up branch offices in Italy to secure their supply. The Swiss family firm Burckhardt/Bourcard began by supplying cotton cloth for the slave trade and importing slave-produced luxury goods and moved into equipping its own slaving ships. Textile plants in the Wupper Valley in Western Germany and the hand looms of Eastern Prussia provided linens of varying quality for use on the slave plantations, though because they were shipped through English and Dutch ports their German origins have often been obscured. And the trading networks established in the context of the slave economy supported German exporting projects even after the trade was abolished, as German firms continued to trade into territories – Brazil and the Caribbean – where slavery persisted until the late 19th century.

Germans in particular were keen observers of the Atlantic slave economy, and they had their own perspective on international debates about the trade and its abolition. At the beginning of the trade, the rulers of Brandenburg-Prussia had some hopes of buying into it, establishing a slave fort on the Gold Coast between 1682 and 1720. One of the key documents of this episode is the diary of a ship’s barber, Johann Peter Oettinger, who sailed on slaving expeditions. He chose to make no comment about the brutalities that he witnessed and recorded. Characteristically, though, when the diaries were published for German readers 200 years later, they were given a moralising spin; by the 1880s, Germany was at the forefront of the Scramble for Africa, justifying colonisation in the name of suppressing the internal slave trade. Before that, and once the German states were no longer involved in the slave trade, German-speaking scientists and administrators placed themselves in the service of those states that were: Ernst Schimmelmann, whose family had one foot in Hamburg and one in Copenhagen, was a plantation owner and manager of the Swedish state slaving company, but also responsible for the abolition of the Danish slave trade in 1792. And initiatives for the post-abolition exploitation of tropical territories relied on the work of German scientists in service to the Danish state, like the botanist Julius von Rohr.

Scholarly attention to the German case is also bringing the Atlantic plantation economies into dialogue with the practices of unfree labour that existed in Central Europe at the same time. Analysis of the conditions of linen production on eastern Prussia’s aristocratic estates indicates that their low production costs helped to keep down the costs of production on slave plantations. And when Germans confronted the moral and legal challenges to slavery that were crystallising into a political movement in Britain and France by the 1790s, they could not escape the implications of abolitionist arguments for the future of their own ‘peculiar institutions’ of serfdom and personal service. This was true of Theresa Huber, the author and journalist who stands for two generations of Germans who engaged in transnational abolitionist networks, and who was equally sharp in her critique of serfdom. And it was true of Prussian administrators who, when challenged by enslaved Africans on German soil to enforce the notion that ‘there are no slaves in Prussia’, could not help asking themselves what that might mean for the process towards reform of feudal institutions.

These issues have only begun to receive greater attention – more studies are needed to gain a clearer understanding of the various links through which continental Europe was connected to the Transatlantic slave business and its abolition.

 

SAVE 25% when you order direct from the publisher using the offer code BB500 in the box at the checkout. The discount applies to print and eBook editions. Alternatively, call Boydell’s distributor, Wiley, on 01243 843 291 and quote the same code. The offer ends on 28th June 2018. For any queries, please email marketing@boydell.co.uk

 

To contact the authors:
Felix Brahm (brahm@ghil.ac.uk);
Eve Rosenhaft (Dan85@liverpool.ac.uk)

From VoxEU – Wellbeing inequality in retrospect

Rising trends in GDP per capita are often interpreted as reflecting rising levels of general wellbeing. But GDP per capita is at best a crude proxy for wellbeing, neglecting important qualitative dimensions.

via Wellbeing inequality in retrospect — VoxEU.org: Recent Articles

To elaborate further on the topic, Prof. Leandro Prados de la Escosura has made available several databases on inequality, accessible here, as well as a book on long-term Spanish economic growth, available in open access here.

 

THE FINANCIAL POWER OF THE POWERLESS: Evidence from Ottoman Istanbul on socio-economic status, legal protection and the cost of borrowing

In Ottoman Istanbul, privileged groups such as men, Muslims and other elites paid more for credit than the under-privileged – the exact opposite of what happens in a modern economy.

New research by Professors Timur Kuran (Duke University) and Jared Rubin (Chapman University), published in the March 2018 issue of the Economic Journal, explains why: a key influence on the cost of borrowing is the rule of law and in particular the extent to which courts will enforce a credit contract.

In pre-modern Turkey, it was the wealthy who could benefit from judicial bias to evade their creditors – and who, because of this default risk, faced higher interest rates on loans. Nowadays, it is under-privileged people who face higher borrowing costs because there are various institutions through which they can escape loan repayment, including bankruptcy options and organisations that will defend poor defaulters as victims of exploitation.

In the modern world, we take it for granted that the under-privileged incur higher borrowing costs than the upper socio-economic classes. Indeed, Americans in the bottom quartile of the US income distribution usually borrow through pawnshops and payday lenders at rates of around 450% per annum, while those in the top quartile take out short-term loans through credit cards at 13-16%. Unlike the under-privileged, the wealthy also have access to long-term credit through home equity loans at rates of around 4%.

The logic connecting socio-economic status to borrowing costs will seem obvious to anyone familiar with basic economics: the higher costs of the poor reflect higher default risk, for which the lender must be compensated.

The new study sets out to test whether the classic negative correlation between socio-economic status and borrowing cost holds in a pre-modern setting outside the industrialised West. To this end, the authors built a data set of private loans issued in Ottoman Istanbul during the period from 1602 to 1799.

These data reveal the exact opposite of what happens in a modern economy: the privileged paid more for credit than the under-privileged. In a society where the average real interest rate was around 19%, men paid an interest surcharge of around 3.4 percentage points; Muslims paid a surcharge of 1.9 percentage points; and elites paid a surcharge of about 2.3 percentage points (see Figure 1).

Figure 1: Interest rate surcharges paid by men, Muslims and elites in Ottoman Istanbul (percentage points).
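
As a back-of-the-envelope illustration of what surcharges of this size mean for a borrower, the sketch below simply adds them to the roughly 19% average real rate. The loan amount, the additive treatment of the surcharges, and the stacking of all three traits are assumptions made for illustration only, not the estimation method of the study.

```python
# Illustrative only: treat the reported surcharges as simple additions to the
# ~19% average real interest rate. Loan size and additivity are assumptions.
BASE_RATE = 0.19
SURCHARGES = {"male": 0.034, "muslim": 0.019, "elite": 0.023}

def annual_interest(principal, *traits):
    """Annual interest due under the additive approximation."""
    rate = BASE_RATE + sum(SURCHARGES[t] for t in traits)
    return principal * rate

loan = 1_000  # illustrative loan size
print(annual_interest(loan))                             # unprivileged borrower: 190.0
print(annual_interest(loan, "male", "muslim", "elite"))  # privileged borrower: ~266.0
```

On these illustrative numbers, a privileged borrower would owe roughly 40% more interest each year than an unprivileged borrower on the same principal.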

What might explain this reversal of relative borrowing costs? Why did socially advantaged groups pay more for credit, not less?

The data led the authors to consider a second factor contributing to the price of credit, often taken for granted: the partiality of the law. Implicit in the logic that explains relative credit costs in modern lending markets is that financial contracts are enforceable impartially when the borrower is able to pay. Thus, the rich pay less for credit because they are relatively unlikely to default and because, if they do, lenders can force repayment through courts whose verdicts are more or less impartial.

But in settings where the courts are biased in favour of the wealthy, creditors will expect compensation for the risk of being unable to obtain restitution. The wealth and judicial partiality effects thus work against each other. The former lowers the credit cost for the rich; the latter raises it.

Islamic Ottoman courts served all Ottoman subjects through procedures that were manifestly biased in favour of clearly defined groups. These courts gave Muslims rights that they denied to Christians and Jews. They privileged men over women.

Moreover, because the courts lacked independence from the state, Ottoman subjects connected to the sultan enjoyed favourable treatment. The theory developed in the new study explains why the weak legal power of the under-privileged could translate into strong financial power: because the courts would reliably enforce contracts against them, they were cheaper to lend to.

More generally, this research suggests that in a free financial market, any hindrance to the enforcement of a credit contract will raise the borrower’s credit cost. Just as judicial biases in favour of the wealthy raise their interest rates on loans, institutions that allow the poor to escape loan repayment – bankruptcy options, shielding of assets from creditors, organisations that defend poor defaulters as victims of exploitation – raise interest rates charged to the poor.

Today, wealth and credit cost are negatively correlated for multiple reasons. The rich benefit both from a higher capacity to post collateral and from better enforcement of their credit obligations relative to those of the poor.

 

To contact the authors:
Timur Kuran (t.kuran@duke.edu); Jared Rubin (jrubin@chapman.edu)

Perpetuating the family name: female inheritance, in-marriage and gender norms

by Duman Bahrami-Rad (Simon Fraser University)

Muslim wedding, Lahore, Pakistan — Frank Horvat, 1952. Available on Pinterest <https://www.pinterest.co.uk/pin/491947959265621479/>

Why is it so common for Muslims to marry their cousins (more than 30% of all marriages in the Middle East)? Why, despite explicit injunctions in the Quran to include women in inheritance, do women in the Middle East generally face unequal gender relations, and why does their labour force participation remain the lowest in the world (less than 20%)?

This study presents a theory, supported by empirical evidence, concerning the historical origins of such marriage and gender norms. It argues that in patrilineal societies that nevertheless mandate female inheritance, cousin marriage becomes a way to preserve property in the male line and prevent fragmentation of land.

In these societies, female inheritance also leads to the seclusion and veiling of women as well as restrictions on their sexual freedom in order to encourage cousin marriages and avoid out-of-wedlock children as potential heirs. The incompatibility of such restrictions with female participation in agriculture has further influenced the historical gender division of labour.

Consistent with these hypotheses, analyses of data on pre-industrial societies, Italian provinces and women in Indonesia show that female inheritance is associated with lower female labour force participation, greater stress on female virginity before marriage, and higher rates of endogamy, consanguinity and arranged marriages.

The study also uses the recent reform of inheritance regulations in India – which greatly enhanced Indian women’s right to inherit property – to provide further evidence of the causal impact of female inheritance. The analysis shows that among women affected by the reform, the rate of cousin marriage is significantly higher, and that of premarital sex significantly lower.

The implications of these findings are important. It is believed that cousin marriage helps create and maintain kinship groups such as tribes and clans, which impair the development of an individualistic social psychology, undermine social trust, large-scale cooperation and democratic institutions, and encourage corruption and conflict.

This study contributes to this literature by highlighting a historical origin of clannish social organisation. It also sheds light on the origins of gender inequality as both a human rights issue and a development issue.