From LSE Business Review – “Turf wars? Placing geographical indications at the heart of international trade”

by David M. Higgins (Newcastle University), originally published on 09 October 2018 on the LSE Business Review

 

When doing your weekly shop, have you ever noticed the small blue/yellow and red/yellow circles that appear on the wrappers of Wensleydale cheese or Parma ham? Such indicia are examples of geographical indications (GIs), or appellations: they show that a product possesses certain attributes (taste, smell, texture) that are unique to it and can only be derived from a tightly demarcated and fiercely protected geographical region. The relationship between product attributes and geography can be summed up in one word: terroir. These GIs became an important part of the EU’s agricultural policy launched in 1992, represented by the PDO (Protected Designation of Origin) and PGI (Protected Geographical Indication) logos, which sought to insulate EU farmers from the effects of globalisation by encouraging them to produce ‘quality’ products that were unique.

GIs have a considerable lineage: legislation enacted in 1666 reserved the sole right to ‘Roquefort’ to cheese cured in the caves at Roquefort. Until the later nineteenth century, domestic legislation was the primary means by which GIs were protected from misrepresentation. Thereafter, the rapid acceleration of international trade necessitated global protocols, the most important of which were the Paris Convention for the Protection of Industrial Property (1883) and its successors, including the Madrid Agreement for the Repression of False or Deceptive Indications of Source on Goods (1891).

Full article here: http://blogs.lse.ac.uk/businessreview/2018/10/09/turf-wars-placing-geographical-indications-at-the-heart-of-international-trade/

 

Britain’s post-Brexit trade: learning from the Edwardian origins of imperial preference

by Brian Varian (Swansea University)

Imperial Federation, map of the world showing the extent of the British Empire in 1886. Wikimedia Commons

In December 2017, Liam Fox, the Secretary of State for International Trade, stated that ‘as the United Kingdom negotiates its exit from the European Union, we have the opportunity to reinvigorate our Commonwealth partnerships, and usher in a new era where expertise, talent, goods, and capital can move unhindered between our nations in a way that they have not for a generation or more’.

As policy-makers and the public contemplate a return to the halcyon days of the British Empire, there is much to be learned from those past policies that attempted to cultivate trade along imperial lines. Let us consider the effect of the earliest policies of imperial preference: policies enacted during the Edwardian era.

In the late nineteenth century, Britain was the bastion of free trade, imposing tariffs on only a very narrow range of commodities. Consequently, Britain’s free trade policy afforded barely any scope for applying lower or ‘preferential’ duties to imports from the Empire.

The self-governing colonies of the Empire possessed autonomy in tariff-setting and, with the notable exception of New South Wales, did not emulate the mother country’s free trade policy. In the 1890s and 1900s, when the emergent industrial nations of Germany and the United States reduced Britain’s market share in these self-governing colonies, there was indeed scope for applying preferential duties to imports from Britain, in the hope of diverting trade back toward the Empire.

Trade policies of imperial preference were implemented in succession by Canada (1897), the South African Customs Union (1903), New Zealand (1903) and Australia (1907). By the close of the first era of globalisation in 1914, Britain enjoyed some margin of preference in all of the Dominions. Yet my research, a case study of New Zealand, casts doubt on the effectiveness of these policies at raising Britain’s share of the Dominions’ imports.

Unlike the policies of the other Dominions, New Zealand’s policy applied preferential duties to only selected commodity imports (44 out of 543). This cross-commodity variation in the application of preference is useful for estimating the effect of preference. I find that New Zealand’s Preferential and Reciprocal Trade Act of 1903 had no effect on the share of the Empire, or of Britain specifically, in New Zealand’s imports.
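
To make the identification idea concrete, here is a minimal, purely hypothetical sketch (invented numbers and variable names, not the paper’s data or code) of a difference-in-differences comparison across commodities: Britain’s import share in preferred versus non-preferred items, before and after the 1903 Act.

    # Hypothetical illustration of the cross-commodity comparison described above.
    # The data and figures are invented; the paper's actual estimation differs.
    import pandas as pd
    import statsmodels.formula.api as smf

    panel = pd.DataFrame({
        "commodity":     ["woollens", "woollens", "machinery", "machinery"],
        "year":          [1900, 1906, 1900, 1906],
        "preferred":     [1, 1, 0, 0],          # covered by the 1903 Act?
        "british_share": [0.72, 0.73, 0.55, 0.54],
    })
    panel["post"] = (panel["year"] > 1903).astype(int)

    # The coefficient on preferred:post is the difference-in-differences
    # estimate of the effect of preference on Britain's import share.
    fit = smf.ols("british_share ~ preferred * post", data=panel).fit()
    print(fit.params["preferred:post"])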

Why was the policy ineffective at raising Britain’s share of New Zealand’s imports? There are several likely reasons: that Britain’s share was already quite large; that some imported commodities were highly differentiated and certain varieties were only produced in other industrial countries; and, most importantly, that the margin of preference – the extent to which duties were lower for imports from Britain – was too small to effect any trade diversion.

As Britain considers future trade agreements, perhaps with Commonwealth countries, it should be remembered that a trade agreement does not necessarily entail a great, or even any, increase in trade. The original policies of imperial preference were rather symbolic measures and, at least in the case of New Zealand, economically inconsequential.

Brexit might well present an ‘opportunity to reinvigorate our Commonwealth partnerships’, but would that be a reinvigoration in substance or in appearance?

London fog: a century of pollution and mortality, 1866-1965

by Walker Hanlon (UCLA)

Photogravure by Donald Macleish from Wonderful London by St John Adcock, 1927. Available at <https://www.flickr.com/photos/norfolkodyssey/23695833473>

For more than a century, London struggled with some of the worst air pollution on earth. But how much did air pollution affect health in London? How did these effects change as the city developed? Can London’s long experience teach us lessons that are relevant for modern cities, from Beijing to New Delhi, that are currently struggling with their own air pollution problems?

To answer these questions, I study the effects of air pollution in London across a full century, from 1866 to 1965. Using new data, I show that air pollution was a major contributor to mortality in London over this period – accounting for at least one out of every 200 deaths.

As London developed, the impact of air pollution changed. In the nineteenth century, Londoners suffered from a range of infectious diseases, including respiratory diseases like measles and tuberculosis. I show that being exposed to high levels of air pollution made these diseases deadlier, while the presence of these diseases made air pollution more harmful. As a result, when public health and medical improvements reduced the prevalence of these infectious diseases, they also lowered the mortality cost of pollution exposure.

This finding has implications for modern developing countries. It tells us that air pollution is likely to be more deadly in the developing world, but also that investments that improve health in other ways can lower the health costs of pollution exposure.

An important challenge in studying air pollution in the past is that direct pollution measures were not collected in a consistent way until the mid-twentieth century. To overcome this challenge, this study takes advantage of London’s famous fog events, which trapped pollution in the city and substantially increased exposure levels.

While some famous fog events are well known – such as the Great Fog of 1952 or the Cattle Show Fog of 1873, which killed the Queen’s prize bull – London experienced hundreds of lesser-known events over the century I study. By reading weather reports from the Greenwich Observatory covering over 26,000 days, we identified every day in which heavy fog occurred.

To study how these fog events affected health, I collected detailed new mortality data describing deaths in London at the weekly level. Digitised from original sources, and covering over 350,000 observations, this new data set opens the door to a more detailed analysis of London’s mortality experience than has previously been possible.
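
As a purely illustrative sketch of how such data could be put to work (hypothetical numbers and variable names, not the study’s own code or results), one might merge a weekly count of heavy-fog days with the weekly death counts and estimate a simple regression:

    # Illustrative only: invented weekly data linking deaths to heavy-fog days
    # identified from weather reports, with a crude seasonal control.
    import pandas as pd
    import statsmodels.formula.api as smf

    weekly = pd.DataFrame({
        "week_ending": pd.to_datetime(
            ["1873-11-29", "1873-12-06", "1873-12-13", "1873-12-20"]),
        "deaths":   [1490, 1520, 2030, 1610],   # made-up counts
        "fog_days": [0, 0, 4, 1],               # heavy-fog days that week
    })
    weekly["month"] = weekly["week_ending"].dt.month

    # Association between fog exposure and weekly deaths
    fit = smf.ols("deaths ~ fog_days + C(month)", data=weekly).fit()
    print(fit.params["fog_days"])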

These new mortality data allow me to analyse the effects of air pollution from a variety of different angles. I provide new evidence on how the effects of air pollution varied across age groups, how those effects evolved over time, and how pollution interacted with infectious diseases and other causes of death. This enriches our understanding of London’s history while opening up a range of new possibilities for studying the impact of air pollution over the long run.

The UK’s unpaid war debts to the United States, 1917-1980

by David James Gill (University of Nottingham)

Trenches in World War I. From <www.express.co.uk>

We all think we know the consequences of the Great War – from the millions of dead to the rise of Nazism – but the story of the UK’s war debts to the United States remains largely untold.

In 1934, the British government defaulted on these loans, leaving unpaid debts exceeding $4 billion. The UK decided to cease repayment 18 months after France had defaulted on its war debts, making one full and two token repayments prior to Congressional approval of the Johnson Act, which prohibited further partial payments.

Economists and political scientists typically attribute such hesitation to concerns about economic reprisals or the costs of future borrowing. Historians have instead stressed that delay reflected either a desire to protect transatlantic relations or a naive hope for outright cancellation.

Archival research reveals that the British cabinet’s principal concern was that many states owing money to the UK might use its default on war loans as an excuse to cease repayment on their own debts. In addition, ministers feared that refusal to pay would profoundly shock a large section of public opinion, thereby undermining the popularity of the National government. Eighteen months of continued repayment therefore provided the British government with more time to manage these risks.

The consequences of the UK’s default have attracted curiously limited attention. Economists and political scientists tend to assume dire political costs to incumbent governments as well as significant short-term economic shocks in terms of external borrowing, international trade, and the domestic economy. None of these consequences apply to the National government or the UK in the years that followed.

Most historians consider these unpaid war debts to be largely irrelevant to the course of domestic and international politics within five years. Yet archival research reveals that they continued to play an important role in British and American policy-making for at least four more decades.

During the 1940s, the issue of the UK’s default arose on several occasions, most clearly during negotiations concerning Lend-Lease and the Anglo-American loan, fuelling Congressional resistance that limited the size and duration of American financial support.

Successive American administrations also struggled to resist growing Congressional pressure to use these unpaid debts as a diplomatic tool to address growing balance of payments deficits from the 1950s to the 1970s. In addition, British default presented a formidable legal obstacle to the UK’s return to the New York bond market in the late 1970s, threatening to undermine the efficient refinancing of the government’s recent loans from the International Monetary Fund.

The consequences of the UK’s default on its First World War debts to the United States were therefore longer lasting and more significant to policy-making on both sides of the Atlantic than widely assumed.

 

Judges and the death penalty in Nazi Germany: New research evidence on judicial discretion in authoritarian states

The German People’s Court. Available at https://www.foreignaffairs.com/reviews/review-essay/good-germans

Do judicial courts in authoritarian regimes act as puppets for the interests of a repressive state – or do judges act with greater independence? How much do judges draw on their political and ideological affiliations when imposing the death sentence?

A study of Nazi Germany’s notorious People’s Court, recently published in the Economic Journal, provides direct empirical evidence of how judges in one of the world’s most politicised courts were influenced in their life-and-death decisions.

The research provides important empirical evidence that the political and ideological affiliations of judges do come into play – a finding that has applications for modern authoritarian regimes and also for democracies that administer the death penalty.

The research team – Dr Wayne Geerling (University of Arizona), Prof Gary Magee, Prof Russell Smyth, and Dr Vinod Mishra (Monash Business School) – explore the factors influencing the likelihood of imposing the death sentence in Nazi Germany for crimes against the state – treason and high treason.

The authors examine data compiled from official records of individuals charged with treason and high treason who appeared before the People’s Courts up to the end of the Second World War.

Established by the Nazis in 1934 to hear cases of serious political offences, the People’s Courts have been vilified as ‘blood tribunals’ in which judges meted out pre-determined sentences.

But in recent years a more nuanced assessment has emerged – one that does not contend that the People’s Court’s judgments were impartial or that its judges were not subservient to the wishes of the regime.

For the first time, the new study presents direct empirical evidence of the reasons behind the use of judicial discretion and why some judges appeared more willing to implement the will of the state than others.

The researchers find that judges with a deeper ideological commitment to Nazi values – typified by being members of the Alte Kampfer (‘Old Fighters’ or early members of the Nazi party) – were indeed more likely to impose the death penalty than those who did not share it.

These judges were more likely to hand down death penalties to members of the most organised opposition groups, those involved in violent resistance against the state and ‘defendants with characteristics repellent to core Nazi beliefs’:

‘The Alte Kampfer were thus more likely to sentence devout Roman Catholics (24.7 percentage points), defendants with partial Jewish ancestry (34.8 percentage points), juveniles (23.4 percentage points), the unemployed (4.9 percentage points) and foreigners (42.3 percentage points) to death.’

Judges who came of age during two distinct historical periods – the Revolution of 1918-19 and the hyperinflation of June 1921 to January 1924, both of which may have shaped their views of Nazism – were also more likely to impose the death sentence.

Alte Kampfer members whose hometown or suburb lay near a centre of the Revolution of 1918-19 were likewise more likely to sentence a defendant to death.

Previous economic research on sentencing in capital cases has focused mainly on gender and racial disparities, typically in the United States, and understanding of what determines whether courts in modern authoritarian regimes impose the death penalty is scant. By studying a politicised court in an historically important authoritarian state, the authors of the new study shed light on sentencing in authoritarian states more generally.

The findings are important because they offer insights into the practical realities of judicial empowerment, providing rare empirical evidence on how the exercise of judicial discretion in authoritarian states is reflected in sentencing outcomes.

To contact the authors:
Russell Smyth (russell.smyth@monash.edu)

Decimalising the pound: a victory for the gentlemanly City against the forces of modernity?

by Andy Cook (University of Huddersfield)

 

1813 guinea

Some media commentators have identified the decimalisation of the UK’s currency in 1971 as the start of a submerging of British identity. For example, writing in the Daily Mail, Dominic Sandbrook characterises it as ‘marking the end of a proud history of defiant insularity and the beginning of the creeping Europeanisation of Britain’s institutions.’

This research, based on Cabinet papers, Bank of England archives, Parliamentary records and other sources, reveals that this interpretation is spurious and reflects more modern preoccupations with the arguments that dominated much of the Brexit debate, rather than the actual motivation of key players at the time.

The research examines arguments made by the proponents of alternative systems based on either decimalising the pound, or creating a new unit worth the equivalent of 10 shillings. South Africa, Australia and New Zealand had all recently adopted a 10-shilling unit, and this system was favoured by a wide range of interest groups in the UK, representing consumers, retailers, small and large businesses, and media commentators.

Virtually a lone voice lobbying for retention of the pound was the City of London, and its arguments, articulated by the Bank of England, were based on a traditional attachment to the international status of sterling. These arguments were accepted, both by the Committee of Enquiry on Decimal Currency, which reported in 1963, and, in 1966, by a Labour government headed by Harold Wilson, who shared the City’s emotional attachment to the pound.

Yet by 1960, the UK faced the imminent prospect of being virtually the only country retaining non-decimal coinage. Most key economic players agreed that decimalisation was necessary; the only significant bone of contention was the choice of system.

Most informed opinion favoured a new major unit equivalent to 10 shillings, as reflected in the evidence given by retailers and other businesses to the Committee of Enquiry on Decimal Currency, and in the formation of a Decimal Action Committee by the Consumers’ Association to press for such a system.

The City, represented by the Bank of England, was implacably opposed to such a system, arguing that the pound’s international prestige was crucial to underpinning the position of the City as a leading financial centre. This assertion was not evidence-based, and internal Bank documents acknowledge that their argument was ‘to some extent based on sentiment’.

This sentiment was shared by Harold Wilson, whose government announced the decision to introduce decimal currency based on the pound in 1966. Five years earlier, he had made an emotional plea to keep the pound, arguing that ‘the world will lose something if the pound disappears from the markets of the world’.

Far from marking the end of ‘defiant insularity’, the decision to retain a basic currency unit of higher value than that of any other major economy – rather than adopting one closer in value to the US dollar or to the even lower-value European currencies – reflected the desire of the City and the government to maintain a distinctive symbol of Britishness, the pound, overcoming opposition from interests with more practical concerns.

British perceptions of German post-war industrial relations

By Colin Chamberlain (University of Cambridge)

Some 10,000 steel workers demonstrate in Stuttgart, 11th January 1962. Picture alliance/AP Images, available at <https://www.gewerkschaftsgeschichte.de/1953-schwerpunkt-tarifpolitik.html>

‘Almost idyllic’ – this was the view of one British commentator on the state of post-war industrial relations in West Germany. No one could say the same about British industrial relations. Here, industrial conflict grew inexorably from year to year, forcing governments to expend ever more effort on preserving industrial peace.

Deeply frustrated, successive governments alternated between appeasing trade unionists and threatening them with new legal sanctions in an effort to improve their behaviour, thereby avoiding the fundamental issue of their institutional structure. If the British had only studied the German ‘model’ of industrial relations more closely, they would have understood better the reforms that needed to be made.

Britain’s poor state of industrial relations was a major, if not the major, factor holding back Britain’s economic growth, which was regularly less than half the rate in Germany, not to speak of the chronic inflation and balance of payments problems that only made matters worse. So, how come the British did not take a deeper look at the successful model of German industrial relations and learn any lessons?

Ironically, the British were in control of Germany at the time the trade union movement was re-establishing itself after the war. The Trades Union Congress and the British labour movement offered much goodwill and help to the Germans in their task.

But German trade unionists had very different ideas to the British trade unions on how to go about organising their industrial relations, ideas that the British were to ignore consistently over the post-war period. These included:

    • In Britain, there were hundreds of trade unions, but in Germany, there were only 16 re-established after the war, each representing one or more industries, thereby avoiding the demarcation disputes so common in Britain.
    • Terms and conditions were negotiated on this industry basis by strong, well-funded trade unions, which welcomed the fact that their two- or three-year collective agreements were legally enforceable in Germany’s system of industrial courts.
    • Trade unions were not involved in workplace grievances and disputes. These were left to employees and managers, meeting together in Germany’s highly successful works councils, to resolve informally, alongside consultative exercises on working practices and company reorganisations. As a result, German companies did not seek to lay off staff, as British companies did, on any fall in demand, but rather to retrain and reallocate them.

British trade unions pleaded that their very untidy institutional structure with hundreds of competing trade unions was what their members actually wanted and should therefore be outside any government interference. The trade unions jealously guarded their privileges and especially rejected any idea of industry-based unions, legally enforceable collective agreements and works councils.

A heavyweight Royal Commission was appointed, but after three years’ deliberation, it came up with little more than the status quo. It was reluctant to study any ideas emanating from Germany.

While the success of industrial relations in Germany was widely recognised in Britain, there was little understanding of why this was so, or indeed much interest in it. The British were deeply conservative about the ‘institutional shape’ of industrial relations and feared putting forward any radical German ideas. Britain was therefore at a considerable disadvantage when it came to creating modern trade unions operating in a modern state.

So, what economic price the failure to sort out the institutional structure of the British trade unions?

From VoxEU – Wellbeing inequality in retrospect

Rising trends in GDP per capita are often interpreted as reflecting rising levels of general wellbeing. But GDP per capita is at best a crude proxy for wellbeing, neglecting important qualitative dimensions.

via Wellbeing inequality in retrospect — VoxEU.org: Recent Articles

To elaborate further on the topic, Prof. Leandro Prados de la Escosura has made available several databases on inequality, accessible here, as well as a book on long-term Spanish economic growth, available open access here

 

Winning the capital, winning the war: retail investors in the First World War

by Norma Cohen (Queen Mary University of London)

 

‘Put it into National War Bonds’, National War Savings Committee. McMaster University Libraries, Identifier: 00001792. Available at Wikimedia Commons

The First World War brought about an upheaval in British investment, forcing savers to repatriate billions of pounds held abroad and attracting new investors among those living far from London, this research finds. The study also points to declining inequality between Britain’s wealthiest classes and the middle class, and rising purchasing power among the lower middle classes.

The research is based on samples from ledgers of investors in successive War Loans. These are lodged in archives at the Bank of England and have been closed for a century. The research covers roughly 6,000 samples from three separate sets of ledgers of investors between 1914 and 1932.

While the First World War is recalled as a period of national sacrifice and suffering, the reality is that war boosted Britain’s output. Sampling from the ledgers points to the extent to which war unleashed the industrial and engineering innovations of British industry, creating and spreading wealth.

Britain needed capital to ensure it could outlast its enemies. As the world’s largest capital exporter by 1914, the nation imposed increasingly tight measures on investors to ensure capital was used exclusively for war.

While London was home to just over half the capital raised in the first War Loan in 1914, that had fallen to just under 10% of capital raised in the years after. In contrast, the North East, North West and Scotland – home to the mining, engineering and shipbuilding industries – provided 60% of the capital by 1932, up from a quarter of the total raised by the first War Loan.

The concentration of investor occupations also points to profound social changes fostered by war. Men describing themselves as ‘gentleman’ or ‘esquire’ – titles accorded those wealthy enough to live on investment returns – accounted for 55% of retail investors for the first issue of War Loan. By the post-war years, these were 37% of male investors.

In contrast, skilled labourers – blacksmiths, coal miners and railway signalmen among others – were 9.0% of male retail investors in the post-war years, up from 4.9% in the first sample.

Suppliers of war-related goods may not have been the main beneficiaries of newly-created wealth. The sample includes large investments by those supplying consumer goods sought by households made better off by higher wages, steady work and falling unemployment during the war.

During and after the war, these sectors were accused of ‘profiteering’, sparking national indignation. Nearly a quarter of investors in 5% War Loan listing their occupations as ‘manufacturer’ were producing boots and leather goods, a sector singled out during the war for excess profits. Manufacturers in the final sample produced mineral water, worsteds, jam and bread.

My findings show that War Loan was widely held by households likely to have had relatively modest wealth; while the largest concentration of capital remained in the hands of relatively few, larger numbers had a small stake in the fate of the War Loans.

In the post-war years, over half of male retail investors held £500 or less. This may help to explain why efforts to pay for war by taxing wealth as well as income – a debate that echoes today – proved so politically challenging. The rentier class on whom additional taxation would have been levied may have been more of a political construct by 1932 than an actual presence.

 

The impact of malaria on early African development: evidence from the sickle cell trait

Poster: “Keep out malaria mosquitoes, repair your torn screens”. U.S. Public Health Service, 1941–45

While malaria historically claimed millions of African lives, it did not hold back the continent’s economic development. That is one of the findings of new research by Emilio Depetris-Chauvin (Pontificia Universidad Católica de Chile) and David Weil (Brown University), published in the Economic Journal.

Their study uses data on the prevalence of the gene that causes sickle cell disease to estimate death rates from malaria for the period before the Second World War. They find that in parts of Africa with high malaria transmission, one in ten children died from malaria or sickle cell disease before reaching adulthood – a death rate more than twice the current burden of malaria in these regions.

 

According to the World Health Organization, the malaria mortality rate declined by 29% between 2010 and 2015. This was a major public health accomplishment, although with 429,000 annual deaths, the disease remains a terrible scourge.

Countries where malaria is endemic are also, on average, very poor. This correlation has led economists to speculate about whether malaria is a driver of poverty. But addressing that issue is difficult because of a lack of data. Poverty in the tropics has long historical roots, and while there are good data on malaria prevalence in the period since the Second World War, there is no World Malaria Report for 1900, 1800 or 1700.

Biologists only came to understand the nature of malaria in the late nineteenth century. Even today, trained medical personnel have trouble distinguishing between malaria and other diseases without the use of microscopy or diagnostic tests. Accounts from travellers and other historical records provide some evidence of the impact of malaria going back millennia (Akyeampong 2006; Mabogunje and Richards 1985), but these are hardly sufficient to draw firm conclusions.

This study addresses the lack of information on malaria’s impact historically by using genetic data. In the worst afflicted areas, malaria left an imprint on the human genome that can be read today.

Specifically, the researchers look at the prevalence of the gene that causes sickle cell disease. Carrying one copy of this gene provided individuals with a significant level of protection against malaria, but people who carried two copies of the gene died before reaching reproductive age.

Thus, the degree of selective pressure exerted by malaria determined the equilibrium prevalence of the gene in the population. By measuring the prevalence of the gene in modern populations, it is possible to back out estimates of the severity of malaria historically.

In areas of high malaria transmission, 20% of the population carries the sickle cell trait. The researchers estimate that this implies that, historically, 10-11% of children died from malaria or sickle cell disease before reaching adulthood – a death rate more than twice the current burden of malaria in these regions.
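
A rough sense of the logic can be conveyed with a back-of-the-envelope sketch (mine, not the paper’s model): assume Hardy-Weinberg genotype proportions, complete malaria protection for carriers, and that all children with two copies of the gene die of sickle cell disease. Under these simplifications, the 20% carrier share pins down the implied historical death rate.

    # Back-of-the-envelope sketch, not the paper's model. Assumptions: Hardy-
    # Weinberg proportions, carriers (AS) fully protected from malaria, and
    # all SS children dying of sickle cell disease before adulthood.
    from math import sqrt

    carrier_share = 0.20              # share of the population carrying the trait (AS)
    # solve 2q(1-q) = carrier_share for the sickle allele frequency q
    q = (1 - sqrt(1 - 2 * carrier_share)) / 2
    p = 1 - q

    # at a balanced polymorphism q = s / (s + t); with t = 1 (SS fatal),
    # the implied malaria death rate among non-carriers (AA) is:
    s = q / (1 - q)

    malaria_deaths = p**2 * s         # malaria deaths among AA children
    sickle_deaths = q**2              # sickle cell deaths among SS children
    print(f"allele frequency q = {q:.3f}")
    print(f"implied child deaths from malaria or sickle cell: "
          f"{malaria_deaths + sickle_deaths:.1%}")

Run with the 20% figure quoted above, this crude calculation gives an allele frequency of about 0.11 and a combined child death rate of roughly 11% – in the same range as the study’s estimate, though the paper’s own model is considerably more careful.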

Comparing the most affected areas with those least affected, malaria may have been responsible for a ten percentage point difference in the probability of surviving to adulthood. In areas of high malaria transmission, the researchers estimate that life expectancy at birth was reduced by approximately five years.

Having established the magnitude of malaria’s mortality burden, the researchers then turn to its economic effects. Surprisingly, they find little reason to believe that malaria held back development. A simple life cycle model suggests that the disease was not very important, primarily because the vast majority of deaths that it caused were among the very young, in whom society had invested few resources.

This model-based finding is corroborated by the findings of a statistical examination. Within Africa, areas with higher malaria burden, as evidenced by the prevalence of the sickle cell trait, do not show lower levels of economic development or population density in the colonial era data examined in this study.

 

To contact the authors:  David Weil, david_weil@brown.edu