Recurring growth without industrialisation: occupational structures in northern Nigeria, 1921-2006

by Emiliano Travieso (University of Cambridge)

 

InStove factory workers in Nigeria. Available at <http://www.instove.org/node/59>

Despite recent decades of economic growth, absolute poverty is on the rise in Nigeria, as population increases continue to outpace reductions in poverty rates. Manufacturing industries, which have the potential to absorb large numbers of workers into better-paying jobs, have expanded only very modestly, and most workers remain employed in low-productivity sectors (such as the informal urban economy and subsistence agriculture). 

This scenario is particularly stark in the northern states, which are home to more than half of the national population and where poverty rates are at their highest. Northern Nigeria is the largest region of the most populous nation on the continent (and is itself three times as large as any other West African country), so quantifying and qualifying its past economic development is crucial for discussing the prospects for structural change and poverty alleviation in sub-Saharan Africa. 

 My research traces the major shifts in the economy of northern Nigeria during and since colonial rule through a detailed study of occupational structures, based on colonial and independence-era censuses and other primary sources. 

While the region has a long history of handicraft production – under the nineteenth-century Sokoto Caliphate it became the largest textile producer in sub-Saharan Africa – northern Nigeria deindustrialised during British indirect rule. Partly as a result of the expansion of export agriculture (mainly of groundnuts and, to a lesser extent, cotton), the share of the workforce in manufacturing decreased from 18% to 7% over the last four decades of the colonial period. 

 After independence in 1960, growth episodes were led by transport, urban services and government expenditure fuelled by oil transfers from the southeast of the country, but did not spur significant structural change in favour of manufacturing. By 2006, the share of the workforce in manufacturing had risen only slightly: to 8%. 

 In global economic history, poverty alleviation has often resulted from a previous period of systematic movement of labour from low- to high-productivity sectors. The continued expansion of manufacturing achieved just that during the Industrial Revolution in the West and, in the twentieth century, in many parts of the Global South. 

 In large Asian and Latin American economies, late industrialisation sustained impressive achievements in terms of job creation and poverty alleviation. In cases such as Brazil, Mexico and China, large domestic markets, fast urbanisation and improvements in education contributed decisively to lifting millions of people out of poverty. 

Can northern Nigeria, with its large population, deep historical manufacturing roots and access to the largest national market in Africa, develop into a late industrialiser in the twenty-first century? My study suggests that rapid demographic growth will not necessarily result in structural change, but that, through improved market integration and continued expansion of education, the economy could harness the skills and energy of its rising population to produce a more impressive expansion of manufacturing than we have yet seen. 

Delusions of competence: the near-death of Lloyd’s of London 1980-2002

by Robin Pearson (University of Hull) 
This paper was presented at the EHS Annual Conference 2019 in Belfast. 

Rapid structural change resulting from system collapse seems to be a less common phenomenon in insurance than in the history of other financial services. One notable exception is the crisis that rocked Lloyd’s of London, the world’s oldest continuous insurance market, in the late twentieth century. 

Hitherto, explanations for the crisis have focused on catastrophic losses and problems of internal governance. My study argues that while these factors were important, they might not have resulted in institutional collapse had it not been for multiple delusions of competence among the various parties involved. 

Lloyd’s was a self-governing market that comprised investors – known as names – who put up their personal assets to back the insurance written on their behalf, and accepted unlimited individual liability for losses. Names were organised into syndicates led by an underwriter and a managing agency. Business could only be brought to syndicates by brokers licensed by Lloyd’s. Large broking firms owned most of the managing agencies and thereby controlled the syndicates, giving rise to serious conflicts of interest.  

In 1970, Lloyd’s resolved to expand capacity by lowering property qualifications for new names. As a result, the membership exploded from 6,000 to over 32,000 by 1988. Many new names were less well-heeled than their predecessors and largely ignorant of the insurance business. Despite a series of scandals involving underwriters siphoning off syndicate funds for their own personal use, the number of entrants kept rising thanks to double-digit investment returns, the tax advantages of membership, and aggressive recruiting. 

While capacity was increasing, underwriters competed vigorously to write long-tail liability and catastrophe business in the form of excess of loss (XL) reinsurance. Under these contracts, the reinsurer agreed to indemnify the reinsured in the event of the latter sustaining a loss in excess of a pre-determined figure. The reinsurer in turn usually retroceded (laid off) some of the amount reinsured to another insurer. 
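The payout arithmetic under such a contract is simple. Here is a minimal sketch in Python (the attachment point, limit and loss figures are invented for illustration):

    # Excess of loss (XL): the reinsurer pays the portion of the loss above
    # the attachment point, capped at the contract limit.
    def xl_payout(loss, attachment, limit):
        return min(max(loss - attachment, 0.0), limit)

    # A layer attaching at 5m with 20m of cover (invented figures):
    print(xl_payout(3_000_000, 5_000_000, 20_000_000))   # 0: loss below attachment
    print(xl_payout(12_000_000, 5_000_000, 20_000_000))  # 7,000,000
    print(xl_payout(40_000_000, 5_000_000, 20_000_000))  # 20,000,000: capped at the limit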

Many Lloyd’s underwriters went into this market despite having little experience of the business. Some syndicates doing XL reinsurance retroceded to other XL syndicates, so that instead of the risks being dispersed, they circulated around the same market, becoming increasingly opaque and concentrated in a few syndicates. This became the infamous London Market Excess of Loss (LMX) spiral. 
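A toy calculation shows why such a closed loop concentrates rather than disperses risk: one original loss generates a chain of claims, and can land back on a syndicate that has already paid out. All figures below are invented; this is a sketch of the mechanism, not a model of any actual syndicate’s book.

    # Toy LMX-style spiral: each layer retrocedes within the same market,
    # so a single loss is claimed again and again instead of being passed
    # to outside carriers. All figures are invented.
    def xl_payout(loss, attachment, limit):
        return min(max(loss - attachment, 0.0), limit)

    original_loss = 100.0
    # Successive retrocession layers: (carrier, attachment point, limit).
    layers = [("Syndicate B", 10.0, 90.0),
              ("Syndicate C", 20.0, 80.0),
              ("Syndicate D", 30.0, 70.0),
              ("Syndicate B again", 35.0, 60.0)]

    claim, gross_claims = original_loss, 0.0
    for name, attachment, limit in layers:
        recovered = xl_payout(claim, attachment, limit)
        gross_claims += recovered
        print(f"{name} pays {recovered:.0f} on a claim of {claim:.0f}")
        claim = recovered  # the recovery is itself reinsured further up the spiral

    print(f"One loss of {original_loss:.0f} generated {gross_claims:.0f} of gross claims")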

By 1990, over one quarter of business at Lloyd’s was XL reinsurance. The spiral offered brokers, underwriters and managing agents the opportunity to earn commission and fees on every reinsurance and retrocession written. 

It also enabled underwriters to arbitrage the differential between the premiums they charged for the original insurance and the lower premiums they paid for reinsurance and retrocessions. A later inquiry also showed that those writing at the top of the spiral accepted, out of ignorance or carelessness, premium rates that were far too low for the higher layers, in the belief that these were virtually risk-free. 

Unscrupulous underwriters could also offload the worst risks onto ‘dustbin’ syndicates of outsider names, while picking the best risks to be reinsured with so-called ‘baby’ syndicates of insiders. Poor information recording made it difficult to track the risks insured in the LMX spiral. 

Lloyd’s membership peaked in 1988, which also marked the first of five years of unprecedented losses. Long-tail risks on liability insurance generated many of the losses, as did a series of storms, earthquakes, hurricanes, oil industry disasters and the Gulf war. Asbestosis and industrial pollution claims in the United States poured in, some from policies dating as far back as the 1930s. 

The tsunami of claims overwhelmed Lloyd’s. Groups of names resisted calls and sued on the grounds that Lloyd’s market supervision had failed. Most political opinion moved towards accepting the need for fundamental reform, despite a fierce rearguard action from traditionalists.  

In 1993, for the first time in its history, Lloyd’s permitted the entry of corporate investors with limited liability, and these soon accounted for 80% of market capacity. The number of individual names collapsed. A vehicle was created – Equitas – to reinsure all liabilities incurred prior to 1993, funded by a levy on members. 

In 1996, Lloyd’s achieved a £3.1 billion settlement with its litigants. In 1998, the new Labour government announced that Lloyd’s would be independently regulated by the Financial Services Authority. 

Studies of decision-making under uncertainty and the fallacies of experts are helpful in explaining the behaviour at Lloyd’s revealed by the crisis, which included arrogance, elitism, greed, corruption and stubborn resistance to reform in defence of vested interests. Politically entrenched ideas about the virtues of self-regulation, and an exaggerated faith in the ability of insider experts to know what was best for the institution, also played a role. 

The practice of syndicate underwriters ‘following’ the premium rate set by a recognised ‘lead’ underwriter reinforced behavioural traits such as herding, the desire to avoid being an outlier in one’s predictions; ‘cognitive dissonance’, the inability to know the limits of one’s expertise; overconfidence and optimistic bias. 

The combined effect of these behaviours on XL underwriting at Lloyd’s was a heightened tendency to ignore ‘black swans’, the unknown or unimagined events that can deliver catastrophic losses. There are obvious parallels with the behaviour of investors in the market for sub-prime mortgage default risk, the collapse of which brought about the global financial crisis of 2007/08. 

 

An Efficient Market? Going Public in London, 1891-1911

by Sturla Fjesme (Oslo Metropolitan University), Neal Galpin (Monash University Melbourne), Lyndon Moore (University of Melbourne)

This article is published in The Economic History Review and is available on the EHS website.

 

Antique print of the London Stock Exchange. Available at <https://www.ashrare.com/stock_exchange_prints.html>

The British at a disadvantage?
It has been claimed that British capital markets were unwelcoming to new and technologically advanced companies in the late 1800s and early 1900s. Allegedly, markets in the U.S. and Germany were far more developed in providing capital for growing research and development (R&D) companies whereas British capital markets favored older companies in more mature industries, leaving new technology companies at a great disadvantage.
In the article ‘An Efficient Market? Going Public in London, 1891-1911’, we investigate this claim using detailed investment data on all the companies that listed publicly in the U.K. over the period 1891 to 1911. By combining company prospectuses, which provide issuer information such as industry, patenting activity, and company age, with those companies’ investors, we investigate whether certain company types were left at a disadvantage. For a total of 339 companies (out of 611 new listings), we obtain share prices, prospectuses, and detailed investor information on name and occupation.

A welcoming exchange
Contrary to prior expectations, we find that the London Stock Exchange (LSE) was very welcoming to young, technologically advanced, and foreign companies. Table 1 shows that new listings came from a great variety of industries, were often categorized as new-technology, and that almost half of the companies listed were foreign. We find that 81% of the new-technology and 84% of the old-technology firms that applied for an official quotation of their shares were accepted by the LSE listing committee. There is therefore no evidence that the LSE treated new or foreign companies differently.

Table 1. IPOs by Industry

Industry IPOs Old-Tech New-Tech Domestic Foreign
Banks and Discount Companies 4 4 0 0 4
Breweries and Distilleries 13 13 0 12 1
Commercial, Industrial, &c. 155 137 18 125 30
Electric Lighting & Power 11 0 11 9 2
Financial, Land and Investment 23 23 0 2 21
Financial Trusts 12 12 0 8 4
Insurance 7 7 0 7 0
Iron, Coal and Steel 20 20 0 20 0
Mines 8 8 0 0 8
Nitrate 3 3 0 0 3
Oil 11 11 0 0 11
Railways 10 9 1 5 5
Shipping 3 3 0 3 0
Tea, Coffee and Rubber 48 48 0 0 48
Telegraphs and Telephones 3 1 2 1 2
Tramways and Omnibus 6 0 6 5 1
Water Works 2 2 0 1 1
Total 339 301 38 198 141

Note: We group firms by industry, according to their classification by the Stock Exchange Daily Official List.

We also find that investors treated disparate companies similarly. British investors were willing to place their money in young and old, high- and low-technology, and domestic and foreign firms without charging large price discounts to do so. We do, however, find that investors who worked in the same industry or lived close to where a company operated were able to use their superior information to obtain larger investments in well-performing companies. Together, our findings suggest that the market for newly listed companies in late Victorian Britain was efficient and welcoming to new companies. We find no evidence indicating that the LSE (or its investors) withheld support for foreign, young, or new-technology companies.

 

To contact Lyndon Moore:  Lyndon.moore@unimelb.edu.au

From LSE Business Review – “Turf wars? Placing geographical indications at the heart of international trade”

by David M. Higgins (Newcastle University), originally published on 09 October 2018 on the LSE Business Review

 

When doing your weekly shop, have you ever noticed the small blue/yellow and red/yellow circles that appear on the wrappers of Wensleydale cheese or Parma ham? Such indicia are examples of geographical indications (GIs), or appellations: they show that a product possesses certain attributes (taste, smell, texture) that are unique to it and can only be derived from a tightly demarcated and fiercely protected geographical region. The relationship between product attributes and geography can be summed up in one word: terroir. GIs form an important part of the EU’s agricultural policy, launched in 1992 and represented by the PDO and PGI logos, which seeks to insulate EU farmers from the effects of globalisation by encouraging them to produce ‘quality’ products that are unique.

GIs have a considerable lineage: legislation enacted in 1666 reserved the sole right to ‘Roquefort’ to cheese cured in the caves at Roquefort. Until the later nineteenth century, domestic legislation was the primary means by which GIs were protected from misrepresentation. Thereafter, the rapid acceleration of international trade necessitated global protocols, beginning with the Paris Convention for the Protection of Industrial Property (1883) and its successors, including the Madrid Agreement for the Repression of False or Deceptive Indications of Source on Goods (1891).

Full article here: http://blogs.lse.ac.uk/businessreview/2018/10/09/turf-wars-placing-geographical-indications-at-the-heart-of-international-trade/

 

Britain’s post-Brexit trade: learning from the Edwardian origins of imperial preference

by Brian Varian (Swansea University)

Imperial Federation, map of the world showing the extent of the British Empire in 1886. Wikimedia Commons

In December 2017, Liam Fox, the Secretary of State for International Trade, stated that ‘as the United Kingdom negotiates its exit from the European Union, we have the opportunity to reinvigorate our Commonwealth partnerships, and usher in a new era where expertise, talent, goods, and capital can move unhindered between our nations in a way that they have not for a generation or more’.

As policy-makers and the public contemplate a return to the halcyon days of the British Empire, there is much to be learned from those past policies that attempted to cultivate trade along imperial lines. Let us consider the effect of the earliest policies of imperial preference: policies enacted during the Edwardian era.

In the late nineteenth century, Britain was the bastion of free trade, imposing tariffs on only a very narrow range of commodities. Consequently, Britain’s free trade policy afforded barely any scope for applying lower or ‘preferential’ duties to imports from the Empire.

The self-governing colonies of the Empire possessed autonomy in tariff-setting and, with the notable exception of New South Wales, did not emulate the mother country’s free trade policy. In the 1890s and 1900s, when the emergent industrial nations of Germany and the United States reduced Britain’s market share in these self-governing colonies, there was indeed scope for applying preferential duties to imports from Britain, in the hope of diverting trade back toward the Empire.

Trade policies of imperial preference were implemented in succession by Canada (1897), the South African Customs Union (1903), New Zealand (1903) and Australia (1907). By the close of the first era of globalisation in 1914, Britain enjoyed some margin of preference in all of the Dominions. Yet my research, a case study of New Zealand, casts doubt on the effectiveness of these policies at raising Britain’s share in the imports of the Dominions.

Unlike the policies of the other Dominions, New Zealand’s policy applied preferential duties to only selected commodity imports (44 out of 543). This cross-commodity variation in the application of preference is useful for estimating the effect of preference. I find that New Zealand’s Preferential and Reciprocal Trade Act of 1903 had no effect on the share of the Empire, or of Britain specifically, in New Zealand’s imports.
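As a rough illustration of how that variation can be exploited, one can compare Britain’s import share in preferred and non-preferred commodities before and after the 1903 Act, in the spirit of a difference-in-differences design. The sketch below uses hypothetical file, variable and column names; the article’s actual specification may differ.

    # Hedged sketch: difference-in-differences across commodities.
    # File, data frame and variable names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per commodity-year: Britain's share of New Zealand's imports,
    # a flag for the 44 commodities granted preference, and a post-1903 flag.
    df = pd.read_csv("nz_imports_by_commodity.csv")  # hypothetical file
    df["treated_post"] = df["preferred"] * df["post_1903"]

    # Commodity fixed effects absorb commodity-specific levels; year fixed
    # effects absorb market-wide shocks. The treated_post coefficient is the
    # estimated effect of preference on Britain's import share.
    model = smf.ols("uk_share ~ treated_post + C(commodity) + C(year)", data=df).fit()
    print(model.params["treated_post"], model.pvalues["treated_post"])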

Why was the policy ineffective at raising Britain’s share of New Zealand’s imports? There are several likely reasons: that Britain’s share was already quite large; that some imported commodities were highly differentiated and certain varieties were only produced in other industrial countries; and, most importantly, that the margin of preference – the extent to which duties were lower for imports from Britain – was too small to effect any trade diversion.

As Britain considers future trade agreements, perhaps with Commonwealth countries, it should be remembered that a trade agreement does not necessarily entail a great, or even any, increase in trade. The original policies of imperial preference were rather symbolic measures and, at least in the case of New Zealand, economically inconsequential.

Brexit might well present an ‘opportunity to reinvigorate our Commonwealth partnerships’, but would that be a reinvigoration in substance or in appearance?

London fog: a century of pollution and mortality, 1866-1965

by Walker Hanlon (UCLA)

Photogravure by Donald Macleish from Wonderful London by St John Adcock, 1927. Available at <https://www.flickr.com/photos/norfolkodyssey/23695833473>

For more than a century, London struggled with some of the worst air pollution on earth. But how much did air pollution affect health in London? How did these effects change as the city developed? Can London’s long experience teach us lessons that are relevant for modern cities, from Beijing to New Delhi, that are currently struggling with their own air pollution problems?

To answer these questions, I study the effects of air pollution in London across a full century, from 1866 to 1965. Using new data, I show that air pollution was a major contributor to mortality in London over this period – accounting for at least one in every 200 deaths.

As London developed, the impact of air pollution changed. In the nineteenth century, Londoners suffered from a range of infectious diseases, including respiratory diseases like measles and tuberculosis. I show that being exposed to high levels of air pollution made these diseases deadlier, while the presence of these diseases made air pollution more harmful. As a result, when public health and medical improvements reduced the prevalence of these infectious diseases, they also lowered the mortality cost of pollution exposure.

This finding has implications for modern developing countries. It tells us that air pollution is likely to be more deadly in the developing world, but also that investments that improve health in other ways can lower the health costs of pollution exposure.

An important challenge in studying air pollution in the past is that direct pollution measures were not collected in a consistent way until the mid-twentieth century. To overcome this challenge, this study takes advantage of London’s famous fog events, which trapped pollution in the city and substantially increased exposure levels.

While some famous fog events are well known – such as the Great Fog of 1952 or the Cattle Show Fog of 1873, which killed the Queen’s prize bull – London experienced hundreds of lesser-known events over the century I study. By reading weather reports from the Greenwich Observatory covering over 26,000 days, I identified every day on which heavy fog occurred.

To study how these fog events affected health, I collected detailed new mortality data describing deaths in London at the weekly level. Digitised from original sources, and covering over 350,000 observations, this new data set opens the door to a more detailed analysis of London’s mortality experience than has previously been possible.
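To give a sense of the kind of estimate this design permits, the sketch below regresses log weekly deaths on a fog indicator with seasonal and annual controls. File and variable names are hypothetical; the study’s actual specification may differ.

    # Hedged sketch: weekly mortality and fog events.
    # File and variable names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per week: total deaths and an indicator for heavy fog,
    # drawn from the Greenwich Observatory weather reports.
    df = pd.read_csv("london_weekly_mortality.csv")  # hypothetical file
    df["log_deaths"] = np.log(df["deaths"])

    # Week-of-year dummies absorb seasonality and year dummies absorb trends,
    # so the fog coefficient approximates the proportional rise in deaths
    # in weeks with a heavy fog event.
    model = smf.ols("log_deaths ~ fog + C(week_of_year) + C(year)", data=df).fit()
    print(f"Deaths in fog weeks higher by roughly {100 * model.params['fog']:.1f}%")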

These new mortality data allow me to analyse the effects of air pollution from a variety of different angles. I provide new evidence on how the effects of air pollution varied across age groups, how those effects evolved over time, and how pollution interacted with infectious diseases and other causes of death. This enriches our understanding of London’s history while opening up a range of new possibilities for studying the impact of air pollution over the long run.

The UK’s unpaid war debts to the United States, 1917-1980

by David James Gill (University of Nottingham)

Trenches in World War I. From <www.express.co.uk>

We all think we know the consequences of the Great War – from the millions of dead to the rise of Nazism – but the story of the UK’s war debts to the United States remains largely untold.

In 1934, the British government defaulted on these loans, leaving unpaid debts exceeding $4 billion. The UK decided to cease repayment 18 months after France had defaulted on its war debts, making one full and two token repayments prior to Congressional approval of the Johnson Act, which prohibited further partial contributions.

Economists and political scientists typically attribute such hesitation to concerns about economic reprisals or the costs of future borrowing. Historians have instead stressed that delay reflected either a desire to protect transatlantic relations or a naive hope for outright cancellation.

Archival research reveals that the British cabinet’s principal concern was that many states owing money to the UK might use its default on war loans as an excuse to cease repayment on their own debts. In addition, ministers feared that refusal to pay would profoundly shock a large section of public opinion, thereby undermining the popularity of the National government. Eighteen months of continued repayment therefore provided the British government with more time to manage these risks.

The consequences of the UK’s default have attracted curiously limited attention. Economists and political scientists tend to assume dire political costs to incumbent governments as well as significant short-term economic shocks in terms of external borrowing, international trade, and the domestic economy. None of these consequences apply to the National government or the UK in the years that followed.

Most historians consider these unpaid war debts to be largely irrelevant to the course of domestic and international politics within five years. Yet archival research reveals that they continued to play an important role in British and American policy-making for at least four more decades.

During the 1940s, the issue of the UK’s default arose on several occasions, most clearly during negotiations concerning Lend-Lease and the Anglo-American loan, fuelling Congressional resistance that limited the size and duration of American financial support.

Successive American administrations also struggled to resist growing Congressional pressure to use these unpaid debts as a diplomatic tool to address growing balance-of-payments deficits from the 1950s to the 1970s. In addition, British default presented a formidable legal obstacle to the UK’s return to the New York bond market in the late 1970s, threatening to undermine the efficient refinancing of the government’s recent loans from the International Monetary Fund.

The consequences of the UK’s default on its First World War debts to the United States were therefore longer lasting and more significant to policy-making on both sides of the Atlantic than widely assumed.

 

Judges and the death penalty in Nazi Germany: New research evidence on judicial discretion in authoritarian states

The German People’s Court. Available at https://www.foreignaffairs.com/reviews/review-essay/good-germans

Do judicial courts in authoritarian regimes act as puppets for the interests of a repressive state – or do judges act with greater independence? How much do judges draw on their political and ideological affiliations when imposing the death sentence?

A study of Nazi Germany’s notorious People’s Court, recently published in the Economic Journal, reveals direct empirical evidence of how the judiciary in one of the world’s most notoriously politicised courts were influenced in their life-and-death decisions.

The research provides important empirical evidence that the political and ideological affiliations of judges do come into play – a finding that has applications for modern authoritarian regimes and also for democracies that administer the death penalty.

The research team – Dr Wayne Geerling (University of Arizona), Prof Gary Magee, Prof Russell Smyth, and Dr Vinod Mishra (Monash Business School) – explore the factors influencing the likelihood of imposing the death sentence in Nazi Germany for crimes against the state – treason and high treason.

The authors examine data compiled from official records of individuals charged with treason and high treason who appeared before the People’s Courts up to the end of the Second World War.

Established by the Nazis in 1934 to hear cases of serious political offences, the People’s Courts have been vilified as ‘blood tribunals’ in which judges meted out pre-determined sentences.

But in recent years, while not contending that the People’s Court judgments were impartial or that its judges were not subservient to the wishes of the regime, a more nuanced assessment has emerged.

For the first time, the new study presents direct empirical evidence of the reasons behind the use of judicial discretion and why some judges appeared more willing to implement the will of the state than others.

The researchers find that judges with a deeper ideological commitment to Nazi values – typified by membership of the Alte Kämpfer (‘Old Fighters’, early members of the Nazi party) – were indeed more likely to impose the death penalty than those who did not share it.

These judges were more likely to hand down death penalties to members of the most organised opposition groups, those involved in violent resistance against the state and ‘defendants with characteristics repellent to core Nazi beliefs’:

‘The Alte Kampfer were thus more likely to sentence devout Roman Catholics (24.7 percentage points), defendants with partial Jewish ancestry (34.8 percentage points), juveniles (23.4 percentage points), the unemployed (4.9 percentage points) and foreigners (42.3 percentage points) to death.’

Judges who came of age during two distinct historical periods (the Revolution of 1918-19 and the hyperinflation of June 1921 to January 1924), which may have shaped their views of Nazism, were also more likely to impose the death sentence.

Alte Kämpfer members whose hometown or suburb lay near a centre of the Revolution of 1918-19 were more likely to sentence a defendant to death.
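Percentage-point differences like those quoted above are typically recovered as average marginal effects from a binary-outcome model of the sentencing decision. The sketch below shows the general approach with hypothetical file and variable names; the paper’s actual specification may differ.

    # Hedged sketch: probability of a death sentence as a function of judge
    # and defendant characteristics. File and variable names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per defendant: sentence outcome, an indicator for the judge
    # being an Alte Kampfer, and defendant characteristics.
    df = pd.read_csv("peoples_court_cases.csv")  # hypothetical file

    # Interactions capture whether Alte Kampfer judges treated particular
    # groups of defendants differently.
    model = smf.logit(
        "death_sentence ~ alte_kampfer * (catholic + part_jewish + juvenile)",
        data=df,
    ).fit()

    # Average marginal effects; multiplied by 100 they read as percentage points.
    print(model.get_margeff(at="overall").summary())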

Previous economic research on sentencing in capital cases has focused mainly on gender and racial disparities, typically in the United States. But our understanding of what determines whether courts in modern authoritarian regimes outside the United States impose the death penalty is scant. By studying a politicised court in an historically important authoritarian state, the authors of the new study shed light on sentencing in authoritarian states more generally.

The findings are important because they provide insights into the practical realities of judicial empowerment by providing rare empirical evidence on how the exercise of judicial discretion in authoritarian states is reflected in sentencing outcomes.

To contact the authors:
Russell Smyth (russell.smyth@monash.edu)

Decimalising the pound: a victory for the gentlemanly City against the forces of modernity?

by Andy Cook (University of Huddersfield)

 

An 1813 guinea.

Some media commentators have identified the decimalisation of the UK’s currency in 1971 as the start of a submerging of British identity. For example, writing in the Daily Mail, Dominic Sandbrook characterises it as ‘marking the end of a proud history of defiant insularity and the beginning of the creeping Europeanisation of Britain’s institutions.’

This research, based on Cabinet papers, Bank of England archives, Parliamentary records and other sources, reveals that this interpretation is spurious and reflects more modern preoccupations with the arguments that dominated much of the Brexit debate, rather than the actual motivation of key players at the time.

The research examines arguments made by the proponents of alternative systems based on either decimalising the pound, or creating a new unit worth the equivalent of 10 shillings. South Africa, Australia and New Zealand had all recently adopted a 10-shilling unit, and this system was favoured by a wide range of interest groups in the UK, representing consumers, retailers, small and large businesses, and media commentators.

Virtually a lone voice in lobbying for retention of the pound was the City of London, and its arguments, articulated by the Bank of England, were based on a traditional attachment to the international status of sterling. These arguments were accepted, both by the Committee of Enquiry on Decimal Currency, which reported in 1963, and, in 1966, by a Labour government headed by Harold Wilson, who shared the City’s emotional attachment to the pound.

Yet by 1960, the UK faced the imminent prospect of being virtually the only country retaining non-decimal coinage. Most key economic players agreed that decimalisation was necessary, and the only significant bone of contention was the choice of system.

Most informed opinion favoured a new major unit equivalent to 10 shillings, as reflected in the evidence given by retailers and other businesses to the Committee of Enquiry on Decimal Currency, and in the formation of a Decimal Action Committee by the Consumers’ Association to press for such a system.

The City, represented by the Bank of England, was implacably opposed to such a system, arguing that the pound’s international prestige was crucial to underpinning the position of the City as a leading financial centre. This assertion was not evidence-based, and internal Bank documents acknowledge that their argument was ‘to some extent based on sentiment’.

This sentiment was shared by Harold Wilson, whose government announced the decision to introduce decimal currency based on the pound in 1966. Five years earlier, he had made an emotional plea to keep the pound, arguing that ‘the world will lose something if the pound disappears from the markets of the world’.

Far from being the end of ‘defiant insularity’, the decision to retain the highest-value basic currency unit of any major economy, rather than adopting one closer in value either to the US dollar or to the even lower-value European currencies, reflected the desire of the City and the government to maintain a distinctive symbol of Britishness – the pound – overcoming opposition from interests with more practical concerns.

British perceptions of German post-war industrial relations

by Colin Chamberlain (University of Cambridge)

A demonstration in Stuttgart, 11th January 1962. Picture alliance/AP Images, available at <https://www.gewerkschaftsgeschichte.de/1953-schwerpunkt-tarifpolitik.html>

‘Almost idyllic’ – this was the view of one British commentator on the state of post-war industrial relations in West Germany. No one could say the same about British industrial relations. Here, industrial conflict grew inexorably from year to year, forcing governments to expend ever more effort on preserving industrial peace.

Deeply frustrated, successive governments alternated between appeasing trade unionists and threatening them with new legal sanctions in an effort to improve their behaviour, thereby avoiding tackling the fundamental issue of their institutional structure. If the British had only studied the German ‘model’ of industrial relations more closely, they would have understood better the reforms that needed to be made.

Britain’s poor state of industrial relations was a major, if not the major, factor holding back its economic growth, which was regularly less than half the rate achieved in Germany, not to speak of the chronic inflation and balance of payments problems that only made matters worse. So, how come the British did not take a deeper look at the successful model of German industrial relations and learn any lessons?

Ironically, the British were in control of Germany at the time the trade union movement was re-establishing itself after the war. The Trades Union Congress and the British labour movement offered much goodwill and help to the Germans in their task.

But German trade unionists had very different ideas from the British trade unions about how to organise their industrial relations, ideas that the British were to ignore consistently over the post-war period. These included:

    • In Britain there were hundreds of trade unions, but in Germany only 16 were re-established after the war, each representing one or more industries, thereby avoiding the demarcation disputes so common in Britain.
    • Terms and conditions were negotiated on this industry basis by strong, well-funded trade unions, which welcomed the fact that their two- or three-year collective agreements were legally enforceable in Germany’s system of industrial courts.
    • Trade unions were not involved in workplace grievances and disputes. These were left to employees and managers meeting together in Germany’s highly successful works councils, which resolved such issues informally alongside consultative exercises on working practices and company reorganisations. As a result, German companies did not seek to lay off staff whenever demand fell, as British companies did, but rather to retrain and redeploy them.

British trade unions pleaded that their very untidy institutional structure with hundreds of competing trade unions was what their members actually wanted and should therefore be outside any government interference. The trade unions jealously guarded their privileges and especially rejected any idea of industry-based unions, legally enforceable collective agreements and works councils.

A heavyweight Royal Commission was appointed, but after three years’ deliberation, it came up with little more than the status quo. It was reluctant to study any ideas emanating from Germany.

While the success of industrial relations in Germany was widely recognised in Britain, there was little understanding of why this was so, or indeed much interest in it. The British were deeply conservative about the ‘institutional shape’ of industrial relations and feared putting forward any radical German ideas. Britain was therefore at a big disadvantage when it came to creating modern trade unions operating in a modern state.

So, what economic price the failure to sort out the institutional structure of the British trade unions?