Revisiting the changing body

by Bernard Harris (University of Strathclyde)

The Society has arranged with CUP that a 20% discount is available on this book, valid until the 11th November 2018.

The last century has witnessed unprecedented improvements in survivorship and life expectancy. In the United Kingdom alone, infant mortality fell from over 150 deaths per thousand births at the start of the last century to 3.9 deaths per thousand births in 2014 (see the Office for National Statistics for further details). Average life expectancy at birth increased from 46.3 to 81.4 years over the same period (see the Human Mortality Database). These changes reflect fundamental improvements in diet, nutrition and environmental conditions.

The changing body: health, nutrition and human development in the western world since 1700 attempted to understand some of the underlying causes of these changes. It drew on a wide range of archival and other sources covering not only mortality but also height, weight and morbidity. One of our central themes was the extent to which long-term improvements in adult health reflected the beneficial effect of improvements in earlier life.

The changing body also outlined a very broad schema of ‘technophysio evolution’ to capture the intergenerational effects of investments in early life. This is represented in a very simple way in Figure 1. The Figure tries to show how improvements in the nutritional status of one generation increase its capacity to invest in the health and nutritional status of the next generation, and so on ‘ad infinitum’ (Floud et al. 2011: 4).

Figure 1. Technophysio evolution: a schema. Source: See Floud et al. 2011: 3-4.

We also looked at some of the underlying reasons for these changes, including the role of diet and ‘nutrition’. As part of this process, we included new estimates of the number of calories which could be derived from the amount of food available for human consumption in the United Kingdom between circa 1700 and 1913. However, our estimates contrasted sharply with others published at the same time (Muldrew 2011) and were subsequently challenged by a number of other authors. Broadberry et al. (2015) thought that our original estimates were too high, whereas both Kelly and Ó Gráda (2013) and Meredith and Oxley (2014) regarded them as too low.

Given the importance of these issues, we revisited our original calculations in 2015. We corrected an error in the original figures, used Overton and Campbell’s (1996) data on extraction rates to recalculate the number of calories, and included new information on the importation of food from Ireland to other parts of what became the UK. Our revised Estimate A suggested that the number of calories rose by just under 115 calories per head per day between 1700 and 1750 and by more than 230 calories between 1750 and 1800, with little change between 1800 and 1850. Our revised Estimate B suggested that there was a much bigger increase during the first half of the eighteenth century, followed by a small decline between 1750 and 1800 and a bigger increase between 1800 and 1850 (see Figure 2). However, both sets of figures were still well below the estimates prepared by Kelly and Ó Gráda, Meredith and Oxley, and Muldrew for the years before 1800.

Figure 2. Source: Harris et al. 2015: 160.

These calculations have important implications for a number of recent debates in British economic and social history (Allen 2005, 2009). Our data do not necessarily resolve the debate over whether Britons were better fed than people in other countries, although they do compare quite favourably with relevant French estimates (see Floud et al. 2011: 55). However, they do suggest that a significant proportion of the eighteenth-century population was likely to have been underfed.
Our data also raise some important questions about the relationship between nutrition and mortality. Our revised Estimate A suggests that food availability rose slowly between 1700 and 1750 and then more rapidly between 1750 and 1800, before levelling off between 1800 and 1850. These figures are still broadly consistent with Wrigley et al.’s (1997) estimates of the main trends in life expectancy and our own figures for average stature. However, it is not enough simply to focus on averages; we also need to take account of possible changes in the distribution of foodstuffs within households and the population more generally (Harris 2015). Moreover, it is probably a mistake to examine the impact of diet and nutrition independently of other factors.

Allen, R. (2005), ‘English and Welsh agriculture, 1300-1850: outputs, inputs and income’.

Allen, R. (2009), The British industrial revolution in global perspective, Cambridge: Cambridge University Press.

Broadberry, S., Campbell, B., Klein, A., Overton, M. and Van Leeuwen, B. (2015), British economic growth, 1270-1870, Cambridge: Cambridge University Press.

Floud, R., Fogel, R., Harris, B. and Hong, S.C. (2011), The changing body: health, nutrition and human development in the western world since 1700, Cambridge: Cambridge University Press.

Harris, B. (2015), ‘Food supply, health and economic development in England and Wales during the eighteenth and nineteenth centuries’, Scientia Danica, Series H, Humanistica, 4 (7), 139-52.

Harris, B., Floud, R. and Hong, S.C. (2015), ‘How many calories? Food availability in England and Wales in the eighteenth and nineteenth centuries’, Research in Economic History, 31, 111-91.

Kelly, M. and Ó Gráda, C. (2013), ‘Numerare est errare: agricultural output and food supply in England before and during the industrial revolution’, Journal of Economic History, 73 (4), 1132-63.

Meredith, D. and Oxley, D. (2014), ‘Food and fodder: feeding England, 1700-1900’, Past and Present, 222, 163-214.

Muldrew, C. (2011), Food, energy and the creation of industriousness: work and material culture in agrarian England, 1550-1780, Cambridge: Cambridge University Press.

Overton, M. and Campbell, B. (1996), ‘Production et productivité dans l’agriculture anglaise, 1086-1871’, Histoire et Mesure, 1 (3-4), 255-97.

Wrigley, E.A., Davies, R., Oeppen, J. and Schofield, R. (1997), English population history from family reconstitution, Cambridge: Cambridge University Press.

Britain’s post-Brexit trade: learning from the Edwardian origins of imperial preference

by Brian Varian (Swansea University)

Imperial Federation, map of the world showing the extent of the British Empire in 1886. Wikimedia Commons

In December 2017, Liam Fox, the Secretary of State for International Trade, stated that ‘as the United Kingdom negotiates its exit from the European Union, we have the opportunity to reinvigorate our Commonwealth partnerships, and usher in a new era where expertise, talent, goods, and capital can move unhindered between our nations in a way that they have not for a generation or more’.

As policy-makers and the public contemplate a return to the halcyon days of the British Empire, there is much to be learned from those past policies that attempted to cultivate trade along imperial lines. Let us consider the effect of the earliest policies of imperial preference: policies enacted during the Edwardian era.

In the late nineteenth century, Britain was the bastion of free trade, imposing tariffs on only a very narrow range of commodities. Consequently, Britain’s free trade policy afforded barely any scope for applying lower or ‘preferential’ duties to imports from the Empire.

The self-governing colonies of the Empire possessed autonomy in tariff-setting and, with the notable exception of New South Wales, did not emulate the mother country’s free trade policy. In the 1890s and 1900s, when the emergent industrial nations of Germany and the United States reduced Britain’s market share in these self-governing colonies, there was indeed scope for applying preferential duties to imports from Britain, in the hope of diverting trade back toward the Empire.

Trade policies of imperial preference were implemented in succession by Canada (1897), the South African Customs Union (1903), New Zealand (1903) and Australia (1907). By the close of the first era of globalisation in 1914, Britain enjoyed some margin of preference in all of the Dominions. Yet my research, a case study of New Zealand, casts doubt on the effectiveness of these policies at raising Britain’s share in the imports of the Dominions.

Unlike the policies of the other Dominions, New Zealand’s policy applied preferential duties to only selected commodity imports (44 out of 543). This cross-commodity variation in the application of preference is useful for estimating the effect of preference. I find that New Zealand’s Preferential and Reciprocal Trade Act of 1903 had no effect on the share of the Empire, or of Britain specifically, in New Zealand’s imports.

Why was the policy ineffective at raising Britain’s share of New Zealand’s imports? There are several likely reasons: that Britain’s share was already quite large; that some imported commodities were highly differentiated and certain varieties were only produced in other industrial countries; and, most importantly, that the margin of preference – the extent to which duties were lower for imports from Britain – was too small to effect any trade diversion.

As Britain considers future trade agreements, perhaps with Commonwealth countries, it should be remembered that a trade agreement does not necessarily entail a great, or even any, increase in trade. The original policies of imperial preference were rather symbolic measures and, at least in the case of New Zealand, economically inconsequential.

Brexit might well present an ‘opportunity to reinvigorate our Commonwealth partnerships’, but would that be a reinvigoration in substance or in appearance?

Could fiscal policy still stimulate the economy?

by James Cloyne (University of California, Davis), Nicholas Dimsdale (University of Oxford), Natacha Postel-Vinay (London School of Economics)


No means test for these ‘unemployed’! by Maro.
1935 was the Silver Jubilee of King George V. There were celebrations and street parties across Britain. However with the country in a financial depression not everyone approved of the public expense associated with the Royal Family. Available at Wikimedia Commons

There has been a longstanding and unresolved debate over the fiscal multiplier, which is the change in economic growth resulting from a change in government spending or change in taxation. The issue became acute in the world recession of 2008-2010, when the International Monetary Fund led a spirited discussion about the contribution that fiscal policy could make to recovery.
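In symbols, the fiscal multiplier compares the change in output with the change in the fiscal instrument that caused it. The following is a standard textbook definition, not a formula taken from the paper itself:

```latex
% Spending and tax multipliers (Y = output, G = government spending, T = taxes)
k_G = \frac{\Delta Y}{\Delta G}, \qquad k_T = \frac{\Delta Y}{-\Delta T}
% k > 1 means an extra pound of stimulus raises GDP by more than one pound;
% the minus sign in k_T reflects that a tax *cut* is the stimulus.
```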

In our research, fiscal policy is shown to have had positive impacts on growth, at least during the period surrounding the Great Depression in Britain. The implications for the potential benefits of fiscal policy in a high-debt, low-interest rate environment – and over a turbulent business cycle – may be significant.

The recent controversy follows the debate over the use of fiscal policy to counter the high level of unemployment in interwar Britain. Keynes argued that increased government spending would raise economic activity and reduce unemployment. In the General Theory (1936), he claimed that the multiplier for government expenditure was greater than unity.

A few more recent studies have confirmed that the multiplier effect is greater than unity for both the interwar and post-war periods. But these results may be spurious, since a rise in government expenditure that appears to raise income may itself be the result of a rise in income. Likewise, changes in taxes and changes in income may not be independent. What we observe is a strong co-movement of GDP and fiscal measures, in which it is hard to isolate the direction of causation.

What is needed is a source of exogenous variation, so that the impact of fiscal changes on GDP can be observed. Fiscal policy may take the form of changes in taxes or expenditure. The problems of endogeneity are generally greater for expenditure than for taxes, since it should be possible to find changes in taxes that are truly exogenous.

Romer and Romer (2010) have developed the so-called ‘narrative technique,’ which has been designed to overcome the problem of endogeneity of tax changes. This involves carefully distilling the historical record in order to infer Chancellors’ motivations behind each fiscal policy move, and isolate those that may be seen as more independent from the contemporaneous fluctuations of the economy.

One may thus be able to distinguish, for example, between taxes that arise from a direct will to stimulate the economy, as compared with changes that are more motivated by a Chancellor’s longstanding ideology. The latter may include, for example, a will to improve transport efficiency within the country, or a desire to make society less unequal.

Interwar Britain is a particularly appropriate period to apply this approach, since the potential for fiscal policy was great on account of the high level of unemployment. In addition, this was a period in which Keynesian countercyclical policies were generally not used, in contrast to the use of demand management policies in the post-war period.

By examining changes in taxes in interwar budgets, we have been able to produce a sample of 300 tax changes. These have been classified into changes in taxes that are endogenous or exogenous. We have been able to test the backward validity of our classification.

The outcome of this work has been to show that changes in taxes that are exogenous had a major impact on changes in GDP. The estimated value of the multiplier for these tax changes is greater than unity and as much as two to three. This is in accordance with results reported in post-war studies of the United States and a study of tax changes in post-war Britain (Cloyne, 2013).

In contrast to earlier work on measuring the multiplier, we concentrate on changes in taxes rather than changes in government expenditure. This is done to reduce problems of endogeneity.

While Keynes argued for using government spending to stimulate the economy, it was only when post-war fiscal policies were being formulated that the potential benefits of fiscal policies via changes in taxes were recognised. While this research does not argue in favour of tax changes over spending policies, it provides evidence that tax policy is a relevant part of the policy toolkit, especially in times of economic difficulty.

London fog: a century of pollution and mortality, 1866-1965

by Walker Hanlon (UCLA)

Photogravure by Donald Macleish from Wonderful London by St John Adcock, 1927.

For more than a century, London struggled with some of the worst air pollution on earth. But how much did air pollution affect health in London? How did these effects change as the city developed? Can London’s long experience teach us lessons that are relevant for modern cities, from Beijing to New Delhi, that are currently struggling with their own air pollution problems?

To answer these questions, I study the effects of air pollution in London across a full century, from 1866 to 1965. Using new data, I show that air pollution was a major contributor to mortality, accounting for at least one out of every 200 deaths in London during this century.

As London developed, the impact of air pollution changed. In the nineteenth century, Londoners suffered from a range of infectious diseases, including respiratory diseases like measles and tuberculosis. I show that being exposed to high levels of air pollution made these diseases deadlier, while the presence of these diseases made air pollution more harmful. As a result, when public health and medical improvements reduced the prevalence of these infectious diseases, they also lowered the mortality cost of pollution exposure.

This finding has implications for modern developing countries. It tells us that air pollution is likely to be more deadly in the developing world, but also that investments that improve health in other ways can lower the health costs of pollution exposure.

An important challenge in studying air pollution in the past is that direct pollution measures were not collected in a consistent way until the mid-twentieth century. To overcome this challenge, this study takes advantage of London’s famous fog events, which trapped pollution in the city and substantially increased exposure levels.

While some famous fog events are well known – such as the Great Fog of 1952 or the Cattle Show Fog of 1873, which killed the Queen’s prize bull – London experienced hundreds of lesser-known events over the century I study. By reading weather reports from the Greenwich Observatory covering over 26,000 days, we identified every day in which heavy fog occurred.

To study how these fog events affected health, I collected detailed new mortality data describing deaths in London at the weekly level. Digitised from original sources, and covering over 350,000 observations, this new data set opens the door to a more detailed analysis of London’s mortality experience than has previously been possible.

These new mortality data allow me to analyse the effects of air pollution from a variety of different angles. I provide new evidence on how the effects of air pollution varied across age groups, how those effects evolved over time, and how pollution interacted with infectious diseases and other causes of death. This enriches our understanding of London’s history while opening up a range of new possibilities for studying the impact of air pollution over the long run.

Cash Converter: The Liquidity of the Victorian Capital Market

by John Turner (Queen’s University Centre for Economic History)

Liquidity is the ease with which an asset such as a share or a bond can be converted into cash. It is important for financial systems because it enables investors to liquidate and diversify their assets at a low cost. Without liquid markets, portfolio diversification becomes very costly for the investor. As a result, firms and governments must pay a premium to induce investors to buy their bonds and shares. Liquid capital markets also spur firms and entrepreneurs to invest in long-run projects, which increases productivity and economic growth.

From an historical perspective, share liquidity in the UK played a major role in the widespread adoption of the company form in the second half of the nineteenth century. Famously, as I discuss in a recent book chapter published in the Research Handbook on the History of Corporate and Company Law, political and legal opposition to share liquidity held up the development of the company form in the UK.

However, given the economic and historical importance of liquidity, very little has been written on the liquidity of UK capital markets before 1913. Ron Alquist (2010) and Matthieu Chavaz and Marc Flandreau (2017) examine the liquidity risk and premia of various sovereign bonds which were traded on the London Stock Exchange during the late Victorian and early Edwardian eras. Along with Graeme Acheson (2008), I document the thinness of the market for bank shares in the nineteenth century, using the share trading records of a small number of banks.

In a major study, Gareth Campbell (Queen’s University Belfast), Qing Ye (Xi’an Jiaotong-Liverpool University) and I have recently attempted to understand more about the liquidity of the Victorian capital market. To this end, we have just published a paper in the Economic History Review which looks at the liquidity of the London share and bond markets from 1825 to 1870. The London capital market experienced considerable growth in this era. The liberalisation of incorporation law, and Parliament’s liberalism in granting company status to railways and other public-good providers, resulted in growth in the number of business enterprises whose shares and bonds were traded on stock exchanges. In addition, from the 1850s onwards, there was an increase in the number of foreign countries and companies raising bond finance on the London market.

How do we measure the liquidity of the market for bonds and stocks in the 1825-70 era? Using end-of-month stock price data from a stockbroker list called the Course of the Exchange and end-of-month bond prices from newspaper sources, we calculate, for each security, the number of months in the year in which it had a zero return and divide that by the number of months it was listed in that year. Because zero returns are indicative of illiquidity (i.e., that a security has not been traded), one minus this illiquidity ratio gives us a liquidity measure for each security in our sample. We calculate the overall market liquidity for shares and bonds by taking averages. Figure 1 displays market liquidity for bonds and stocks for the period 1825-70.
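A minimal Python sketch of that zero-return calculation may help. The function name and the price series are hypothetical, and where the paper divides by the number of months listed, this sketch approximates that denominator by the number of observed monthly returns:

```python
def liquidity(prices):
    """Zero-return liquidity measure for one security.

    prices: end-of-month prices; None marks months in which the
    security was not listed. Returns 1 - (zero-return months /
    observed return months), so higher values mean more liquid.
    """
    returns = []
    for prev, curr in zip(prices, prices[1:]):
        if prev is None or curr is None:
            continue  # security not listed in one of the two months
        returns.append(curr - prev)
    if not returns:
        return None  # never listed for two consecutive months
    zero_months = sum(1 for r in returns if r == 0)
    return 1 - zero_months / len(returns)

# Hypothetical security: price unchanged in 3 of its 6 monthly returns
prices = [100, 100, 102, 102, 105, 105, 108]
print(liquidity(prices))  # -> 0.5
```

Market-level liquidity, as in the paper, would then be the average of this measure across all securities in the sample for each year.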

Figure 1. Stock and bond liquidity on the London Stock Exchange, 1825-1870. Source: Campbell, Turner and Ye (2018, p.829)

Figure 1 reveals that bond market liquidity was relatively high throughout this period but shows no strong trend over time. By way of contrast, there was a strong secular increase in stock liquidity from 1830 to 1870. This increase may have stimulated greater participation in the stock market by ordinary citizens. It may also have affected the growth and deepening of the overall stock market and resulted in higher economic growth.

We examine the cross-sectional differences in liquidity between stocks in order to understand the main determinants of stock liquidity in this era. Our main finding in this regard is that firm size and the number of issued shares were major correlates of liquidity, which suggests that larger firms and firms with a greater number of shares were more frequently traded. Our study also reveals that unusual features which were believed to impede liquidity, such as extended liability, uncalled capital or high share denominations, had little effect on stock liquidity.

We also examine whether asset illiquidity was priced by investors, resulting in higher costs of capital for firms and governments. We find little evidence that the illiquidity of stock or bonds was priced, suggesting that investors at the time did not put much emphasis on liquidity in their valuations. Indeed, this is consistent with J. B. Jefferys (1938), who argued that what mattered to investors during this era was not share liquidity, but the dividend or coupon they received.

In conclusion, the vast majority of stocks and bonds in this early capital market were illiquid. It is remarkable, however, that despite this illiquidity, the UK capital market grew substantially between 1825 and 1870. There was also an increase in investor participation, with investing becoming progressively democratised in this era.


To contact the author:
Twitter: @profjohnturner



Acheson, G.G., and Turner, J.D. “The Secondary Market for Bank Shares in Nineteenth-Century Britain.” Financial History Review 15, no. 2 (October 2008): 123–51. doi:10.1017/S0968565008000139.

Alquist, R. “How Important Is Liquidity Risk for Sovereign Bond Risk Premia? Evidence from the London Stock Exchange.” Journal of International Economics 82, no. 2 (November 1, 2010): 219–29. doi:10.1016/j.jinteco.2010.07.007.

Campbell, G., Turner, J.D., and Ye, Q. “The Liquidity of the London Capital Markets, 1825–70.” The Economic History Review 71, no. 3 (August 1, 2018): 823–52. doi:10.1111/ehr.12530.

Chavaz, M., and Flandreau, M. “‘High & Dry’: The Liquidity and Credit of Colonial and Foreign Government Debt and the London Stock Exchange (1880–1910).” The Journal of Economic History 77, no. 3 (September 2017): 653–91. doi:10.1017/S0022050717000730.

Jefferys, J.B. Trends in Business Organisation in Great Britain Since 1856: With Special Reference to the Financial Structure of Companies, the Mechanism of Investment and the Relations Between the Shareholder and the Company. University of London, 1938.

Wages of sin: slavery and the banks, 1830-50

by Aaron Graham (University College London)


From the cartoon ‘Slave Emancipation; Or, John Bull Gulled Out Of Twenty Millions’ by C.J. Grant. In Richard Pound (UCL, 1998), C.J. Grant’s ‘Political Drama’: a radical satirist rediscovered.

In 1834, the British Empire emancipated its slaves. This should have quickly triggered a major shift away from plantation labour and towards a free society where ex-slaves would bargain for better wages and force the planters to adopt new business models or go under. But the planters and plantation system survived, even if slavery did not. What went wrong?

This research follows the £20 million paid in compensation by the British government in 1834 (equivalent to about £20 billion today). This money was paid not to the slaves, but to the former slave-owners for the loss of their human property.

Thanks to the Legacies of British Slave-ownership project at University College London, we now know who received the money and how much. But until this study, we knew very little about how the former slave-owners used this money, or what effect this had on colonial societies in the West Indies or South Africa as they confronted the demands of this new world.

The study suggests why so little changed. It shows that slave-owners in places such as Jamaica, Guyana, South Africa and Mauritius used the money they received not just to pay off their debts, but also to set up new banks, which created credit by issuing bank notes and then supplied the planters with cash and credit.

Planters used the credit to improve their plantations and the cash to pay wages to their new free labourers, who therefore lacked the power to bargain for better conditions. Able to accommodate the social and economic pressures that would otherwise have forced them to reassess their business models and find new approaches that did not rely on the unremitting exploitation of black labour, planters could therefore resist the demands for broader economic and social change.

Tracking the ebb and flow of money shows that in Jamaica, for example, in 1836 about 200 planters chose to subscribe half of the £450,000 they had received in compensation to the new Bank of Jamaica. By 1839, the bank had issued almost £300,000 in notes, enabling planters across the island to meet their workers’ wages without otherwise altering the plantation system.

When the Planters’ Bank was founded in 1839, it issued a further £100,000. ‘We congratulate the country on the prospects of a local institution of this kind’, the Jamaica Despatch commented in May 1839, ‘ … designed to aid and relieve those who are labouring under difficulties peculiar to the Jamaican planter at the present time’.

In other cases, the money even allowed farmers to expand the system of exploitation. In the Cape of Good Hope, the Eastern Province Bank at Grahamstown raised £26,000 with money from slavery compensation but provided the British settlers with £170,000 in short-term loans, helping them to dispossess native peoples of their land and use them as cheap labour to raise wool for Britain’s textile factories.

‘With united influence and energy’, the bank told its shareholders in 1840, for example, ‘the bank must become useful, as well to the residents at Grahamstown and our rapidly thriving agriculturists as prosperous itself’.

This study shows for the first time why planters could carry on after 1834 with business as usual. The new banks created after 1834 helped planters throughout the British Empire to evade the major social and economic changes that abolitionists had wanted and which their opponents had feared.

By investing their slavery compensation money in banks that then offered cash and credit, the planters could prolong and even expand their place in economies and societies built on the plantation system and the exploitation of black labour.



The UK’s unpaid war debts to the United States, 1917-1980

by David James Gill (University of Nottingham)

Trenches in World War I.

We all think we know the consequences of the Great War – from the millions of dead to the rise of Nazism – but the story of the UK’s war debts to the United States remains largely untold.

In 1934, the British government defaulted on these loans, leaving unpaid debts exceeding $4 billion. The UK decided to cease repayment 18 months after France had defaulted on its war debts, making one full and two token repayments prior to Congressional approval of the Johnson Act, which prohibited further partial contributions.

Economists and political scientists typically attribute such hesitation to concerns about economic reprisals or the costs of future borrowing. Historians have instead stressed that delay reflected either a desire to protect transatlantic relations or a naive hope for outright cancellation.

Archival research reveals that the British cabinet’s principal concern was that many states owing money to the UK might use its default on war loans as an excuse to cease repayment on their own debts. In addition, ministers feared that refusal to pay would profoundly shock a large section of public opinion, thereby undermining the popularity of the National government. Eighteen months of continued repayment therefore provided the British government with more time to manage these risks.

The consequences of the UK’s default have attracted curiously limited attention. Economists and political scientists tend to assume dire political costs to incumbent governments as well as significant short-term economic shocks in terms of external borrowing, international trade, and the domestic economy. None of these consequences apply to the National government or the UK in the years that followed.

Most historians consider these unpaid war debts to be largely irrelevant to the course of domestic and international politics within five years. Yet archival research reveals that they continued to play an important role in British and American policy-making for at least four more decades.

During the 1940s, the issue of the UK’s default arose on several occasions, most clearly during negotiations concerning Lend-Lease and the Anglo-American loan, fuelling Congressional resistance that limited the size and duration of American financial support.

Successive American administrations also struggled to resist growing Congressional pressure to use these unpaid debts as a diplomatic tool to address growing balance of payment deficits from the 1950s to the 1970s. In addition, British default presented a formidable legal obstacle for the UK’s return to the New York bond market in the late 1970s, threatening to undermine the efficient refinancing of the government’s recent loans from the International Monetary Fund.

The consequences of the UK’s default on its First World War debts to the United States were therefore longer lasting and more significant to policy-making on both sides of the Atlantic than widely assumed.


British perceptions of German post-war industrial relations

By Colin Chamberlain (University of Cambridge)

Some 10,000 steel workers participate in a demonstration in Stuttgart, 11th January 1962. Picture alliance/AP Images.

‘Almost idyllic’ – this was the view of one British commentator on the state of post-war industrial relations in West Germany. No one could say the same about British industrial relations. Here, industrial conflict grew inexorably from year to year, forcing governments to expend ever more effort on preserving industrial peace.

Deeply frustrated, successive governments alternated between appeasing trade unionists and threatening them with new legal sanctions in an effort to improve their behaviour, thereby avoiding tackling the fundamental issue of their institutional structure. If the British had only studied the German ‘model’ of industrial relations more closely, they would have understood better the reforms that needed to be made.

Britain’s poor state of industrial relations was a major, if not the major, factor holding back the country’s economic growth, which was regularly less than half the German rate, not to speak of the chronic inflation and balance of payments problems that only made matters worse. So why did the British not take a deeper look at the successful model of German industrial relations and learn some lessons?

Ironically, the British were in control of Germany at the time the trade union movement was re-establishing itself after the war. The Trades Union Congress and the British labour movement offered much goodwill and help to the Germans in their task.

But German trade unionists had very different ideas from the British trade unions on how to organise their industrial relations, ideas that the British consistently ignored over the post-war period. These included:

    • In Britain there were hundreds of trade unions, but in Germany only 16 were re-established after the war, each representing one or more industries, thereby avoiding the demarcation disputes so common in Britain.
    • Terms and conditions were negotiated on this industry basis by strong, well-funded trade unions, which welcomed the fact that their two- or three-year collective agreements were legally enforceable in Germany’s system of industrial courts.
    • Trade unions were not involved in workplace grievances and disputes. These were left to employees and managers, meeting together in Germany’s highly successful works councils, to resolve informally, alongside consultative exercises on working practices and company reorganisations. As a result, German companies did not seek to lay off staff on any fall in demand, as British companies did, but rather to retrain and reallocate them.

British trade unions pleaded that their very untidy institutional structure, with hundreds of competing trade unions, was what their members actually wanted and should therefore be free from government interference. The trade unions jealously guarded their privileges and especially rejected any idea of industry-based unions, legally enforceable collective agreements and works councils.

A heavyweight Royal Commission was appointed, but after three years’ deliberation, it came up with little more than the status quo. It was reluctant to study any ideas emanating from Germany.

While the success of industrial relations in Germany was widely recognised in Britain, there was little understanding of why this was so, or indeed much interest in it. The British were deeply conservative about the ‘institutional shape’ of industrial relations and feared putting forward any radical German ideas. Britain was therefore at a big disadvantage when it came to creating modern trade unions operating in a modern state.

So, what was the economic price of the failure to sort out the institutional structure of the British trade unions?

Winning the capital, winning the war: retail investors in the First World War

by Norma Cohen (Queen Mary University of London)


National War Savings Committee poster. McMaster University Libraries, Identifier: 00001792. Available at Wikimedia Commons.

The First World War brought about an upheaval in British investment, forcing savers to repatriate billions of pounds held abroad and attracting new investors among those living far from London, this research finds. The study also points to declining inequality between Britain’s wealthiest classes and the middle class, and rising purchasing power among the lower middle classes.

The research is based on samples from ledgers of investors in successive War Loans. These are lodged in archives at the Bank of England and have been closed for a century. The research covers roughly 6,000 samples from three separate sets of ledgers of investors between 1914 and 1932.

While the First World War is recalled as a period of national sacrifice and suffering, the reality is that war boosted Britain’s output. Sampling from the ledgers points to the extent to which war unleashed the industrial and engineering innovations of British industry, creating and spreading wealth.

Britain needed capital to ensure it could outlast its enemies. As the world’s capital exporter by 1914, the nation imposed increasingly tight measures on investors to ensure capital was used exclusively for war.

While London was home to just over half the capital raised in the first War Loan in 1914, that share had fallen to just under 10% of capital raised in the post-war years. In contrast, the North East, North West and Scotland – home to the mining, engineering and shipbuilding industries – provided 60% of the capital by 1932, up from a quarter of the total raised by the first War Loan.

The concentration of investor occupations also points to profound social changes fostered by war. Men describing themselves as ‘gentleman’ or ‘esquire’ – titles accorded those wealthy enough to live on investment returns – accounted for 55% of retail investors for the first issue of War Loan. By the post-war years, these were 37% of male investors.

In contrast, skilled labourers – blacksmiths, coal miners and railway signalmen among others – were 9.0% of male retail investors by the post-war years, up from 4.9% in the first sample.

Suppliers of war-related goods may not have been the main beneficiaries of newly-created wealth. The sample includes large investments by those supplying consumer goods sought by households made better off by higher wages, steady work and falling unemployment during the war.

During and after the war, these sectors were accused of ‘profiteering’, sparking national indignation. Nearly a quarter of investors in 5% War Loan listing their occupations as ‘manufacturer’ were producing boots and leather goods, a sector singled out during the war for excess profits. Manufacturers in the final sample produced mineral water, worsteds, jam and bread.

My findings show that War Loan was widely held by households likely to have had relatively modest wealth; while the largest concentration of capital remained in the hands of relatively few, larger numbers had a small stake in the fate of the War Loans.

In the post-war years, over half of male retail investors held £500 or less. This may help to explain why efforts to pay for war by taxing wealth as well as income – a debate that echoes today – proved so politically challenging. The rentier class on whom additional taxation would have been levied may have been more of a political construct by 1932 than an actual presence.


EHS 2018 special: Ownership and control of land by women in nineteenth-century England

by Janet Casson (independent scholar)


A 19th Century English countryside landscape, oil on canvas, anonymous.

The HS2 train route between London and Birmingham has been modified in response to outrage from people concerned about the impact on their property. This is nothing new. Over 150 years ago, railways cut through the English countryside to provide new infrastructure for an expanding economy. Railway surveyors laying out a route made detailed maps and carefully recorded the usage and ownership of every affected property in books of reference.

The complexity of the laws governing the rights of women has meant that women’s land ownership in the nineteenth century has rarely been investigated. Indeed, it was widely believed that the law deterred women’s ownership of land.

These railway books of reference offer a unique window onto this rarely investigated topic, including women’s control of land. Statistical analysis of the information reveals that women owned, either singly or jointly, about 12% of that land.

Detailed profiles of 348 women and their property give an insight not only into the ownership but also the control of land. They reveal whether a woman shared ownership and, if so, with whom; a woman owning alone had a higher degree of control than a woman owning with others. They indicate the amount of land, the woman’s wealth and her potential influence over other people. If she had a multi-plot portfolio, its geographical dispersal indicates whether her influence was local, regional or even national.

Women who owned with men were regarded as having little control over land. Before the 1882 Married Women’s Property Act, wives were constrained by common law: they could own real property, but lost independent control of its management and the use of any rents or profits unless they had a settlement or trust. Women who owned with an institution had least control given that institutions had statutory powers and often protracted decision-making.

Many women held their property as sole owners (average 35.5%) and were confident to own and control large portfolios. Where women shared ownership, it was usually with men (average 42.0%) rather than exclusively with other women.

There was a trade-off between exercising strong control over a few properties that could be self-managed or weaker control over more properties where co-owners shared the administration. Similarly, a trade-off existed between owning many local properties or fewer widely dispersed properties where, to maximise the economic return on the plots, co-owners were needed for their local knowledge.

The size of property portfolios varied across regions. They were smallest in London, possibly reflecting the high property prices and the significant number of single women living in the suburbs; and largest in Durham where several women owned large national portfolios.

An average of 24% of plots was held by single-plot-owning women. But the typical portfolio comprised 2-5 plots (37.6%). Larger portfolios of 10 or more plots were also fairly common (24.1%), and were often geographically dispersed – across a county, region or nationally.

The picture that emerges from this analysis is that many women as sole owners enjoyed considerable autonomy in the control of their portfolios. Where they relied on others, they typically relied on men.

But as the diversity of their portfolios increased, women did not increase their dependence on men but chose to retain their autonomy instead. Women, it appears, valued their autonomy, and did their best to maintain and protect it.