Surprisingly gentle confinement

Tim Leunig (LSE), Jelle van Lottum (Huygens Institute) and Bo Poulsen (Aalborg University) have been investigating the treatment of prisoners of war in the Napoleonic Wars.

 

Napoleonic Prisoner of War. Available at <https://blog.findmypast.com.au/explore-our-fascinating-new-napoleonic-prisoner-of-war-records-1406376311.html>

For most of history, life as a prisoner of war was nasty, brutish and short. There were no regulations on the treatment of prisoners until the 1899 Hague Convention and the later Geneva Conventions. Many prisoners were killed immediately; others were enslaved to work in mines and other undesirable places.

The poor treatment of prisoners of war was partly intentional – they were the hated enemy, after all. And partly it was economic. It costs money to feed and shelter prisoners. Countries in the past – especially in times of war and conflict – were much poorer than today.

Nineteenth-century prisoner death rates were horrific. Between one half and six sevenths of the 17,000 of Napoleon’s troops who surrendered to the Spanish in 1808 after the Battle of Bailén died as prisoners of war. The American Civil War saw death rates rise to 27%, even though the average prisoner was held for less than a year.

The Napoleonic Wars saw the British capture 7,000 Danish and Norwegian sailors, military and merchant. Britain did not desire war with Denmark (which ruled Norway at the time), but fought to prevent Napoleon from seizing the Danish fleet. Prisoners were incarcerated on old, unseaworthy “prison hulks”, moored in the Thames Estuary, near Rochester. Conditions were crowded: each man was given just 2 feet (60 cm) in width to hang his hammock.

Were these prison hulks floating tombs, as some contemporaries claimed? Our research shows otherwise. The Admiralty kept exemplary records, now held in The National Archives at Kew. These show the date of arrival in prison, and the date of release, exchange, escape – or death. They also tell us the age of the prisoner, where they came from, the type of ship they served on, and whether they were an officer, craftsman or regular sailor. We can use these records to look at how many died, and why.

The prisoners ranged in age from 8 to 80, with half aged 22 to 35. The majority sailed on merchant vessels, with a sixth on military vessels and a quarter on licensed pirate boats, permitted to harass British shipping. The amount of time in prison varied dramatically, from 3 days to over 7 years, with an average of 31 months. About two thirds were released before the end of the war.

Taken as a whole, 5% of prisoners died. This is a remarkably low number, given how long they were held, and given experience elsewhere in the nineteenth century. Being held prisoner for longer increased your chance of dying, but not by much: those who spent three years on a prison hulk had only a 1% greater chance of dying than those who served just one year.

Death was (almost) random. Being captured at the start of the war was neither better nor worse than being captured at the end. The number of prisoners held at any one time did not increase the death rate. The old were no more likely to die than the young – anyone fit enough to go to sea was fit enough to withstand the rigours of prison life. Despite extra space and better rations, officers were no less likely to die, implying that conditions were reasonable for common sailors.

There was only one exception: sailors from licensed pirate boats were twice as likely to die as merchant or regular navy sailors. We cannot know the reason. Perhaps they were treated less well by their guards, or other prisoners. Perhaps they were risk takers, who gambled away their rations. Even for this group, however, the death rates were very low compared with those captured in other places, and in other wars.

The British had rules on prisoners of war, for food and hygiene. Each prisoner was entitled to 2.5 lbs (~1 kg) of beef, 1 lb of fish, 10.5 lbs of bread, 2 lbs of potatoes, 2.5 lbs of cabbage, and 14 pints (8 litres) of (very weak) beer a week. This is not far short of Danish naval rations, and prisoners are less active than sailors. We cannot be sure that they received their rations in full every week, but the death rates suggest that they were not hungry in any systematic way. The absence of epidemics suggests that hygiene was also good. Remarkably, and despite a national debt that peaked at a still unprecedented 250% of GDP, the British appear to have obeyed their own rules on how to treat prisoners.

Far from being floating tombs, therefore, this was a surprisingly gentle confinement for the Danish and Norwegian sailors captured by the British in the Napoleonic Wars.

Britain’s post-Brexit trade: learning from the Edwardian origins of imperial preference

by Brian Varian (Swansea University)

Imperial Federation, map of the world showing the extent of the British Empire in 1886. Wikimedia Commons

In December 2017, Liam Fox, the Secretary of State for International Trade, stated that ‘as the United Kingdom negotiates its exit from the European Union, we have the opportunity to reinvigorate our Commonwealth partnerships, and usher in a new era where expertise, talent, goods, and capital can move unhindered between our nations in a way that they have not for a generation or more’.

As policy-makers and the public contemplate a return to the halcyon days of the British Empire, there is much to be learned from those past policies that attempted to cultivate trade along imperial lines. Let us consider the effect of the earliest policies of imperial preference: policies enacted during the Edwardian era.

In the late nineteenth century, Britain was the bastion of free trade, imposing tariffs on only a very narrow range of commodities. Consequently, Britain’s free trade policy afforded barely any scope for applying lower or ‘preferential’ duties to imports from the Empire.

The self-governing colonies of the Empire possessed autonomy in tariff-setting and, with the notable exception of New South Wales, did not emulate the mother country’s free trade policy. In the 1890s and 1900s, when the emergent industrial nations of Germany and the United States reduced Britain’s market share in these self-governing colonies, there was indeed scope for applying preferential duties to imports from Britain, in the hope of diverting trade back toward the Empire.

Trade policies of imperial preference were implemented in succession by Canada (1897), the South African Customs Union (1903), New Zealand (1903) and Australia (1907). By the close of the first era of globalisation in 1914, Britain enjoyed some margin of preference in all of the Dominions. Yet my research, a case study of New Zealand, casts doubt on the effectiveness of these policies at raising Britain’s share in the imports of the Dominions.

Unlike the policies of the other Dominions, New Zealand’s policy applied preferential duties to only selected commodity imports (44 out of 543). This cross-commodity variation in the application of preference is useful for estimating the effect of preference. I find that New Zealand’s Preferential and Reciprocal Trade Act of 1903 had no effect on the share of the Empire, or of Britain specifically, in New Zealand’s imports.

Why was the policy ineffective at raising Britain’s share of New Zealand’s imports? There are several likely reasons: that Britain’s share was already quite large; that some imported commodities were highly differentiated and certain varieties were only produced in other industrial countries; and, most importantly, that the margin of preference – the extent to which duties were lower for imports from Britain – was too small to effect any trade diversion.

As Britain considers future trade agreements, perhaps with Commonwealth countries, it should be remembered that a trade agreement does not necessarily entail a great, or even any, increase in trade. The original policies of imperial preference were rather symbolic measures and, at least in the case of New Zealand, economically inconsequential.

Brexit might well present an ‘opportunity to reinvigorate our Commonwealth partnerships’, but would that be a reinvigoration in substance or in appearance?

Could fiscal policy still stimulate the economy?

by James Cloyne (University of California, Davis), Nicholas Dimsdale (University of Oxford), Natacha Postel-Vinay (London School of Economics)

 

No means test for these ‘unemployed’! by Maro.
1935 was the Silver Jubilee of King George V. There were celebrations and street parties across Britain. However, with the country in a financial depression, not everyone approved of the public expense associated with the Royal Family. Available at Wikimedia Commons

There has been a longstanding and unresolved debate over the fiscal multiplier, which is the change in economic growth resulting from a change in government spending or change in taxation. The issue became acute in the world recession of 2008-2010, when the International Monetary Fund led a spirited discussion about the contribution that fiscal policy could make to recovery.
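As a rough illustration in standard textbook notation (not necessarily the authors’ own): the spending multiplier is \(k_G = \Delta Y / \Delta G\) and the tax multiplier is \(k_T = -\Delta Y / \Delta T\), where \(Y\) is output, \(G\) government spending and \(T\) tax revenue. A multiplier ‘greater than unity’ therefore means that each pound of stimulus, or of tax cut, raises output by more than one pound.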

In our research, fiscal policy is shown to have had positive impacts on growth, at least during the period surrounding the Great Depression in Britain. The implications for the potential benefits of fiscal policy in a high-debt, low-interest rate environment – and over a turbulent business cycle – may be significant.

The recent controversy follows the debate over the use of fiscal policy to counter the high level of unemployment in interwar Britain. Keynes argued that increased government spending would raise economic activity and reduce unemployment. In the General Theory (1936), he claimed that the multiplier for government expenditure was greater than unity.

A few more recent studies have confirmed that the multiplier effect is greater than unity for both the interwar and post-war periods. But these results may be spurious: a rise in government expenditure that raises income may itself be the result of a rise in income, so fiscal changes and changes in income are not independent. What we observe is a strong co-movement of GDP and fiscal measures, in which it is hard to isolate the direction of causation.

What is needed is a source of exogenous variation, so that the impact of fiscal changes on GDP can be observed. Fiscal policy may take the form of changes in taxes or expenditure. The problems of endogeneity are generally greater for expenditure than for taxes, since it should be possible to find changes in taxes that are truly exogenous.

Romer and Romer (2010) have developed the so-called ‘narrative technique,’ which has been designed to overcome the problem of endogeneity of tax changes. This involves carefully distilling the historical record in order to infer Chancellors’ motivations behind each fiscal policy move, and isolate those that may be seen as more independent from the contemporaneous fluctuations of the economy.

One may thus be able to distinguish, for example, between tax changes that arise from a direct desire to stimulate the economy and changes motivated instead by a Chancellor’s longstanding ideology. The latter may include, for example, a will to improve transport efficiency within the country, or a desire to make society less unequal.

Interwar Britain is a particularly appropriate period to apply this approach, since the potential for fiscal policy was great on account of the high level of unemployment. In addition, this was a period in which Keynesian countercyclical policies were generally not used, in contrast to the use of demand management policies in the post-war period.

By examining changes in taxes in interwar budgets, we have been able to produce a sample of 300 tax changes, each classified as either endogenous or exogenous. We have also been able to test the validity of this classification retrospectively.

The outcome of this work has been to show that changes in taxes that are exogenous had a major impact on changes in GDP. The estimated value of the multiplier for these tax changes is greater than unity and as much as two to three. This is in accordance with results reported in post-war studies of the United States and a study of tax changes in post-war Britain (Cloyne, 2013).

In contrast to earlier work on measuring the multiplier, we concentrate on changes in taxes rather than changes in government expenditure. This is done to reduce problems of endogeneity.

While Keynes argued for using government spending to stimulate the economy, it was only when post-war fiscal policies were being formulated that the potential benefits of fiscal policies via changes in taxes were recognised. While this research does not argue in favour of tax changes over spending policies, it provides evidence that tax policy is a relevant part of the policy toolkit, especially in times of economic difficulty.

Lessons for the euro from Italian and German monetary unification in the nineteenth century

by Roger Vicquéry (London School of Economics)

Special euro-coin issued in 2012 to celebrate the 150th anniversary of the monetary unification of Italy. From Numismatica Pacchiega, available at <https://www.numismaticapacchiega.it/5-euro-annivesario-unificazione/>

Is the euro area sustainable in its current membership form? My research provides new lessons from past examples of monetary integration, looking at the monetary unification of Italy and Germany in the second half of the nineteenth century.

 

Currency areas’ optimal membership has recently been at the forefront of the policy debate, as the original choice of letting peripheral countries join the euro was widely blamed for the common currency’s existential crisis. Academic work on ‘optimum currency areas’ (OCA) traditionally warned against the risk of adopting a ‘one size fits all’ monetary policy for regions with differing business cycles.

Krugman (1993) even argued that monetary unification in itself might increase its own costs over time, as regions are encouraged to specialise and thus become more different to one another. But those concerns were dismissed by Frankel and Rose’s (1998) influential ‘OCA endogeneity’ theory: once regions with ex-ante diverging paths join a common currency, they will see their business cycle synchronise progressively ex-post.

My findings question the consensus view in favour of ‘OCA endogeneity’ and raise the issue of the adverse effects of monetary integration on regional inequality. I argue that the Italian monetary unification played a role in the emergence of the regional divide between Italy’s Northern and Southern regions by the turn of the twentieth century.

I find that pre-unification Italian regions experienced largely asymmetric shocks, pointing to high economic costs stemming from the 1862 Italian monetary unification. While money markets in Northern Italy were synchronised with the core of the European monetary system, Southern Italian regions tended to move together with the European periphery.

The Italian unification is an exception in this respect, as I show that other major monetary arrangements in this period, particularly the German monetary union but also the Latin Monetary Convention and the Gold Standard, occurred among regions experiencing high shock synchronisation.

Contrary to what ‘OCA endogeneity’ would imply, shock asymmetry among Italian regions actually increased following monetary unification. I estimate that pairs of Italian provinces that came to be integrated following unification became, over four decades, up to 15% more dissimilar to one another in their economic structure compared to pairs of provinces that already belonged to the same monetary union. This means that, in line with Krugman’s pessimistic take on currency areas, economic integration in itself increased the likelihood of asymmetric shocks.
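To see what ‘dissimilarity in economic structure’ can mean in practice, here is a minimal sketch of a Krugman-style specialisation index – the sum of absolute differences in two provinces’ sectoral shares. The function and the toy data are assumptions for illustration, not the paper’s actual measure:

```python
# Hypothetical Krugman-style dissimilarity index between two provinces.
# The sectoral shares below are invented for illustration.

def dissimilarity(shares_a, shares_b):
    """Sum of absolute differences in sectoral employment shares.

    0 = identical economic structure; 2 = completely disjoint specialisation.
    """
    sectors = set(shares_a) | set(shares_b)
    return sum(abs(shares_a.get(s, 0.0) - shares_b.get(s, 0.0)) for s in sectors)

# Toy example: a more industrial province vs a more agricultural one
north = {"agriculture": 0.45, "textiles": 0.35, "services": 0.20}
south = {"agriculture": 0.70, "textiles": 0.10, "services": 0.20}
print(round(dissimilarity(north, south), 2))  # 0.5
```

On a measure of this kind, ‘15% more dissimilar’ means the index for newly integrated pairs of provinces drifted that much further from zero than for pairs that already shared a currency.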

In this respect, the global grain crisis of the 1880s, disproportionately affecting the agricultural South while Italy pursued a restrictive monetary policy, might have laid the foundations for the Italian ‘Southern Question’. As pointed out by Krugman, asymmetric shocks in a currency area with low transaction costs can lead to a permanent loss of regional income, as prices are unable to adjust fast enough to prevent factors of production from permanently leaving the affected region.

The policy implications of this research are twofold.

First, the results caution against the prevalent view that cyclical symmetry within a currency area is bound to improve by itself over time. In particular, the role of specialisation and factor mobility in driving cyclical divergence needs to be reassessed. As the euro area moves towards more integration, additional specialisation of its regions could further magnify – by increasing the likelihood of asymmetric shocks – the challenges posed by the ‘one size fits all’ policy of the European Central Bank on the periphery.

Second, the Italian experience of monetary unification underlines how the sustainability of currency areas is chiefly related to political will rather than economic costs. Although the Italian monetary union was sub-optimal from the start, and to a large extent remained so, it has survived unscathed for a century and a half. While the OCA framework is a good predictor of currency areas’ membership and economic performance, their sustainability is likely to be a matter of political integration.

London fog: a century of pollution and mortality, 1866-1965

by Walker Hanlon (UCLA)

Photogravure by Donald Macleish from Wonderful London by St John Adcock, 1927. Available at <https://www.flickr.com/photos/norfolkodyssey/23695833473>

For more than a century, London struggled with some of the worst air pollution on earth. But how much did air pollution affect health in London? How did these effects change as the city developed? Can London’s long experience teach us lessons that are relevant for modern cities, from Beijing to New Delhi, that are currently struggling with their own air pollution problems?

To answer these questions, I study the effects of air pollution in London across a full century, from 1866 to 1965. Using new data, I show that air pollution was a major contributor to mortality in London throughout this period – accounting for at least one in every 200 deaths.

As London developed, the impact of air pollution changed. In the nineteenth century, Londoners suffered from a range of infectious diseases, including respiratory diseases like measles and tuberculosis. I show that being exposed to high levels of air pollution made these diseases deadlier, while the presence of these diseases made air pollution more harmful. As a result, when public health and medical improvements reduced the prevalence of these infectious diseases, they also lowered the mortality cost of pollution exposure.

This finding has implications for modern developing countries. It tells us that air pollution is likely to be more deadly in the developing world, but also that investments that improve health in other ways can lower the health costs of pollution exposure.

An important challenge in studying air pollution in the past is that direct pollution measures were not collected in a consistent way until the mid-twentieth century. To overcome this challenge, this study takes advantage of London’s famous fog events, which trapped pollution in the city and substantially increased exposure levels.

While some famous fog events are well known – such as the Great Fog of 1952 or the Cattle Show Fog of 1873, which killed the Queen’s prize bull – London experienced hundreds of lesser-known events over the century I study. By reading weather reports from the Greenwich Observatory covering over 26,000 days, we identified every day in which heavy fog occurred.

To study how these fog events affected health, I collected detailed new mortality data describing deaths in London at the weekly level. Digitised from original sources, and covering over 350,000 observations, this new data set opens the door to a more detailed analysis of London’s mortality experience than has previously been possible.

These new mortality data allow me to analyse the effects of air pollution from a variety of different angles. I provide new evidence on how the effects of air pollution varied across age groups, how those effects evolved over time, and how pollution interacted with infectious diseases and other causes of death. This enriches our understanding of London’s history while opening up a range of new possibilities for studying the impact of air pollution over the long run.

Cash Converter: The Liquidity of the Victorian Capital Market

by John Turner (Queen’s University Centre for Economic History)

Liquidity is the ease with which an asset such as a share or a bond can be converted into cash. It is important for financial systems because it enables investors to liquidate and diversify their assets at a low cost. Without liquid markets, portfolio diversification becomes very costly for the investor. As a result, firms and governments must pay a premium to induce investors to buy their bonds and shares. Liquid capital markets also spur firms and entrepreneurs to invest in long-run projects, which increases productivity and economic growth.

From an historical perspective, share liquidity in the UK played a major role in the widespread adoption of the company form in the second half of the nineteenth century. Famously, as I discuss in a recent book chapter published in the Research Handbook on the History of Corporate and Company Law, political and legal opposition to share liquidity held up the development of the company form in the UK.

However, given the economic and historical importance of liquidity, very little has been written on the liquidity of UK capital markets before 1913. Ron Alquist (2010) and Matthieu Chavaz and Marc Flandreau (2017) examine the liquidity risk and premia of various sovereign bonds which were traded on the London Stock Exchange during the late Victorian and early Edwardian eras. Along with Graeme Acheson (2008), I document the thinness of the market for bank shares in the nineteenth century, using the share trading records of a small number of banks.

In a major study, Gareth Campbell (Queen’s University Belfast), Qing Ye (Xi’an Jiaotong-Liverpool University) and I have recently attempted to understand more about the liquidity of the Victorian capital market. To this end, we have just published a paper in the Economic History Review which looks at the liquidity of the London share and bond markets from 1825 to 1870. The London capital market experienced considerable growth in this era. The liberalisation of incorporation law, and Parliament’s liberalism in granting company status to railways and other public-good providers, resulted in growth in the number of business enterprises whose shares and bonds were traded on stock exchanges. In addition, from the 1850s onwards, there was an increase in the number of foreign countries and companies raising bond finance on the London market.

How do we measure the liquidity of the market for bonds and stocks in the 1825-70 era? Using end-of-month stock price data from a stockbroker list called the Course of the Exchange, and end-of-month bond prices from newspaper sources, we calculate for each security the number of months in the year with a zero return, divided by the number of months it was listed in that year. Because zero returns are indicative of illiquidity (i.e., that a security has not been traded), one minus this illiquidity ratio gives us a liquidity measure for each security in our sample. We calculate overall market liquidity for shares and bonds by taking averages. Figure 1 displays market liquidity for bonds and stocks for the period 1825-70.
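As a rough sketch of this zero-return measure – with a made-up price series standing in for the Course of the Exchange data – the calculation for a single security looks something like this:

```python
# Minimal sketch of the zero-return (il)liquidity measure described above.
# The price list below is invented for illustration, not taken from the paper.

def liquidity(monthly_prices):
    """One minus the share of monthly returns equal to zero.

    A zero return is read as a month in which the security did not trade,
    so a higher value means a more liquid security.
    """
    returns = [(b - a) / a for a, b in zip(monthly_prices, monthly_prices[1:])]
    if not returns:
        return None  # listed for under two months: measure undefined
    zero_months = sum(1 for r in returns if r == 0)
    return 1 - zero_months / len(returns)

# Toy example: the price moves in only 3 of 11 monthly returns
prices = [100, 100, 100, 102, 102, 102, 101, 101, 101, 101, 103, 103]
print(round(liquidity(prices), 2))  # 0.27
```

Averaging this security-level measure across all shares, or all bonds, in a given year gives the market-level series plotted in Figure 1.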

Figure 1. Stock and bond liquidity on the London Stock Exchange, 1825-1870. Source: Campbell, Turner and Ye (2018, p. 829)

Figure 1 reveals that bond market liquidity was relatively high throughout this period but shows no strong trend over time. By way of contrast, there was a strong secular increase in stock liquidity from 1830 to 1870. This increase may have stimulated greater participation in the stock market by ordinary citizens. It may also have contributed to the growth and deepening of the overall stock market, and so to higher economic growth.

We examine the cross-sectional differences in liquidity between stocks in order to understand the main determinants of stock liquidity in this era. Our main finding in this regard is that firm size and the number of issued shares were major correlates of liquidity, which suggests that larger firms and firms with a greater number of shares were more frequently traded. Our study also reveals that unusual features which were believed to impede liquidity, such as extended liability, uncalled capital or high share denominations, had little effect on stock liquidity.

We also examine whether asset illiquidity was priced by investors, resulting in higher costs of capital for firms and governments. We find little evidence that the illiquidity of stock or bonds was priced, suggesting that investors at the time did not put much emphasis on liquidity in their valuations. Indeed, this is consistent with J. B. Jefferys (1938), who argued that what mattered to investors during this era was not share liquidity, but the dividend or coupon they received.

In conclusion, the vast majority of stocks and bonds in this early capital market were illiquid. It is remarkable, however, that despite this illiquidity, the UK capital market grew substantially between 1825 and 1870. There was also an increase in investor participation, with investing becoming progressively democratised in this era.

 

To contact the author: j.turner@qub.ac.uk
Twitter: @profjohnturner

 

Bibliography:

Acheson, G.G., and Turner, J.D. “The Secondary Market for Bank Shares in Nineteenth-Century Britain.” Financial History Review 15, no. 2 (October 2008): 123–51. doi:10.1017/S0968565008000139.

Alquist, R. “How Important Is Liquidity Risk for Sovereign Bond Risk Premia? Evidence from the London Stock Exchange.” Journal of International Economics 82, no. 2 (November 1, 2010): 219–29. doi:10.1016/j.jinteco.2010.07.007.

Campbell, G., Turner, J.D., and Ye, Q. “The Liquidity of the London Capital Markets, 1825–70.” The Economic History Review 71, no. 3 (August 1, 2018): 823–52. doi:10.1111/ehr.12530.

Chavaz, M., and Flandreau, M. “‘High & Dry’: The Liquidity and Credit of Colonial and Foreign Government Debt and the London Stock Exchange (1880–1910).” The Journal of Economic History 77, no. 3 (September 2017): 653–91. doi:10.1017/S0022050717000730.

Jefferys, J.B. Trends in Business Organisation in Great Britain Since 1856: With Special Reference to the Financial Structure of Companies, the Mechanism of Investment and the Relations Between the Shareholder and the Company. University of London, 1938.

Global Trade and the Transformation of Consumer Cultures

by Beverly Lemire (University of Alberta)

The Society has arranged with CUP that a 20% discount is available on this book, valid until the 11th October 2018. The discount page is: www.cambridge.org/ehs20

 

Our ancestors knew the comfort of a pipe. But some may have preferred the functionality of cigarettes, an alternative to the rituals of nursing tobacco embers. Historic periods are defined by habits and fashions, manifesting economic and political systems, legal and illegal. These are the focus of my recent book. New networks of exchange, cross-cultural contact and material translation defined the period c. 1500-1820. Tobacco is one thematic focus. I trace how global societies domesticated a Native American herb and Native American forms of tobacco. Its spread distinguishes this period, when the Americas were fully integrated into global systems, from all others. Native American knowledge, lands and communities then faced determined intervention from all quarters. This crop became commoditized within decades, eluding censure to become an essential component of sociability, whether in Japan or Southeast Asia, the West Coast of Africa or the courts of Europe. [Figure 1]

Figure 1. Malayan and his wife in Batavia, with pipe.

 

Tobacco is a denominator of the early global era, grown in almost every context by 1600 and incorporated into diverse cultural and material modes. Importantly, its capacity to ease fatigue was quickly noted by military and imperial administrations and soon used to discipline or encourage essential labour. A sacred herb was transposed into a worldly good. Modes of coercive consumption were notable in the western slave trade, as well as on plantations. Tobacco also served disciplinary roles among workers essential to the movement of cargoes; deep-sea long-distance sailors and riverine paddlers in the North American fur trade were vulnerable to exploitation on account of their dependence on tobacco during long stints of back-breaking labour.

Early global trade built on established commercial patterns – most importantly the textile trade, including the long-standing exchange of fabric for fur. The fabric / fur dynamic linked northern and southern Eurasia and north Africa, a pattern of elite and non-elite consumption that surged after the late 1500s, especially with the establishment of the Qing dynasty in China (1636-1912), with its deep cultural preference for furs. Equally important, deepening trade on the northeast coast of North America formalized Indigenous Americans’ appetite for cloth, willingly bartered for furs. The fabric / fur exchange preceded and continued with western colonization in the Americas. Meanwhile, on both sides of the Bering Strait and along the northwest coast of America, Indigenous communities were pulled more fully into the Qing economic orbit, with its boundless demand for peltry. Russian imperial expansion also served this commerce. The ecologies touched by this capacious trade extended worldwide, memorialized in surviving Qing fur garments and in the secondhand beaver hats traded for slaves in West Africa.

I routinely incorporate object study in my analysis, an essential way to assess the dynamism of consumer practice. I trawled museum collections as commonly as archives and libraries, where I found essential evidence of globalized fads and fashions. The strategies of one Qing-era man are revealed as he navigated Chinese sumptuary laws while attempting to demonstrate fashion (on a budget). His seemingly mink-lined robe used this costly fur only where it was visible. Sheepskin lined all the hidden areas. His concern for thrift is laid bare, along with his love of style.

Elsewhere in the book, I trace responses to early globalism through translations and interpretations of early global Asian designs, in needlework. The movement of people, as well as vast cargoes, stimulated these expressive fashions, ones that required minimal investment and gave voice to the widest range of women and men. The flow of Asian patterned goods and (often forced) relocation of Asian embroiderers to Europe began this tale – both increased the clamour for floral-patterned wares. This analysis culminates in North America with the turn from geometric to floral patterning among Indigenous embroiderers. They, too, responded to the influx of Asian floriated things. Europeans were intermediaries in this stage of the global process.

Human desires and shifting tastes are recurring themes, expressed in efforts to acquire new goods through various entrepreneurial channels. ‘Industriousness’ was manifest by women of many ethnicities through petty market-oriented trade, as well as waged employment, often working at the margins of formal commerce. Industriousness, legal and extralegal, large and small, flourished in conjunction with large-scale enterprise. Extralegal activities irritated administrators, however, who wanted only regulated and measurable business. Nonetheless, extralegal activities were ubiquitous in every imperial realm and an important vein of entrepreneurship. My case studies in extralegal ventures range from the traffic in tropical shells in Kirkcudbright, Scotland, and the lucrative smuggling of European wool cloth into Qing China (a new mode among urban cognoscenti), to the harvesting of peppercorns from a Kentish beach, illustrating the importance of shipwrecks in redistributing cargoes to coastal communities everywhere. [Figure 2] Coastal peoples were schooled in the materials of globalism, cast up by the tides, though some authorities might call them criminal. Ultimately, the shifting materials of daily life marked this dynamic history.

Figure 2. Shipwreck of the DEGRAVE, East Indiaman. The Adventures of Robert Drury, During Fifteen Years Captivity on the Island of Madagascar … (London: W. Meadows, 1807). Library of Congress, Digital Prints and Photographs, Washington, D.C.

 

To contact the author: Lemire@ualberta.ca

Small Bills and Petty Finance: co-creating the history of the Old Poor Law

by Alannah Tomkins (Keele University) 

Alannah Tomkins and Professor Tim Hitchcock (University of Sussex) won an AHRC award to investigate ‘Small Bills and Petty Finance: co-creating the history of the Old Poor Law’. It is a three-year project running from January 2018. The application was for £728K, which has been raised, through indexing, to £740K. The project website can be found at: thepoorlaw.org.

 

Twice in my career I’ve been surprised by a brick – or more precisely by bricks, hurtling into my research agenda. In the first instance I found myself supervising a PhD student working on the historic use of brick as a building material in Staffordshire (from the sixteenth to the eighteenth centuries). The second time, the bricks snagged my interest independently.

The AHRC-funded project ‘Small bills and petty finance’ did not set out to look for bricks. Instead it promises to explore a little-used source for local history, the receipts and ‘vouchers’ gathered by parish authorities as they relieved or punished the poor, to write multiple biographies of the tradesmen and others who serviced the poor law. A parish workhouse, for example, exerted a considerable influence over a local economy when it routinely (and reliably) paid for foodstuffs, clothing, fuel and other necessaries. This influence or profit-motive has not been studied in any detail for the poor law before 1834, and vouchers’ innovative content is matched by an exciting methodology. The AHRC project calls on the time and expertise of archival volunteers to unfold and record the contents of thousands of vouchers surviving in the three target counties of Cumbria, East Sussex and Staffordshire. So where do the bricks come in?

The project started life in Staffordshire as a pilot in advance of AHRC funding. The volunteers met at the Stafford archives and started by calendaring the contents of vouchers for the market town of Uttoxeter, near the Staffordshire/Derbyshire border. And the Uttoxeter workhouse did not confine itself to accommodating and feeding the poor. Instead in the 1820s it managed two going concerns: a workhouse garden producing vegetables for use and sale, and a parish brickyard. Many parishes under the poor law embedded make-work schemes in their management of the resident poor, but no others that I’m aware of channelled pauper labour into the manufacture of bricks.


The workhouse and brickyard were located just to the north of the town of Uttoxeter, in an area known as The Heath. The land was subsequently used to build the Uttoxeter Union workhouse in 1837-8 (after the reform of the poor law in 1834), so no signs of the brickyard remain in the twenty-first century. It was, however, one of several such yards identified at The Heath in the tithe map for Uttoxeter of 1842, and probably made use of a fixed kiln rather than a temporary clamp. This can be deduced from the parish’s sale of both bricks and tiles to brickyard customers. Tiles were more refined products than bricks and required more control over the firing process, whereas clamp firings were more difficult to regulate. The yard provided periodic employment to the adult male poor of the Uttoxeter workhouse, in accordance with the seasonal pattern imposed on all brick manufacture at the time. Firings typically began in March or April each year, and continued until September or October depending on the weather.

This is important because the variety of vouchers relating to the parish brickyard allows us to understand something of its place in the town’s economy, both as a producer and as a consumer of other products and services. Brickyards needed coal, so it is no surprise that one of the major expenses for the support of the yard lay in bringing coal to the town from elsewhere via the canal. The Uttoxeter canal wharf was also at The Heath, and access to transport by water may explain the development of a number of brickyards in its proximity. The yard also required wood and other raw materials in addition to clay, and specific products to protect the bricks after cutting but before firing. The parish bought quantities of archangel mats, rough woven pieces that could be used like a modern protective fleece to guard against frost damage. We surmise that Uttoxeter used the mats to cover both the bricks and any tender plants in the workhouse garden.


Similarly, the bricks were sold chiefly to local purchasers, including members of the parish vestry. Some men who were owed money by the parish for their work as suppliers allowed the debt to be offset by bricks. Finally, the employment of workhouse men as brickyard labourers gives us, when combined with some genealogical research, a rare glimpse of the place of workhouse work in the life-cycle of the adult poor. More than one man employed at the yard in the 1820s and 1830s went on to independence as a lodging-house keeper in the town by the time of the 1841 census.

As I say, I’ve been surprised by brick. I had no idea that such a mundane product would prove so engaging. All this goes to show that it’s not the stolidity of the brick but its deployment that matters, historically speaking.

 

To contact the author: a.e.tomkins@keele.ac.uk


Wages of sin: slavery and the banks, 1830-50

by Aaron Graham (University College London)

 

From the cartoon ‘Slave Emancipation; Or, John Bull Gulled Out Of Twenty Millions’ by C.J. Grant. In Richard Pound (UCL, 1998), C.J. Grant’s ‘Political Drama’: a radical satirist rediscovered. Available at <https://www.ucl.ac.uk/lbs/project/logo/>

In 1834, the British Empire emancipated its slaves. This should have quickly triggered a major shift away from plantation labour and towards a free society where ex-slaves would bargain for better wages and force the planters to adopt new business models or go under. But the planters and plantation system survived, even if slavery did not. What went wrong?

This research follows the £20 million paid in compensation by the British government in 1834 (equivalent to about £20 billion today). This money was paid not to the slaves, but to the former slave-owners for the loss of their human property.

Thanks to the Legacies of British Slave-ownership project at University College London, we now know who received the money and how much. But until this study, we knew very little about how the former slave-owners used this money, or what effect this had on colonial societies in the West Indies or South Africa as they confronted the demands of this new world.

The study suggests why so little changed. It shows that slave-owners in places such as Jamaica, Guyana, South Africa and Mauritius used the money they received not just to pay off their debts, but also to set up new banks, which created credit by issuing bank notes and then supplied the planters with cash and credit.

Planters used the credit to improve their plantations and the cash to pay wages to their new free labourers, who therefore lacked the power to bargain for better conditions. Because they could accommodate the social and economic pressures that would otherwise have forced them to reassess their business models and find new approaches that did not rely on the unremitting exploitation of black labour, planters were able to resist demands for broader economic and social change.

Tracking the ebb and flow of money shows that in Jamaica, for example, in 1836 about 200 planters chose to subscribe half the £450,000 they had received in compensation to the new Bank of Jamaica. By 1839, the bank had issued almost £300,000 in notes, enabling planters across the island to meet their workers’ wages without otherwise altering the plantation system.

When the Planters’ Bank was founded in 1839, it issued a further £100,000. ‘We congratulate the country on the prospects of a local institution of this kind’, the Jamaica Despatch commented in May 1839, ‘ … designed to aid and relieve those who are labouring under difficulties peculiar to the Jamaican planter at the present time’.

In other cases, the money even allowed farmers to expand the system of exploitation. In the Cape of Good Hope, the Eastern Province Bank at Grahamstown raised £26,000 with money from slavery compensation but provided the British settlers with £170,000 in short-term loans, helping them to dispossess native peoples of their land and use them as cheap labour to raise wool for Britain’s textile factories.

‘With united influence and energy’, the bank told its shareholders in 1840, for example, ‘the bank must become useful, as well to the residents at Grahamstown and our rapidly thriving agriculturists as prosperous itself’.

This study shows for the first time why planters could carry on after 1834 with business as usual. The new banks created after 1834 helped planters throughout the British Empire to evade the major social and economic changes that abolitionists had wanted and which their opponents had feared.

By investing their slavery compensation money in banks that then offered cash and credit, the planters could prolong and even expand their place in economies and societies built on the plantation system and the exploitation of black labour.

 

To contact the author: aaron.graham@ucl.ac.uk

 

The UK’s unpaid war debts to the United States, 1917-1980

by David James Gill (University of Nottingham)

Trenches in World War I. From <www.express.co.uk>

We all think we know the consequences of the Great War – from the millions of dead to the rise of Nazism – but the story of the UK’s war debts to the United States remains largely untold.

In 1934, the British government defaulted on these loans, leaving unpaid debts exceeding $4 billion. The UK ceased repayment 18 months after France had defaulted on its war debts, having made one full and two token repayments before Congressional approval of the Johnson Act prohibited further partial payments.

Economists and political scientists typically attribute such hesitation to concerns about economic reprisals or the costs of future borrowing. Historians have instead stressed that delay reflected either a desire to protect transatlantic relations or a naive hope for outright cancellation.

Archival research reveals that the British cabinet’s principal concern was that many states owing money to the UK might use its default on war loans as an excuse to cease repayment on their own debts. In addition, ministers feared that refusal to pay would profoundly shock a large section of public opinion, thereby undermining the popularity of the National government. Eighteen months of continued repayment therefore provided the British government with more time to manage these risks.

The consequences of the UK’s default have attracted curiously limited attention. Economists and political scientists tend to assume dire political costs to incumbent governments, as well as significant short-term economic shocks to external borrowing, international trade and the domestic economy. None of these consequences applied to the National government or the UK in the years that followed.

Most historians consider these unpaid war debts to be largely irrelevant to the course of domestic and international politics within five years. Yet archival research reveals that they continued to play an important role in British and American policy-making for at least four more decades.

During the 1940s, the issue of the UK’s default arose on several occasions, most clearly during negotiations concerning Lend-Lease and the Anglo-American loan, fuelling Congressional resistance that limited the size and duration of American financial support.

Successive American administrations also struggled to resist growing Congressional pressure to use these unpaid debts as a diplomatic tool to address growing balance of payment deficits from the 1950s to the 1970s. In addition, British default presented a formidable legal obstacle for the UK’s return to the New York bond market in the late 1970s, threatening to undermine the efficient refinancing of the government’s recent loans from the International Monetary Fund.

The consequences of the UK’s default on its First World War debts to the United States were therefore longer lasting and more significant to policy-making on both sides of the Atlantic than widely assumed.