This blog aims to encourage discussion of economic and social history, broadly defined. We live in a time of major social and economic change, and social science research increasingly shows that a historical, long-term approach to current issues is key to understanding our times.
by David M. Higgins (Newcastle University), originally published on 09 October 2018 on the LSE Business Review
When doing your weekly shop have you ever observed the small blue/yellow and red/yellow circles that appear on the wrappers of Wensleydale cheese or Parma ham? Such indicia are examples of geographical indications (GIs), or appellations: they show that a product possesses certain attributes (taste, smell, texture) that are unique to it and can only be derived from a tightly demarcated and fiercely protected geographical region. The relationship between product attributes and geography can be summed up in one word: terroir. These GIs formed an important part of the EU’s agricultural policy, launched in 1992 and represented by the logos PDO and PGI, to insulate EU farmers from the effects of globalisation by encouraging them to produce ‘quality’ products that were unique.
GIs have a considerable lineage: legislation enacted in 1666 reserved the sole right to ‘Roquefort’ to cheese cured in the caves at Roquefort. Until the later nineteenth century, domestic legislation was the primary means by which GIs were protected from misrepresentation. Thereafter, the rapid acceleration of international trade necessitated global protocols, the most important of which were the Paris Convention for the Protection of Industrial Property (1883) and its successors, including the Madrid Agreement for the Repression of False or Deceptive Indications of Source on Goods (1891).
The last century has witnessed unprecedented improvements in survivorship and life expectancy. In the United Kingdom alone, infant mortality fell from over 150 deaths per thousand births at the start of the last century to 3.9 deaths per thousand births in 2014 (see the Office for National Statistics for further details). Average life expectancy at birth increased from 46.3 to 81.4 years over the same period (see the Human Mortality Database). These changes reflect fundamental improvements in diet, nutrition and environmental conditions.
The changing body: health, nutrition and human development in the western world since 1700 attempted to understand some of the underlying causes of these changes. It drew on a wide range of archival and other sources covering not only mortality but also height, weight and morbidity. One of our central themes was the extent to which long-term improvements in adult health reflected the beneficial effect of improvements in earlier life.
The changing body also outlined a very broad schema of ‘technophysio evolution’ to capture the intergenerational effects of investments in early life. This is represented in a very simple way in Figure 1. The Figure tries to show how improvements in the nutritional status of one generation increase its capacity to invest in the health and nutritional status of the next generation, and so on ‘ad infinitum’ (Floud et al. 2011: 4).
We also looked at some of the underlying reasons for these changes, including the role of diet and ‘nutrition’. As part of this process, we included new estimates of the number of calories which could be derived from the amount of food available for human consumption in the United Kingdom between circa 1700 and 1913. However, our estimates contrasted sharply with others published at the same time (Muldrew 2011) and were challenged by a number of other authors subsequently. Broadberry et al. (2015) thought that our original estimates were too high, whereas both Kelly and Ó Gráda (2013) and Meredith and Oxley (2014) regarded them as too low.
Given the importance of these issues, we revisited our original calculations in 2015. We corrected an error in the original figures, used Overton and Campbell’s (1996) data on extraction rates to recalculate the number of calories, and included new information on the importation of food from Ireland to other parts of what became the UK. Our revised Estimate A suggested that the number of calories rose by just under 115 calories per head per day between 1700 and 1750 and by more than 230 calories between 1750 and 1800, with little change between 1800 and 1850. Our revised Estimate B suggested that there was a much bigger increase during the first half of the eighteenth century, followed by a small decline between 1750 and 1800 and a bigger increase between 1800 and 1850 (see Figure 2). However, both sets of figures were still well below the estimates prepared by Kelly and Ó Gráda, Meredith and Oxley, and Muldrew for the years before 1800.
These calculations have important implications for a number of recent debates in British economic and social history (Allen 2005, 2009). Our data do not necessarily resolve the debate over whether Britons were better fed than people in other countries, although they do compare quite favourably with relevant French estimates (see Floud et al. 2011: 55). However, they do suggest that a significant proportion of the eighteenth-century population was likely to have been underfed.
Our data also raise some important questions about the relationship between nutrition and mortality. Our revised Estimate A suggests that food availability rose slowly between 1700 and 1750 and then more rapidly between 1750 and 1800, before levelling off between 1800 and 1850. These figures are still broadly consistent with Wrigley et al.’s (1997) estimates of the main trends in life expectancy and our own figures for average stature. However, it is not enough simply to focus on averages; we also need to take account of possible changes in the distribution of foodstuffs within households and the population more generally (Harris 2015). Moreover, it is probably a mistake to examine the impact of diet and nutrition independently of other factors.
Tim Leunig (LSE), Jelle van Lottum (Huygens Institute) and Bo Poulsen (Aalborg University) have been investigating the treatment of prisoners of war in the Napoleonic Wars.
For most of history, life as a prisoner of war was nasty, brutish and short. There were no regulations on the treatment of prisoners until the 1899 Hague Convention and the later Geneva Conventions. Many prisoners were killed immediately; others were enslaved to work in mines and other undesirable places.
The poor treatment of prisoners of war was partly intentional – they were the hated enemy, after all – and partly economic: it costs money to feed and shelter prisoners, and countries in the past – especially in times of war and conflict – were much poorer than today.
Nineteenth century prisoner death rates were horrific. Between one-half and six-sevenths of Napoleon’s 17,000 troops who surrendered to the Spanish in 1808 after the Battle of Bailén died as prisoners of war. The American Civil War saw death rates rise to 27%, even though the average prisoner was captive for less than a year.
The Napoleonic Wars saw the British capture 7,000 Danish and Norwegian sailors, military and merchant. Britain did not desire war with Denmark (which ruled Norway at the time), but did so to prevent Napoleon seizing the Danish fleet. Prisoners were incarcerated on old, unseaworthy “prison hulks”, moored in the Thames Estuary, near Rochester. Conditions were crowded: each man was given just 2 feet (60 cm) in width to hang his hammock.
Were these prison hulks floating tombs, as some contemporaries claimed? Our research shows otherwise. The Admiralty kept exemplary records, now held in the National Archive in Kew. These show the date of arrival in prison, and the date of release, exchange, escape – or death. They also tell us the age of the prisoner, where they came from, the type of ship they served on, and whether they were an officer, craftsman, or regular sailor. We can use these records to look at how many died, and why.
The prisoners ranged in age from 8 to 80, with half aged 22 to 35. The majority sailed on merchant vessels, with a sixth on military vessels, and a quarter on licenced pirate boats, permitted to harass British shipping. The amount of time in prison varied dramatically, from 3 days to over 7 years, with an average of 31 months. About two thirds were released before the end of the war.
Taken as a whole, 5% of prisoners died. This is a remarkably low number, given how long they were held, and given experience elsewhere in the nineteenth century. Being held prisoner for longer increased your chance of dying, but not by much: those who spent three years on a prison hulk had only a 1% greater chance of dying than those who served just one year.
Death was (almost) random. Being captured at the start of the war was neither better nor worse than being captured at the end. The number of prisoners held at any one time did not increase the death rate. The old were no more likely to die than the young – anyone fit enough to go to sea was fit enough to withstand any rigours of prison life. Despite extra space and better rations, officers were no less likely to die, implying that conditions were reasonable for common sailors.
There is only one exception: sailors from licenced pirate boats were twice as likely to die as merchant or official navy sailors. We cannot know the reason. Perhaps they were treated less well by their guards, or other prisoners. Perhaps they were risk takers, who gambled away their rations. Even for this group, however, the death rates were very low compared with those captured in other places, and in other wars.
The British had rules on prisoners of war, for food and hygiene. Each prisoner was entitled to 2.5 lbs (~1 kg) of beef, 1 lb of fish, 10.5 lbs of bread, 2 lbs of potatoes, 2.5 lbs of cabbage, and 14 pints (8 litres) of (very weak) beer a week. This is not far short of Danish naval rations, and prisoners are less active than sailors. We cannot be sure that they received their rations in full every week, but the death rates suggest that they were not hungry in any systematic way. The absence of epidemics suggests that hygiene was also good. Remarkably, and despite a national debt that peaked at a still unprecedented 250% of GDP, the British appear to have obeyed their own rules on how to treat prisoners.
Far from being floating tombs, therefore, this was a surprisingly gentle confinement for the Danish and Norwegian sailors captured by the British in the Napoleonic Wars.
In December 2017, Liam Fox, the Secretary of State for International Trade, stated that ‘as the United Kingdom negotiates its exit from the European Union, we have the opportunity to reinvigorate our Commonwealth partnerships, and usher in a new era where expertise, talent, goods, and capital can move unhindered between our nations in a way that they have not for a generation or more’.
As policy-makers and the public contemplate a return to the halcyon days of the British Empire, there is much to be learned from those past policies that attempted to cultivate trade along imperial lines. Let us consider the effect of the earliest policies of imperial preference: policies enacted during the Edwardian era.
In the late nineteenth century, Britain was the bastion of free trade, imposing tariffs on only a very narrow range of commodities. Consequently, Britain’s free trade policy afforded barely any scope for applying lower or ‘preferential’ duties to imports from the Empire.
The self-governing colonies of the Empire possessed autonomy in tariff-setting and, with the notable exception of New South Wales, did not emulate the mother country’s free trade policy. In the 1890s and 1900s, when the emergent industrial nations of Germany and the United States reduced Britain’s market share in these self-governing colonies, there was indeed scope for applying preferential duties to imports from Britain, in the hope of diverting trade back toward the Empire.
Trade policies of imperial preference were implemented in succession by Canada (1897), the South African Customs Union (1903), New Zealand (1903) and Australia (1907). By the close of the first era of globalisation in 1914, Britain enjoyed some margin of preference in all of the Dominions. Yet my research, a case study of New Zealand, casts doubt on the effectiveness of these policies at raising Britain’s share in the imports of the Dominions.
Unlike the policies of the other Dominions, New Zealand’s policy applied preferential duties to only selected commodity imports (44 out of 543). This cross-commodity variation in the application of preference is useful for estimating the effect of preference. I find that New Zealand’s Preferential and Reciprocal Trade Act of 1903 had no effect on the share of the Empire, or of Britain specifically, in New Zealand’s imports.
Why was the policy ineffective at raising Britain’s share of New Zealand’s imports? There are several likely reasons: that Britain’s share was already quite large; that some imported commodities were highly differentiated and certain varieties were only produced in other industrial countries; and, most importantly, that the margin of preference – the extent to which duties were lower for imports from Britain – was too small to effect any trade diversion.
As Britain considers future trade agreements, perhaps with Commonwealth countries, it should be remembered that a trade agreement does not necessarily entail a great, or even any, increase in trade. The original policies of imperial preference were rather symbolic measures and, at least in the case of New Zealand, economically inconsequential.
Brexit might well present an ‘opportunity to reinvigorate our Commonwealth partnerships’, but would that be a reinvigoration in substance or in appearance?
by James Cloyne (University of California, Davis), Nicholas Dimsdale (University of Oxford), Natacha Postel-Vinay (London School of Economics)
There has been a longstanding and unresolved debate over the fiscal multiplier, which is the change in economic output resulting from a change in government spending or taxation. The issue became acute in the world recession of 2008-2010, when the International Monetary Fund led a spirited discussion about the contribution that fiscal policy could make to recovery.
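As a point of reference (standard textbook notation, not taken from the research described here), the multipliers at issue can be written as:

```latex
% The fiscal multiplier: the change in output Y produced by a
% one-unit change in a fiscal instrument, here government spending G
% or taxation T. Keynes's claim is that the spending multiplier
% exceeds unity: k_G > 1.
k_G = \frac{\Delta Y}{\Delta G}, \qquad k_T = \frac{\Delta Y}{\Delta T}
```

A multiplier greater than unity means that each pound of stimulus raises output by more than a pound.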
In our research, fiscal policy is shown to have had positive impacts on growth, at least during the period surrounding the Great Depression in Britain. The implications for the potential benefits of fiscal policy in a high-debt, low-interest rate environment – and over a turbulent business cycle – may be significant.
The recent controversy follows the debate over the use of fiscal policy to counter the high level of unemployment in interwar Britain. Keynes argued that increased government spending would raise economic activity and reduce unemployment. In the General Theory (1936), he claimed that the multiplier for government expenditure was greater than unity.
A few more recent studies have confirmed that the multiplier effect is greater than unity for both the interwar and post-war periods. But these results may be spurious, since a rise in government expenditure that raises income may itself be a response to rising income. Likewise, changes in taxes and changes in income may not be independent. What we observe is a strong co-movement of GDP and fiscal measures, in which it is hard to isolate the direction of causation.
What is needed is a source of exogenous variation, so that the impact of fiscal changes on GDP can be observed. Fiscal policy may take the form of changes in taxes or expenditure. The problems of endogeneity are generally greater for expenditure than for taxes, since it should be possible to find changes in taxes that are truly exogenous.
Romer and Romer (2010) have developed the so-called ‘narrative technique,’ which has been designed to overcome the problem of endogeneity of tax changes. This involves carefully distilling the historical record in order to infer Chancellors’ motivations behind each fiscal policy move, and isolate those that may be seen as more independent from the contemporaneous fluctuations of the economy.
One may thus be able to distinguish, for example, between taxes that arise from a direct will to stimulate the economy, as compared with changes that are more motivated by a Chancellor’s longstanding ideology. The latter may include, for example, a will to improve transport efficiency within the country, or a desire to make society less unequal.
Interwar Britain is a particularly appropriate period to apply this approach, since the potential for fiscal policy was great on account of the high level of unemployment. In addition, this was a period in which Keynesian countercyclical policies were generally not used, in contrast to the use of demand management policies in the post-war period.
By examining changes in taxes in interwar budgets, we have been able to produce a sample of 300 tax changes. These have been classified as either endogenous or exogenous, and we have been able to check the validity of our classification retrospectively.
The outcome of this work has been to show that changes in taxes that are exogenous had a major impact on changes in GDP. The estimated value of the multiplier for these tax changes is greater than unity and as much as two to three. This is in accordance with results reported in post-war studies of the United States and a study of tax changes in post-war Britain (Cloyne, 2013).
In contrast to earlier work on measuring the multiplier, we concentrate on changes in taxes rather than changes in government expenditure. This is done to reduce problems of endogeneity.
While Keynes argued for using government spending to stimulate the economy, it was only when post-war fiscal policies were being formulated that the potential benefits of fiscal policies via changes in taxes were recognised. While this research does not argue in favour of tax changes over spending policies, it provides evidence that tax policy is a relevant part of the policy toolkit, especially in times of economic difficulty.
Is the euro area sustainable in its current membership form? My research provides new lessons from past examples of monetary integration, looking at the monetary unification of Italy and Germany in the second half of the nineteenth century.
Currency areas’ optimal membership has recently been at the forefront of the policy debate, as the original choice of letting peripheral countries join the euro was widely blamed for the common currency’s existential crisis. Academic work on ‘optimum currency areas’ (OCA) traditionally warned against the risk of adopting a ‘one size fits all’ monetary policy for regions with differing business cycles.
Krugman (1993) even argued that monetary unification in itself might increase its own costs over time, as regions are encouraged to specialise and thus become more different from one another. But those concerns were dismissed by Frankel and Rose’s (1998) influential ‘OCA endogeneity’ theory: once regions with ex-ante diverging paths join a common currency, they will see their business cycles synchronise progressively ex-post.
My findings question the consensus view in favour of ‘OCA endogeneity’ and raise the issue of the adverse effects of monetary integration on regional inequality. I argue that the Italian monetary unification played a role in the emergence of the regional divide between Italy’s Northern and Southern regions by the turn of the twentieth century.
I find that pre-unification Italian regions experienced largely asymmetric shocks, pointing to high economic costs stemming from the 1862 Italian monetary unification. While money markets in Northern Italy were synchronised with the core of the European monetary system, Southern Italian regions tended to move together with the European periphery.
The Italian unification is an exception in this respect, as I show that other major monetary arrangements in this period, particularly the German monetary union but also the Latin Monetary Convention and the Gold Standard, occurred among regions experiencing high shock synchronisation.
Contrary to what ‘OCA endogeneity’ would imply, shock asymmetry among Italian regions actually increased following monetary unification. I estimate that pairs of Italian provinces that came to be integrated following unification became, over four decades, up to 15% more dissimilar to one another in their economic structure compared to pairs of provinces that already belonged to the same monetary union. This means that, in line with Krugman’s pessimistic take on currency areas, economic integration in itself increased the likelihood of asymmetric shocks.
In this respect, the global grain crisis of the 1880s, disproportionately affecting the agricultural South while Italy pursued a restrictive monetary policy, might have laid the foundations for the Italian ‘Southern Question’. As pointed out by Krugman, asymmetric shocks in a currency area with low transaction costs can lead to permanent loss in regional income, as prices are unable to adjust fast enough to prevent factors of production from permanently leaving the affected region.
The policy implications of this research are twofold.
First, the results caution against the prevalent view that cyclical symmetry within a currency area is bound to improve by itself over time. In particular, the role of specialisation and factor mobility in driving cyclical divergence needs to be reassessed. As the euro area moves towards more integration, additional specialisation of its regions could further magnify – by increasing the likelihood of asymmetric shocks – the challenges posed by the ‘one size fits all’ policy of the European Central Bank on the periphery.
Second, the Italian experience of monetary unification underlines how the sustainability of currency areas is chiefly related to political will rather than economic costs. Despite the fact that the Italian monetary union has been sub-optimal from the start and to a large extent remained so, it has managed to survive unscathed for the last century and a half. While the OCA framework is a good predictor of currency areas’ membership and economic performance, their sustainability is likely to be a matter of political integration.
For more than a century, London struggled with some of the worst air pollution on earth. But how much did air pollution affect health in London? How did these effects change as the city developed? Can London’s long experience teach us lessons that are relevant for modern cities, from Beijing to New Delhi, that are currently struggling with their own air pollution problems?
To answer these questions, I study the effects of air pollution in London across a full century from 1866 to 1965. Using new data, I show that air pollution was a major contributor to mortality in London over this period, accounting for at least one out of every 200 deaths.
As London developed, the impact of air pollution changed. In the nineteenth century, Londoners suffered from a range of infectious diseases, including respiratory diseases like measles and tuberculosis. I show that being exposed to high levels of air pollution made these diseases deadlier, while the presence of these diseases made air pollution more harmful. As a result, when public health and medical improvements reduced the prevalence of these infectious diseases, they also lowered the mortality cost of pollution exposure.
This finding has implications for modern developing countries. It tells us that air pollution is likely to be more deadly in the developing world, but also that investments that improve health in other ways can lower the health costs of pollution exposure.
An important challenge in studying air pollution in the past is that direct pollution measures were not collected in a consistent way until the mid-twentieth century. To overcome this challenge, this study takes advantage of London’s famous fog events, which trapped pollution in the city and substantially increased exposure levels.
While some famous fog events are well known – such as the Great Fog of 1952 or the Cattle Show Fog of 1873, which killed the Queen’s prize bull – London experienced hundreds of lesser-known events over the century I study. By reading weather reports from the Greenwich Observatory covering over 26,000 days, we identified every day in which heavy fog occurred.
To study how these fog events affected health, I collected detailed new mortality data describing deaths in London at the weekly level. Digitised from original sources, and covering over 350,000 observations, this new data set opens the door to a more detailed analysis of London’s mortality experience than has previously been possible.
These new mortality data allow me to analyse the effects of air pollution from a variety of different angles. I provide new evidence on how the effects of air pollution varied across age groups, how the effect on different age groups evolved over time, and how pollution interacted with infectious diseases and other causes of death. This enriches our understanding of London’s history while opening up a range of new possibilities for studying the impact of air pollution over the long run.
Liquidity is the ease with which an asset such as a share or a bond can be converted into cash. It is important for financial systems because it enables investors to liquidate and diversify their assets at a low cost. Without liquid markets, portfolio diversification becomes very costly for the investor. As a result, firms and governments must pay a premium to induce investors to buy their bonds and shares. Liquid capital markets also spur firms and entrepreneurs to invest in long-run projects, which increases productivity and economic growth.
From an historical perspective, share liquidity in the UK played a major role in the widespread adoption of the company form in the second half of the nineteenth century. Famously, as I discuss in a recent book chapter published in the Research Handbook on the History of Corporate and Company Law, political and legal opposition to share liquidity held up the development of the company form in the UK.
Despite the economic and historical importance of liquidity, however, very little has been written on the liquidity of UK capital markets before 1913. Ron Alquist (2010) and Matthieu Chavaz and Marc Flandreau (2017) examine the liquidity risk and premia of various sovereign bonds which were traded on the London Stock Exchange during the late Victorian and early Edwardian eras. Along with Graeme Acheson (2008), I document the thinness of the market for bank shares in the nineteenth century, using the share trading records of a small number of banks.
In a major study, Gareth Campbell (Queen’s University Belfast), Qing Ye (Xi’an Jiaotong-Liverpool University) and I have recently attempted to understand more about the liquidity of the Victorian capital market. To this end, we have just published a paper in the Economic History Review which looks at the liquidity of the London share and bond markets from 1825 to 1870. The London capital market experienced considerable growth in this era. The liberalisation of incorporation law and Parliament’s liberalism in granting company status to railways and other public-good providers resulted in growth in the number of business enterprises whose shares and bonds were traded on stock exchanges. In addition, from the 1850s onwards, there was an increase in the number of foreign countries and companies raising bond finance on the London market.
How do we measure the liquidity of the market for bonds and stocks in the 1825-70 era? Using end-of-month stock price data from a stockbroker list called the Course of the Exchange and end-of-month bond prices from newspaper sources, we calculate, for each security, the number of months in the year in which it had a zero return and divide that by the number of months it was listed in the year. Because zero returns are indicative of illiquidity (i.e., that a security has not been traded), one minus our illiquidity ratio gives us a liquidity measure for each security in our sample. We calculate the overall market liquidity for shares and bonds by taking averages. Figure 1 displays market liquidity for bonds and stocks for the period 1825-70.
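The zero-return measure can be sketched in a few lines of code. This is an illustrative reconstruction, not the authors' own programs, and the prices below are hypothetical end-of-month observations:

```python
# Illustrative sketch (not the authors' code) of the zero-return
# liquidity measure: one minus the share of observed monthly returns
# that are exactly zero. None marks a month in which the security
# was not listed; as a simplification, the denominator here is the
# number of observable month-to-month returns.

def liquidity(prices):
    """Return the liquidity score of one security from monthly prices."""
    returns = []
    for prev, curr in zip(prices, prices[1:]):
        if prev is None or curr is None:
            continue  # security not listed in one of the two months
        returns.append(curr - prev)
    zero_months = sum(1 for r in returns if r == 0)
    return 1 - zero_months / len(returns)

# A security whose price never moves is scored as perfectly illiquid;
# one whose price changes every month gets the maximum score.
flat = [100, 100, 100, 100]
active = [100, 101, 99, 102]
print(liquidity(flat))    # 0.0
print(liquidity(active))  # 1.0
```

Market-wide liquidity, as in the paper, is then simply the average of these per-security scores across all shares or all bonds in a given year.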
Figure 1 reveals that bond market liquidity was relatively high throughout this period but shows no strong trend over time. By way of contrast, there was a strong secular increase in stock liquidity from 1830 to 1870. This increase may have stimulated greater participation in the stock market by ordinary citizens. It may also have affected the growth and deepening of the overall stock market and resulted in higher economic growth.
We examine the cross-sectional differences in liquidity between stocks in order to understand the main determinants of stock liquidity in this era. Our main finding in this regard is that firm size and the number of issued shares were major correlates of liquidity, which suggests that larger firms and firms with a greater number of shares were more frequently traded. Our study also reveals that unusual features which were believed to impede liquidity, such as extended liability, uncalled capital or high share denominations, had little effect on stock liquidity.
We also examine whether asset illiquidity was priced by investors, resulting in higher costs of capital for firms and governments. We find little evidence that the illiquidity of stock or bonds was priced, suggesting that investors at the time did not put much emphasis on liquidity in their valuations. Indeed, this is consistent with J. B. Jefferys (1938), who argued that what mattered to investors during this era was not share liquidity, but the dividend or coupon they received.
In conclusion, the vast majority of stocks and bonds in this early capital market were illiquid. It is remarkable, however, that despite this illiquidity, the UK capital market grew substantially between 1825 and 1870. There was also an increase in investor participation, with investing becoming progressively democratised in this era.
Acheson, G.G., and Turner, J.D. “The Secondary Market for Bank Shares in Nineteenth-Century Britain.” Financial History Review 15, no. 2 (October 2008): 123–51. doi:10.1017/S0968565008000139.
Alquist, R. “How Important Is Liquidity Risk for Sovereign Bond Risk Premia? Evidence from the London Stock Exchange.” Journal of International Economics 82, no. 2 (November 1, 2010): 219–29. doi:10.1016/j.jinteco.2010.07.007.
Campbell, G., Turner, J.D., and Ye, Q. “The Liquidity of the London Capital Markets, 1825–70.” The Economic History Review 71, no. 3 (August 1, 2018): 823–52. doi:10.1111/ehr.12530.
Chavaz, M., and Flandreau, M. “‘High & Dry’: The Liquidity and Credit of Colonial and Foreign Government Debt and the London Stock Exchange (1880–1910).” The Journal of Economic History 77, no. 3 (September 2017): 653–91. doi:10.1017/S0022050717000730.
Jefferys, J.B. Trends in Business Organisation in Great Britain Since 1856: With Special Reference to the Financial Structure of Companies, the Mechanism of Investment and the Relations Between the Shareholder and the Company. University of London, 1938.
The Society has arranged with CUP that a 20% discount is available on this book, valid until the 11th October 2018. The discount page is: www.cambridge.org/ehs20
Our ancestors knew the comfort of a pipe. But some may have preferred the functionality of cigarettes, an alternative to the rituals of nursing tobacco embers. Historic periods are defined by habits and fashions, manifesting economic and political systems, legal and illegal. These are the focus of my recent book. New networks of exchange, cross-cultural contact and material translation defined the period c. 1500-1820. Tobacco is one thematic focus. I trace how global societies domesticated a Native American herb and Native American forms of tobacco. Its spread distinguishes this period, when the Americas were first fully integrated into global systems, from all others. Native American knowledge, lands and communities then faced determined intervention from all quarters. This crop became commoditized within decades, eluding censure to become an essential component of sociability, whether in Japan or Southeast Asia, the West Coast of Africa or the courts of Europe. [Figure 1]
Tobacco is a denominator of the early global era, grown in almost every context by 1600 and incorporated into diverse cultural and material modes. Importantly, its capacity to ease fatigue was quickly noted by military and imperial administrations and soon used to discipline or encourage essential labour. A sacred herb was transposed into a worldly good. Modes of coercive consumption were notable in the western slave trade, as well as on plantations. Tobacco also served disciplinary roles among workers essential to the movement of cargoes; deep-sea long-distance sailors and riverine paddlers in the North American fur trade were vulnerable to exploitation on account of their dependence on tobacco during long stints of back-breaking labour.
Early global trade built on established commercial patterns – most importantly the textile trade, including the long-standing exchange of fabric for fur. The fabric / fur dynamic linked northern and southern Eurasia and north Africa, a pattern of elite and non-elite consumption that surged after the late 1500s, especially with the establishment of the Qing dynasty in China (1636-1912), with its deep cultural preference for furs. Equally important, deepening trade on the northeast coast of North America formalized Indigenous Americans’ appetite for cloth, willingly bartered for furs. The fabric / fur exchange preceded western colonization in the Americas and continued alongside it. Meanwhile, on both sides of the Bering Strait and along the northwest coast of America, Indigenous communities were pulled more fully into the Qing economic orbit, with its boundless demand for peltry. Russian imperial expansion also served this commerce. The ecologies touched by this capacious trade extended worldwide, memorialized in surviving Qing fur garments and secondhand beaver hats traded for slaves in West Africa.
I routinely incorporate object study in my analysis, an essential way to assess the dynamism of consumer practice. I trawled museum collections as commonly as archives and libraries, and there found essential evidence of globalized fads and fashions. The strategies of one Qing-era man are revealed as he navigated Chinese sumptuary laws while attempting to demonstrate fashion on a budget. His seemingly mink-lined robe used this costly fur only where it was visible; sheepskin lined all the hidden areas. His concern for thrift is laid bare, along with his love of style.
Elsewhere in the book, I trace responses to early globalism through needlework translations and interpretations of early global Asian designs. The movement of people, as well as vast cargoes, stimulated these expressive fashions, ones that required minimal investment and gave voice to the widest range of women and men. The flow of Asian patterned goods and the (often forced) relocation of Asian embroiderers to Europe began this tale – both increased the clamour for floral-patterned wares. This analysis culminates in North America with the turn from geometric to floral patterning among Indigenous embroiderers. They, too, responded to the influx of Asian floriated things. Europeans were intermediaries in this stage of the global process.
Human desires and shifting tastes are recurring themes, expressed in efforts to acquire new goods through various entrepreneurial channels. ‘Industriousness’ was manifest by women of many ethnicities through petty market-oriented trade, as well as waged employment, often working at the margins of formal commerce. Industriousness, legal and extralegal, large and small, flourished in conjunction with large-scale enterprise. Extralegal activities irritated administrators, however, who wanted only regulated and measurable business. Nonetheless, extralegal activities were ubiquitous in every imperial realm and an important vein of entrepreneurship. My case studies in extralegal ventures range from the traffic in tropical shells in Kirkcudbright, Scotland, to the lucrative smuggling of European wool cloth to Qing China (a new mode among urban cognoscenti), to the harvesting of peppercorns from a Kentish beach, illustrating the importance of shipwrecks in redistributing cargoes to coastal communities everywhere. [Figure 2] Coastal peoples were schooled in the materials of globalism, cast up by the tides, though some authorities might call them criminal. Ultimately, the shifting materials of daily life marked this dynamic history.
Alannah Tomkins and Professor Tim Hitchcock (University of Sussex) won an AHRC award to investigate ‘Small Bills and Petty Finance: co-creating the history of the Old Poor Law’. The three-year project began in January 2018. The application was for £728K, which has been raised, through indexing, to £740K. The project website can be found at: thepoorlaw.org.
Twice in my career I’ve been surprised by a brick – or more precisely by bricks, hurtling into my research agenda. In the first instance I found myself supervising a PhD student working on the historic use of brick as a building material in Staffordshire (from the sixteenth to the eighteenth centuries). The second time, the bricks snagged my interest independently.
The AHRC-funded project ‘Small bills and petty finance’ did not set out to look for bricks. Instead it promises to explore a little-used source for local history, the receipts and ‘vouchers’ gathered by parish authorities as they relieved or punished the poor, to write multiple biographies of the tradesmen and others who serviced the poor law. A parish workhouse, for example, exerted a considerable influence over a local economy when it routinely (and reliably) paid for foodstuffs, clothing, fuel and other necessaries. This influence or profit-motive has not been studied in any detail for the poor law before 1834, and vouchers’ innovative content is matched by an exciting methodology. The AHRC project calls on the time and expertise of archival volunteers to unfold and record the contents of thousands of vouchers surviving in the three target counties of Cumbria, East Sussex and Staffordshire. So where do the bricks come in?
The project started life in Staffordshire as a pilot in advance of AHRC funding. The volunteers met at the Stafford archives and started by calendaring the contents of vouchers for the market town of Uttoxeter, near the Staffordshire/Derbyshire border. And the Uttoxeter workhouse did not confine itself to accommodating and feeding the poor. Instead in the 1820s it managed two going concerns: a workhouse garden producing vegetables for use and sale, and a parish brickyard. Many parishes under the poor law embedded make-work schemes in their management of the resident poor, but no others that I’m aware of channelled pauper labour into the manufacture of bricks.
The workhouse and brickyard were located just to the north of the town of Uttoxeter, in an area known as The Heath. The land was subsequently used to build the Uttoxeter Union workhouse in 1837-8 (after the reform of the poor law in 1834), so no signs of the brickyard remain in the twenty-first century. It was, however, one of several such yards identified at The Heath in the tithe map for Uttoxeter of 1842, and probably made use of a fixed kiln rather than a temporary clamp. This can be deduced from the parish’s sale of both bricks and tiles to brickyard customers. Tiles were more refined products than bricks and required more control over the firing process, whereas clamp firings were more difficult to regulate. The yard provided periodic employment to the adult male poor of the Uttoxeter workhouse, in accordance with the seasonal pattern imposed on all brick manufacture at the time. Firings typically began in March or April each year, and continued until September or October depending on the weather.
This is important because the variety of vouchers relating to the parish brickyard allows us to understand something of its place in the town’s economy, both as a producer and as a consumer of other products and services. Brickyards needed coal, so it is no surprise that one of the major expenses for the support of the yard lay in bringing coal to the town from elsewhere via the canal. The Uttoxeter canal wharf was also at The Heath, and access to transport by water may explain the development of a number of brickyards in its proximity. The yard also required wood and other raw materials in addition to clay, and specific products to protect the bricks after cutting but before firing. The parish bought quantities of archangel mats, rough woven pieces that could be used like a modern protective fleece to guard against frost damage. We surmise that Uttoxeter used the mats to cover both the bricks and any tender plants in the workhouse garden.
Similarly, the bricks were sold chiefly to local purchasers, including members of the parish vestry. Some men who were owed money by the parish for their work as suppliers allowed the debt to be offset by bricks. Finally, the employment of workhouse men as brickyard labourers gives us, when combined with some genealogical research, a rare glimpse of the place of workhouse work in the life-cycle of the adult poor. More than one man employed at the yard in the 1820s and 1830s went on to independence as a lodging-house keeper in the town by the time of the 1841 census.
As I say, I’ve been surprised by brick. I had no idea that such a mundane product would prove so engaging. All this goes to show that it’s not the stolidity of the brick but its deployment that matters, historically speaking.