Welcome to The Long Run

On behalf of the Economic History Society (EHS), it is a pleasure to welcome you to The Long Run, the EHS blog.

This blog aims to encourage discussion of economic and social history, broadly defined. We live in a time of major social and economic change, and research in the social sciences increasingly shows that a historical, long-term approach to current issues is key to understanding our times.

We welcome any contribution or suggestion – please contact us at ehs.thelongrun@gmail.com

 

How well off were the occupants of early modern almshouses?

by Angela Nicholls (University of Warwick).

Almshouses in Early Modern England is published by Boydell Press. SAVE 25% when you order direct from the publisher – offer ends on 13 December 2018. See below for details.


Almshouses, charitable foundations providing accommodation for poor people, are a feature of many towns and villages. Some are very old, with their roots in medieval England as monastic infirmaries for the sick, pilgrims and travellers, or as chantries offering prayers for the souls of their benefactors. Many survived the Reformation to be joined by a remarkable number of new foundations between around 1560 and 1730. For many of them their principal purpose was as sites of memorialisation and display, tangible representations of the philanthropy of their wealthy donors. But they are also some of the few examples of poor people’s housing to have survived from the early modern period, so can they tell us anything about the material lives of the people who lived in them?

Paul Slack famously referred to almspeople as ‘respectable, gowned Trollopian worthies’, and there are many examples to justify that view, for instance Holy Cross Hospital, Winchester, refounded in 1445 as the House of Noble Poverty. But these are not typical. Nevertheless, many early modern almshouse buildings are instantly recognisable, with the ubiquitous row of chimneys often the first indication of the identity of the building.

 

Burghley Almshouses, Stamford (1597)

 

Individual chimneys and, often, separate front doors are evidence of private domestic space, far removed from the communal halls of the earlier medieval period or the institutional dormitories of the nineteenth-century workhouses which came later. Accommodating almspeople in their own rooms was not just a reflection of general changes in domestic architecture at the time, which placed greater emphasis on comfort and privacy; it represented a change in how almspeople were viewed and how they were expected to live their lives. Instead of living communally with meals provided, the residents of most post-Reformation almshouses lived independently, buying their own food, cooking it themselves on their own hearth and eating it by themselves in their rooms. The hearth was important not only as the practical means of heating and cooking, but also as something central to questions of identity and social status. Together with individual front doors, these features gave occupants a degree of independence and autonomy; they enabled almspeople to live independently despite their economic dependence, and to adopt the appearance, if not the reality, of independent householders.

 

Stoneleigh Old Almshouses, Warwickshire (1576)

 

The retreat from communal living also meant that almspeople had to support themselves rather than have all their needs met by the almshouse. This was achieved in many places by a transition to monetary allowances or stipends with which almspeople could purchase their own food and necessities, but the existence and level of these stipends varied considerably. Late medieval almshouses often specified an allowance of a penny a day, which would have provided a basic but adequate living in the fifteenth century, but was seriously eroded by sixteenth-century inflation. Thus when Lawrence Sheriff, a London mercer, established in 1567 an almshouse for four poor men in his home town of Rugby, his will gave each of them the traditional penny a day, or £1 10s 4d a year. Yet with inflation, if these stipends were to match the real value of their late-fifteenth-century counterparts, his almsmen would actually have needed £4 5s 5d a year.[1]

The nationwide system of poor relief established by the Tudor Poor Laws, and the survival of poor relief accounts from many parishes by the late seventeenth century, provide an opportunity to see the actual amounts disbursed in relief by overseers of the poor to parish paupers. From the level of payments made to elderly paupers no longer capable of work, it is possible to calculate the barest minimum which an elderly person living rent-free in an almshouse might have needed to feed and clothe themselves and keep warm.[2] Such a subsistence level in the 1690s equates to an annual sum of £3 17s, which can be adjusted for inflation and compared with a range of known almshouse stipends from the late sixteenth and seventeenth centuries.
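
To make the comparison concrete, here is a minimal sketch of how an annual stipend in pounds, shillings and pence might be set against the £3 17s benchmark (the £1 = 20s = 240d conversion is standard; the example stipend and price-level factor are placeholders, not figures from the book):

```python
# Sketch: compare an almshouse stipend with the 1690s subsistence benchmark,
# working in old English money (1 pound = 20 shillings = 240 pence).

def to_pence(pounds=0, shillings=0, pence=0):
    return pounds * 240 + shillings * 12 + pence

def to_lsd(total_pence):
    pounds, rest = divmod(total_pence, 240)
    shillings, pence = divmod(rest, 12)
    return pounds, shillings, pence

# Subsistence benchmark of £3 17s a year in the 1690s (from the text above).
subsistence_1690s = to_pence(pounds=3, shillings=17)

# Placeholder stipend and price-level adjustment -- illustrative values only.
stipend = to_pence(pounds=2, shillings=12)   # a hypothetical stipend of £2 12s
price_ratio = 0.85                           # assumed price level relative to the 1690s

sufficient = stipend >= subsistence_1690s * price_ratio
print("stipend", to_lsd(stipend), "sufficient to live on?", sufficient)
```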

The results of this comparison are interesting, even surprising. Using data on 147 known almshouse stipends in six different counties (Durham, Yorkshire, Norfolk, Warwickshire, Buckinghamshire and Kent), it seems that fewer than half of early modern almshouses provided their occupants with stipends sufficient to live on. Many provided no financial assistance at all.


The inescapable conclusion is that the benefits provided to early modern almspeople were in many cases only a contribution towards their subsistence. In this respect almshouse occupants were no different from the recipients of parish poor relief, who rarely had their living costs met in full.

Yet, even in one of the poorer establishments, almshouse residents had distinct advantages over other poor people. Principally these were the security of their accommodation, the permanence and regularity of any financial allowance, no matter how small, and the autonomy this gave them. Almshouse residents may also have had an enhanced status as ‘approved’, deserving poor. The location of many almshouses, beside the church, in the high street, or next to the guildhall, seems to have been purposely designed to solicit alms from passers-by, at a time when begging was officially discouraged.

SAVE 25% when you order direct from the publisher. Discount applies to print and eBook editions. Click the link, add to basket and enter offer code BB500 in the box at the checkout. Alternatively call Boydell’s distributor, Wiley, on 01243 843 291 and quote the same code. Offer ends one month after the date of upload. Any queries please email marketing@boydell.co.uk

 

NOTES

[1] Inflation index derived from H. Phelps Brown and S. V. Hopkins, A Perspective of Wages and Prices (London and New York, 1981) pp. 13-59.

[2] L. A. Botelho, Old Age and the English Poor Law, 1500 – 1700 (Woodbridge, 2004) pp. 147-8.

Wheels of change: skill-biased factor endowments and industrialisation in eighteenth century England

by Joel Mokyr (Northwestern University), Assaf Sarid (Haifa University), Karine van der Beek (Ben-Gurion University)

Shorrocks Lancashire Loom with a weft stop, The Museum of Science and Industry in Manchester. Available at Wikimedia Commons

The main manifestation of an industrial revolution taking place in Britain in the second half of the eighteenth century was the shift of textile production (that is, of the spinning process) from a cottage-based manual system to a factory-based, capital-intensive system, with machinery driven by waterpower and later on by steam.

The initial shift in production technology in the 1740s took place in all the main textile centres (the Cotswolds, East Anglia, and the middle Pennines in Lancashire and the West Riding). But towards the end of the century, as the intensity of production and the application of Watt’s steam engine increased, the supremacy of the cotton industry of the northwestern parts of the country began to show, and this is where the industrial revolution eventually took place and persisted.

Our research examines the role of factor endowments in determining the location of technology adoption in the English textile industry and its persistence since the Middle Ages. In line with recent research on economic growth, which emphasises the role of factor endowments on long run economic development, we claim that the geographical and institutional environment determined the location of watermill technology adoption in the production of foodstuffs.

In turn, the adoption of the watermill for grain grinding (around the tenth and eleventh centuries), affected the area’s path of development by determining the specialisation and skills that evolved, and as a result, its suitability for the adoption of new textile technologies, textile fulling (thirteenth and fourteenth centuries) and, later on, spinning (eighteenth century).

The explanation for this path dependence is that all these machines, like the other machinery developed for various production processes (sawing mills, forge mills, paper mills, etc.), were based on the same mechanical principles as the grinding watermills. Their implementation therefore did not require additional resources or skills, and it was more profitable to invest in them and expand textile production in places that were already specialised and experienced in the construction and maintenance of grinding watermills.

As textile exports (both woollen and cotton) expanded in the second half of the eighteenth century, Watt’s steam engine was introduced. The watermills that operated the newly introduced spinning machinery began to be replaced by the more efficient steam engines, and had almost disappeared by the beginning of the nineteenth century. This stage of technological change took place in Lancashire’s textile centre, which enjoyed proximity both to coal and to strong water flows, and was therefore suitable for the implementation of steam engine technology.

We use information from a variety of sources, including the Apprenticeship Stamp-Tax Records (eighteenth century), Domesday Book (eleventh century) and geographical databases, and show that the important English textile centres of the eighteenth century evolved in places that had more grinding watermills at the time of the Domesday Survey (1086).

To be more precise, we find that on average, there was an additional textile merchant in 1710 in areas that had three more watermills in 1086. The magnitude of this effect is important given that there were on average 1.2 textile cloth merchants in an area (the maximum was 34 merchants).
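
As a rough illustration of the kind of cross-sectional relationship being estimated, here is a sketch (the simple OLS specification and the made-up data are assumptions for illustration, not the authors’ actual model or sources):

```python
import numpy as np

# Illustrative sketch: regress the number of textile cloth merchants in 1710
# on the number of grinding watermills recorded in Domesday Book (1086).
# The data below are invented purely to show the mechanics.
watermills_1086 = np.array([0, 1, 2, 3, 5, 8, 12, 20], dtype=float)
merchants_1710  = np.array([0, 0, 1, 1, 2, 3,  4,  7], dtype=float)

X = np.column_stack([np.ones_like(watermills_1086), watermills_1086])
(intercept, slope), *_ = np.linalg.lstsq(X, merchants_1710, rcond=None)

# A slope near 1/3 would correspond to "one additional merchant per three watermills".
print(f"slope = {slope:.2f} merchants per watermill "
      f"({3 * slope:.2f} per three watermills)")
```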

We also find that textile centres in these areas persisted well into the eighteenth century and specialised in skilled mechanical human capital (measured by the number of apprentices to masters specialising in watermill technology, that is, wrights, in the eighteenth century), which was essential for the development, implementation and maintenance of waterpower as well as mechanical machinery.

The number of this type of worker increased from the 1750s in all the main textile centres until the 1780s, when it began to decline in Lancashire as the region adopted a new technology that was no longer dependent on their skills.

Paul Romer: The view from economic history

by Nuno Palma (University of Manchester and CEPR). Republished with minor adjustments from the blog “Economic Growth in History”

 

In this post I write about the connections between Paul Romer’s work, which is essentially applied theory, and the empirical work on long-run economic growth done by economic historians. How was Romer influenced by the work of economic historians? Has he influenced economic history? And have his theories been confirmed by the recent work of economic historians? (Preview: I will argue that the answers are yes, not much, and no.) Nevertheless, my point is not that Romer is wrong in general; in fact, some of his ideas *about ideas* are fundamental for thinking about growth in the past (read on if this isn’t clear yet).

 

Paul Romer in 2005. Wikimedia Commons

Paul Romer’s was a well-deserved and long-anticipated prize. Many predicted he would eventually win, including myself in my very first academic article, written when I was an undergraduate and published in 2008 (ungated version here). I now find it mildly amusing how assertive I was when I wrote: “Paul Romer is going to win the Nobel Prize in economics”. I continue to believe that this was a good choice.

Many have written about the nature of his main contributions, all of which, as I have said, have been in applied theory; see, for instance, the posts in Dietrich Vollrath’s blog, here and here, or Paul Romer’s own blog.

Romer’s work had some influence on economic history, but not much. There is, for instance, a 1995 article by Nick Crafts which looks at the Industrial Revolution from a New Growth perspective, but it is fair to say that economic historians were perhaps not quick to pick up the New Growth theory train. Part of this was surely because its implications seemed to apply mostly to frontier economies and not to much of human history, a limitation which Unified Growth Theory would later attempt to overcome.

And yet, Romer himself has often spoken about economic history and relied on the data of economic historians. He now seems to have won mostly for his 1990 article, but his earlier work on increasing returns (ungated version here) included a graph from Maddison, for instance.

One of the most empirical papers Romer has written is “Why indeed in America?”,  which was the culmination of much of what he had done before. It was also one of the last papers he wrote before entering a writing hiatus. In this paper he explicitly argues for the complementarity of economic history and growth theory. He argues that the USA achieved economic supremacy after 1870 due to having the largest integrated market in the world. He writes:

“differences in saving and education do not explain why growth was so much faster in the United States than it was in Britain around the turn of this century. In 1870, per capita income in the United States was 75 percent of per capita income in Britain. By 1929, it had increased to 130 percent. In the intervening decades, years of education per worker increased by a factor of 2.2 in Britain and by a nearly identical factor of 2.3 in the United States. In 1929, this variable remained slightly lower in the United States. (Data are taken from Angus Maddison [1995].)”

Notice that there are three empirical statements here. Romer’s story builds on these facts, so if the facts change, the story must too. Theory depends on facts.

The first fact (according to Romer) is that the US only converged to British per capita GDP levels after 1870. Second, that this was not due to matters such as education or savings. Third, the reason was market size. As economic historians, we have made much progress in measuring each of these matters since 1995. Let me consider each in turn.

Timing of convergence of the USA to Britain

The important thing to keep in mind here is that it is by no means certain that the USA had not caught up earlier. The methodological issues are complicated and, in fact, today’s other (and equally deserving) Nobel laureate, Nordhaus, wrote a fascinating paper about the problems involved in these types of measurement. (A popular description of this work can be read here.) As far as the USA vs Britain is concerned, though, Marianne Ward and John Devereux summarize the debate as follows:

“Prados De la Escosura (2000) and Ward and Devereux (2003, 2004, 2005) argue for an early US income lead using current price estimates. Broadberry (2003) and Broadberry and Irwin (2006) defend the Maddison projections while Woltjer (2015) hews to a middle ground. The literature has recently taken an unexpected turn as Peter Lindert and Jeffrey Williamson, Lindert and Williamson (2016), find a larger US lead before 1870 and one that stretches further back in time than claimed by either Prados De la Escosura (2000) or Ward and Devereux (2005).”

Comparative levels of education

Recent evidence suggests that the average years of post-primary education actually declined in Britain after about 1700 (ungated version here). This was not the case at all in the USA, where it is well known that the state invested in high schools, so it seems unlikely that average human capital grew at similar rates in the two countries in the latter part of the 19th century, as Maddison/Romer claimed.

Market size

I used to believe this part of Romer’s story. That was until I read this brilliant paper by Leslie Hannah: “Logistics, Market Size, and Giant Plants in the Early Twentieth Century: A Global View” (ungated version here). Notice that Hannah does not refer to Romer’s argument or even cite him. What he does instead is destroy the commonly held idea that the USA’s market size was already larger than Europe’s before the Great War (aka World War I). It is true that the USA had more railroads, but it also had much longer distances. In Northwestern Europe, transportation by a mix of ships, trains and horses was cheaper, especially once we consider the much denser (and highly urbanized) population. It is important to remember that prior to WWI, Europe was living through the “first age of globalization”, with high levels of integration and relatively low tariffs.

So, this part of Romer’s story cannot be right.

Conclusion

In conclusion, what does this all mean? Will these new facts affect where growth theory goes? Only time will tell, and growth theory itself is by no means moving much these days, as Paul Romer himself has admitted in recent interviews. What these facts suggest, though, is that other things must have mattered.

As I said in the beginning, I believe that Paul Romer’s applied theory work is important (as is that of others who might have won, such as Aghion and Howitt). The natural complementarities between the work of economic historians and applied theorists suggest that we need to listen to each other in order for science to move forward. Hopefully, new generations of economists will do a bit of both, as have some people who now work on Unified Growth.

But in the future, it would be fair for the Nobel committee to give more prizes to empirical work as well. Theory cannot live without facts, yet economics Nobels have been heavily biased towards theorists (whether pure or applied).

 

To contact the author: @nunopgpalma

 

Voting rights and financial systems: Evidence from two centuries of suffrage reforms

by Thomas Lambert (Erasmus University Rotterdam)

Extending voting rights to broader segments of the population considerably affects the way countries finance their economies. This is the key finding of our new research, recently published in the Economic Journal and available here.

Suffragette Mrs Banks and banker Mr Banks in a scene from the classic Disney movie Mary Poppins (1964), directed by Robert Stevenson

Financial systems fulfil a number of key functions in the economy, thereby contributing to its growth. By transferring funds from savers/investors to borrowers such as households and firms, financial systems are the oil for the wheels that keep the economy turning.

Therefore, it is vital to have a clear understanding of the fundamental factors that support the development of financial systems. The research shows that suffrage institutions – that is, the institutions defining who holds the right to vote in the population – play a critical role.

Financial systems encompass financial institutions (such as banks) and financial markets (such as stock markets). The population, however, is composed of corporate stakeholders (workers, investors, managers) and is therefore not indifferent about whether governments should promote – through their policy choices – the development of stock markets or of the banking sector. Both fulfil similar functions in the economy, but they have different impacts on corporate stakeholders because they differ in the degree to which each stakeholder bears corporate risk. Stock markets lead to riskier but more profitable investments, at the cost of potentially higher labor-risk bearing. In contrast, banks tend to limit the risk-taking behavior of corporate managers because, as debtholders, they do not benefit from the upside potential of riskier investments.

The voting population will politically prefer to support bank finance if, in the aggregate, it relies more on labor income. It will instead prefer stock market finance if it has a sufficient amount of capital income relative to labor income. By defining who has the right to vote, suffrage institutions thus play a pivotal role in the way countries finance their economies.

Our research analyzes the gradual extensions of suffrage to various segments of the population over the nineteenth and twentieth centuries in 18 countries. It demonstrates that suffrage extensions change the political preferences of the voting population and, thereby, policy choices supporting either stock market finance or bank finance. Specifically, it provides empirical evidence that extending suffrage to broader segments of the population hampers the development of stock markets. In contrast, broadening suffrage is conducive to banking sector development.

Further evidence reveals longer-term effects of the extension of suffrage: a 25-year delay in the introduction of universal women’s suffrage increases today’s importance of stock markets relative to the banking sector by 17.5%.

Overall, these findings are consistent with the insight that small elites pursue economic opportunities by promoting capital raised on stock markets. In contrast, a broader political participation empowers a middle class which prefers bank finance as it is composed of voters with proportionally more exposure to labor income relative to capital income.

The research has broader implications. The scope of voting rights may drive the adoption and content of financial regulation shaping the way that financial intermediation takes place. This ultimately determines the long-term performance of economies.

 

To contact the author: t.lambert@rsm.nl

From LSE Business Review – “Turf wars? Placing geographical indications at the heart of international trade”

by David M. Higgins (Newcastle University), originally published on 09 October 2018 on the LSE Business Review

 

When doing your weekly shop, have you ever noticed the small blue/yellow and red/yellow circles that appear on the wrappers of Wensleydale cheese or Parma ham? Such indicia are examples of geographical indications (GIs), or appellations: they show that a product possesses certain attributes (taste, smell, texture) that are unique to it and can only be derived from a tightly demarcated and fiercely protected geographical region. The relationship between product attributes and geography can be summed up in one word: terroir. These GIs formed an important part of the EU’s agricultural policy, launched in 1992 and represented by the PDO and PGI logos, which sought to insulate EU farmers from the effects of globalisation by encouraging them to produce unique, ‘quality’ products.

GIs have a considerable lineage: legislation enacted in 1666 reserved the sole right to ‘Roquefort’ to cheese cured in the caves at Roquefort. Until the later nineteenth century, domestic legislation was the primary means by which GIs were protected from misrepresentation. Thereafter, the rapid acceleration of international trade necessitated global protocols, among them the Paris Convention for the Protection of Industrial Property (1883) and its successors, including the Madrid Agreement for the Repression of False or Deceptive Indications of Source on Goods (1890).

Full article here: http://blogs.lse.ac.uk/businessreview/2018/10/09/turf-wars-placing-geographical-indications-at-the-heart-of-international-trade/

 

Revisiting the changing body

by Bernard Harris (University of Strathclyde)

The Society has arranged with CUP that a 20% discount is available on this book, valid until the 11th November 2018. The discount page is: www.cambridge.org/wm-ecommerce-web/academic/landingPage/EHS20

The last century has witnessed unprecedented improvements in survivorship and life expectancy. In the United Kingdom alone, infant mortality fell from over 150 deaths per thousand births at the start of the last century to 3.9 deaths per thousand births in 2014 (see the Office for National Statistics  for further details). Average life expectancy at birth increased from 46.3 to 81.4 years over the same period (see the Human Mortality Database). These changes reflect fundamental improvements in diet and nutrition and environmental conditions.

The changing body: health, nutrition and human development in the western world since 1700 attempted to understand some of the underlying causes of these changes. It drew on a wide range of archival and other sources covering not only mortality but also height, weight and morbidity. One of our central themes was the extent to which long-term improvements in adult health reflected the beneficial effect of improvements in earlier life.

The changing body also outlined a very broad schema of ‘technophysio evolution’ to capture the intergenerational effects of investments in early life. This is represented in a very simple way in Figure 1. The Figure tries to show how improvements in the nutritional status of one generation increase its capacity to invest in the health and nutritional status of the next generation, and so on ‘ad infinitum’ (Floud et al. 2011: 4).

Figure 1. Technophysio evolution: a schema. Source: See Floud et al. 2011: 3-4.
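
A purely illustrative way to read the schema is as a feedback loop in which each generation’s nutritional status raises the investment it can make in the next generation; the toy loop below makes that compounding explicit (the functional form and parameters are invented, not taken from Floud et al.):

```python
# Toy rendering of the feedback loop in Figure 1: each generation's nutritional
# status raises the investment it can make in the next generation's nutrition.
# Functional form and parameters are invented for illustration only.

def next_generation(status, investment_share=0.3, ceiling=100.0):
    investment = investment_share * status           # resources devoted to the next generation
    return min(ceiling, status + 0.5 * investment)   # improvement, capped by a biological ceiling

status = 40.0  # arbitrary starting index of nutritional status
for generation in range(6):
    print(f"generation {generation}: nutritional status index {status:.1f}")
    status = next_generation(status)
```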

We also looked at some of the underlying reasons for these changes, including the role of diet and ‘nutrition’. As part of this process, we included new estimates of the number of calories which could be derived from the amount of food available for human consumption in the United Kingdom between circa 1700 and 1913. However, our estimates contrasted sharply with others published at the same time (Muldrew 2011) and were challenged by a number of other authors subsequently. Broadberry et al. (2015) thought that our original estimates were too high, whereas both Kelly and Ó Gráda (2013) and Meredith and Oxley (2014) regarded them as too low.

Given the importance of these issues, we revisited our original calculations in 2015. We corrected an error in the original figures, used Overton and Campbell’s (1996) data on extraction rates to recalculate the number of calories, and included new information on the importation of food from Ireland to other parts of what became the UK. Our revised Estimate A suggested that the number of calories rose by just under 115 calories per head per day between 1700 and 1750 and by more than 230 calories between 1750 and 1800, with little change between 1800 and 1850. Our revised Estimate B suggested that there was a much bigger increase during the first half of the eighteenth century, followed by a small decline between 1750 and 1800 and a bigger increase between 1800 and 1850 (see Figure 2). However, both sets of figures were still well below the estimates prepared by Kelly and Ó Gráda, Meredith and Oxley, and Muldrew for the years before 1800.
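
The accounting behind estimates of this kind can be sketched as follows (a simplified illustration: the food list, extraction rates and calorie contents are placeholders, not the figures used in Harris et al. 2015):

```python
# Sketch of a food-availability calculation: kilocalories per head per day from
# gross output, extraction rates (the edible share left after milling and
# processing) and calorie content. All figures are placeholders, not the
# published estimates.

foods = {
    # food: (output in million kg per year, extraction rate, kcal per kg)
    "wheat":    (1500.0, 0.75, 3300.0),
    "potatoes": ( 400.0, 1.00,  800.0),
    "meat":     ( 300.0, 0.70, 2500.0),
}
population = 8_000_000  # assumed population

total_kcal = sum(output * 1e6 * extraction * kcal
                 for output, extraction, kcal in foods.values())
print(f"{total_kcal / population / 365:.0f} kcal per head per day")
```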

Figure 2. Source: Harris et al. 2015: 160.

These calculations have important implications for a number of recent debates in British economic and social history (Allen 2005, 2009). Our data do not necessarily resolve the debate over whether Britons were better fed than people in other countries, although they do compare quite favourably with relevant French estimates (see Floud et al. 2011: 55). However, they do suggest that a significant proportion of the eighteenth-century population was likely to have been underfed.
Our data also raise some important questions about the relationship between nutrition and mortality. Our revised Estimate A suggests that food availability rose slowly between 1700 and 1750 and then more rapidly between 1750 and 1800, before levelling off between 1800 and 1850. These figures are still broadly consistent with Wrigley et al.’s (1997) estimates of the main trends in life expectancy and our own figures for average stature. However, it is not enough simply to focus on averages; we also need to take account of possible changes in the distribution of foodstuffs within households and the population more generally (Harris 2015). Moreover, it is probably a mistake to examine the impact of diet and nutrition independently of other factors.

To contact the author: bernard.harris@strath.ac.uk

References

Allen, R. (2005), ‘English and Welsh agriculture, 1300-1850: outputs, inputs and income’. URL: https://www.nuffield.ox.ac.uk/media/2161/allen-eandw.pdf.

Allen, R. (2009), The British industrial revolution in global perspective, Cambridge: Cambridge University Press.

Broadberry, S., Campbell, B., Klein, A., Overton, M. and Van Leeuwen, B. (2015), British economic growth, 1270-1870, Cambridge: Cambridge University Press.

Floud, R., Fogel, R., Harris, B. and Hong, S.C. (2011), The changing body: health, nutrition and human development in the western world since 1700, Cambridge: Cambridge University Press.

Harris, B. (2015), ‘Food supply, health and economic development in England and Wales during the eighteenth and nineteenth centuries’, Scientia Danica, Series H, Humanistica, 4 (7), 139-52.

Harris, B., Floud, R. and Hong, S.C. (2015), ‘How many calories? Food availability in England and Wales in the eighteenth and nineteenth centuries’, Research in Economic History, 31, 111-91.

Kelly, M. and Ó Gráda, C. (2013), ‘Numerare est errare: agricultural output and food supply in England before and during the industrial revolution’, Journal of Economic History, 73 (4), 1132-63.

Meredith, D. and Oxley, D. (2014), ‘Food and fodder: feeding England, 1700-1900’, Past and Present, 222, 163-214.

Muldrew, C. (2011), Food, energy and the creation of industriousness: work and material culture in agrarian England, 1550-1780, Cambridge: Cambridge University Press.

Overton, M. and Campbell, B. (1996), ‘Production et productivité dans l’agriculture anglaise, 1086-1871’, Histoire et Mésure, 1 (3-4), 255-97.

Wrigley, E.A., Davies, R., Oeppen, J. and Schofield, R. (1997), English population history from family reconstitution, Cambridge: Cambridge University Press.

Surprisingly gentle confinement

Tim Leunig (LSE), Jelle van Lottum (Huygens Institute) and Bo Poulsen (Aalborg University) have been investigating the treatment of prisoners of war in the Napoleonic Wars.

 

Napoleonic Prisoner of War. Available at <https://blog.findmypast.com.au/explore-our-fascinating-new-napoleonic-prisoner-of-war-records-1406376311.html>

For most of history, life as a prisoner of war was nasty, brutish and short. There were no regulations on the treatment of prisoners until the 1899 Hague Convention and the later Geneva Conventions. Many prisoners were killed immediately; others were enslaved to work in mines and other undesirable places.

The poor treatment of prisoners of war was partly intentional – they were the hated enemy, after all. And partly it was economic. It costs money to feed and shelter prisoners. Countries in the past – especially in times of war and conflict – were much poorer than today.

Nineteenth-century prisoner death rates were horrific. Between one-half and six-sevenths of the 17,000 of Napoleon’s troops who surrendered to the Spanish in 1808 after the Battle of Bailén died as prisoners of war. The American Civil War saw death rates rise to 27%, even though the average prisoner was captive for less than a year.

The Napoleonic Wars saw the British capture 7,000 Danish and Norwegian sailors, military and merchant. Britain did not desire war with Denmark (which ruled Norway at the time), but went to war to prevent Napoleon from seizing the Danish fleet. Prisoners were incarcerated on old, unseaworthy “prison hulks” moored in the Thames Estuary, near Rochester. Conditions were crowded: each man was given just 2 feet (60 cm) in width to hang his hammock.

Were these prison hulks floating tombs, as some contemporaries claimed? Our research shows otherwise. The Admiralty kept exemplary records, now held in the National Archive in Kew. These show the date of arrival in prison, and the date of release, exchange, escape – or death. They also tell us the age of the prisoner, where they came from, the type of ship they served on, and whether they were an officer, craftsman, or regular sailor. We can use these records to look at how many died, and why.

The prisoners ranged in age from 8 to 80, with half aged 22 to 35. The majority sailed on merchant vessels, with a sixth on military vessels, and a quarter on licenced pirate boats, permitted to harass British shipping. The amount of time in prison varied dramatically, from 3 days to over 7 years, with an average of 31 months. About two thirds were released before the end of the war.

Taken as a whole, 5% of prisoners died. This is a remarkably low number, given how long they were held, and given experience elsewhere in the nineteenth century. Being held prisoner for longer increased your chance of dying, but not by much: those who spent three years on a prison hulk had only a 1% greater chance of dying than those who served just one year.

Death was (almost) random. Being captured at the start of the war was neither better nor worse than being captured at the end. The number of prisoners held at any one time did not increase the death rate. The old were no more likely to die than the young – anyone fit enough to go to sea was fit enough to withstand the rigours of prison life. Despite extra space and better rations, officers were no less likely to die, implying that conditions were reasonable for common sailors.

There is only one exception: sailors from licenced pirate boats were twice as likely to die as merchant or official navy sailors. We cannot know the reason. Perhaps they were treated less well by their guards, or other prisoners. Perhaps they were risk takers, who gambled away their rations. Even for this group, however, the death rates were very low compared with those captured in other places, and in other wars.
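
For readers curious how such determinants of mortality can be examined with records of this kind, here is a minimal sketch (the linear-probability specification, variable names and simulated data are assumptions for illustration, not necessarily the authors’ method):

```python
import numpy as np

# Sketch: linear-probability model of death on time held and prisoner traits,
# using simulated records shaped like the Admiralty registers described above.
# The specification and the data are illustrative assumptions only.

rng = np.random.default_rng(0)
n = 500
months_held = rng.uniform(1, 84, n)        # roughly 3 days to 7 years
age         = rng.uniform(8, 80, n)
privateer   = rng.integers(0, 2, n)        # sailed on a licenced pirate boat (1) or not (0)

# Simulated outcome: a small duration effect and a higher privateer risk.
p_death = 0.03 + 0.0003 * months_held + 0.03 * privateer
died = (rng.random(n) < p_death).astype(float)

X = np.column_stack([np.ones(n), months_held, age, privateer])
beta, *_ = np.linalg.lstsq(X, died, rcond=None)
print(dict(zip(["const", "months_held", "age", "privateer"], beta.round(4))))
```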

The British had rules on prisoners of war, for food and hygiene. Each prisoner was entitled to 2.5 lbs (~1 kg) of beef, 1 lb of fish, 10.5 lbs of bread, 2 lbs of potatoes, 2.5lbs of cabbage, and 14 pints (8 litres) of (very weak) beer a week. This is not far short of Danish naval rations, and prisoners are less active than sailors. We cannot be sure that they received their rations in full every week, but the death rates suggest that they were not hungry in any systematic way. The absence of epidemics suggests that hygiene was also good. Remarkably, and despite a national debt that peaked at a still unprecedented 250% of GDP, the British appear to have obeyed their own rules on how to treat prisoners.

Far from being floating tombs, therefore, this was a surprisingly gentle confinement for the Danish and Norwegian sailors captured by the British in the Napoleonic Wars.

Britain’s post-Brexit trade: learning from the Edwardian origins of imperial preference

by Brian Varian (Swansea University)

Imperial Federation, map of the world showing the extent of the British Empire in 1886. Wikimedia Commons

In December 2017, Liam Fox, the Secretary of State for International Trade, stated that ‘as the United Kingdom negotiates its exit from the European Union, we have the opportunity to reinvigorate our Commonwealth partnerships, and usher in a new era where expertise, talent, goods, and capital can move unhindered between our nations in a way that they have not for a generation or more’.

As policy-makers and the public contemplate a return to the halcyon days of the British Empire, there is much to be learned from those past policies that attempted to cultivate trade along imperial lines. Let us consider the effect of the earliest policies of imperial preference: policies enacted during the Edwardian era.

In the late nineteenth century, Britain was the bastion of free trade, imposing tariffs on only a very narrow range of commodities. Consequently, Britain’s free trade policy afforded barely any scope for applying lower or ‘preferential’ duties to imports from the Empire.

The self-governing colonies of the Empire possessed autonomy in tariff-setting and, with the notable exception of New South Wales, did not emulate the mother country’s free trade policy. In the 1890s and 1900s, when the emergent industrial nations of Germany and the United States reduced Britain’s market share in these self-governing colonies, there was indeed scope for applying preferential duties to imports from Britain, in the hope of diverting trade back toward the Empire.

Trade policies of imperial preference were implemented in succession by Canada (1897), the South African Customs Union (1903), New Zealand (1903) and Australia (1907). By the close of the first era of globalisation in 1914, Britain enjoyed some margin of preference in all of the Dominions. Yet my research, a case study of New Zealand, casts doubt on the effectiveness of these policies at raising Britain’s share in the imports of the Dominions.

Unlike the policies of the other Dominions, New Zealand’s policy applied preferential duties to only selected commodity imports (44 out of 543). This cross-commodity variation in the application of preference is useful for estimating the effect of preference. I find that New Zealand’s Preferential and Reciprocal Trade Act of 1903 had no effect on the share of the Empire, or of Britain specifically, in New Zealand’s imports.

Why was the policy ineffective at raising Britain’s share of New Zealand’s imports? There are several likely reasons: that Britain’s share was already quite large; that some imported commodities were highly differentiated and certain varieties were only produced in other industrial countries; and, most importantly, that the margin of preference – the extent to which duties were lower for imports from Britain – was too small to effect any trade diversion.

As Britain considers future trade agreements, perhaps with Commonwealth countries, it should be remembered that a trade agreement does not necessarily entail a great, or even any, increase in trade. The original policies of imperial preference were rather symbolic measures and, at least in the case of New Zealand, economically inconsequential.

Brexit might well present an ‘opportunity to reinvigorate our Commonwealth partnerships’, but would that be a reinvigoration in substance or in appearance?

Could fiscal policy still stimulate the economy?

by James Cloyne (University of California, Davis), Nicholas Dimsdale (University of Oxford), Natacha Postel-Vinay (London School of Economics)

 

No means test for these ‘unemployed’! by Maro.
1935 was the Silver Jubilee of King George V. There were celebrations and street parties across Britain. However, with the country in a financial depression, not everyone approved of the public expense associated with the Royal Family. Available at Wikimedia Commons

There has been a longstanding and unresolved debate over the fiscal multiplier: the change in output resulting from a change in government spending or taxation. The issue became acute in the world recession of 2008-2010, when the International Monetary Fund led a spirited discussion about the contribution that fiscal policy could make to recovery.
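
In its simplest textbook form, the multiplier is the change in output per unit change in the fiscal instrument (this is the standard definition, not a formula taken from the paper):

```latex
% The fiscal multiplier: the change in output per unit change in the fiscal
% instrument (government spending G or taxes T).
\[
  k_G = \frac{\Delta Y}{\Delta G}, \qquad k_T = \frac{\Delta Y}{\Delta T}
\]
% In the elementary Keynesian cross with marginal propensity to consume c:
\[
  k_G = \frac{1}{1 - c} > 1, \qquad k_T = \frac{-c}{1 - c}.
\]
```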

In our research, fiscal policy is shown to have had positive impacts on growth, at least during the period surrounding the Great Depression in Britain. The implications for the potential benefits of fiscal policy in a high-debt, low-interest rate environment – and over a turbulent business cycle – may be significant.

The recent controversy follows the debate over the use of fiscal policy to counter the high level of unemployment in interwar Britain. Keynes argued that increased government spending would raise economic activity and reduce unemployment. In the General Theory (1936), he claimed that the multiplier for government expenditure was greater than unity.

A few more recent studies have confirmed that the multiplier effect is greater than unity for both the interwar and post-war periods. But these results may be spurious, since a rise in government expenditure that raises income may itself result from a rise in income. Likewise, changes in taxes and changes in income may not be independent. What we observe is a strong co-movement of GDP and fiscal measures in which it is hard to isolate the direction of causation.

What is needed is a source of exogenous variation, so that the impact of fiscal changes on GDP can be observed. Fiscal policy may take the form of changes in taxes or expenditure. The problems of endogeneity are generally greater for expenditure than for taxes, since it should be possible to find changes in taxes that are truly exogenous.

Romer and Romer (2010) have developed the so-called ‘narrative technique,’ which has been designed to overcome the problem of endogeneity of tax changes. This involves carefully distilling the historical record in order to infer Chancellors’ motivations behind each fiscal policy move, and isolate those that may be seen as more independent from the contemporaneous fluctuations of the economy.

One may thus be able to distinguish, for example, between tax changes that arise from a direct desire to stimulate the economy and changes motivated more by a Chancellor’s longstanding ideology. The latter may include, for example, a will to improve transport efficiency within the country, or a desire to make society less unequal.

Interwar Britain is a particularly appropriate period to apply this approach, since the potential for fiscal policy was great on account of the high level of unemployment. In addition, this was a period in which Keynesian countercyclical policies were generally not used, in contrast to the use of demand management policies in the post-war period.

By examining changes in taxes in interwar budgets, we have been able to produce a sample of 300 tax changes. These have been classified into changes in taxes that are endogenous or exogenous. We have been able to test the backward validity of our classification.
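
In estimation terms, the narrative approach boils down to regressing output growth on the narratively identified exogenous tax changes; the sketch below illustrates the idea (the lag structure, variable names and simulated data are assumptions, not the paper’s actual specification):

```python
import numpy as np

# Sketch of a Romer-and-Romer-style regression: GDP growth on current and
# lagged narratively identified exogenous tax changes (as % of GDP).
# The lag length and the simulated data are illustrative assumptions only.

rng = np.random.default_rng(1)
T = 80
tax_shock = rng.normal(0.0, 0.5, T)   # exogenous tax changes, % of GDP
growth = (2.0 - 1.5 * tax_shock - 0.8 * np.roll(tax_shock, 1)
          + rng.normal(0.0, 1.0, T))  # simulated annual GDP growth, %

lags = 2
X = np.array([[1.0] + [tax_shock[t - j] for j in range(lags + 1)]
              for t in range(lags, T)])
y = growth[lags:]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# Negative coefficients on the tax terms mean a tax rise lowers output;
# minus their sum is a crude cumulative "tax multiplier".
print("coefficients (const, lag 0, lag 1, lag 2):", beta.round(2))
print("implied cumulative multiplier:", round(-beta[1:].sum(), 2))
```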

The outcome of this work has been to show that changes in taxes that are exogenous had a major impact on changes in GDP. The estimated value of the multiplier for these tax changes is greater than unity and as much as two to three. This is in accordance with results reported in post-war studies of the United States and a study of tax changes in post-war Britain (Cloyne, 2013).

In contrast to earlier work on measuring the multiplier, we concentrate on changes in taxes rather than changes in government expenditure. This is done to reduce problems of endogeneity.

While Keynes argued for using government spending to stimulate the economy, it was only when post-war fiscal policies were being formulated that the potential benefits of fiscal policies via changes in taxes were recognised. While this research does not argue in favour of tax changes over spending policies, it provides evidence that tax policy is a relevant part of the policy toolkit, especially in times of economic difficulty.

Lessons for the euro from Italian and German monetary unification in the nineteenth century

by Roger Vicquéry (London School of Economics)

Special euro-coin issued in 2012 to celebrate the 150th anniversary of the monetary unification of Italy. From Numismatica Pacchiega, available at <https://www.numismaticapacchiega.it/5-euro-annivesario-unificazione/>

Is the euro area sustainable in its current membership form? My research provides new lessons from past examples of monetary integration, looking at the monetary unification of Italy and Germany in the second half of the nineteenth century.

 

Currency areas’ optimal membership has recently been at the forefront of the policy debate, as the original choice of letting peripheral countries join the euro was widely blamed for the common currency’s existential crisis. Academic work on ‘optimum currency areas’ (OCA) traditionally warned against the risk of adopting a ‘one size fits all’ monetary policy for regions with differing business cycles.

Krugman (1993) even argued that monetary unification in itself might increase its own costs over time, as regions are encouraged to specialise and thus become more different from one another. But those concerns were dismissed by Frankel and Rose’s (1998) influential ‘OCA endogeneity’ theory: once regions with ex-ante diverging paths join a common currency, their business cycles will progressively synchronise ex-post.

My findings question the consensus view in favour of ‘OCA endogeneity’ and raise the issue of the adverse effects of monetary integration on regional inequality. I argue that the Italian monetary unification played a role in the emergence of the regional divide between Italy’s Northern and Southern regions by the turn of the twentieth century.

I find that pre-unification Italian regions experienced largely asymmetric shocks, pointing to high economic costs stemming from the 1862 Italian monetary unification. While money markets in Northern Italy were synchronised with the core of the European monetary system, Southern Italian regions tended to move together with the European periphery.

The Italian unification is an exception in this respect, as I show that other major monetary arrangements in this period, particularly the German monetary union but also the Latin Monetary Convention and the Gold Standard, occurred among regions experiencing high shock synchronisation.

Contrary to what ‘OCA endogeneity’ would imply, shock asymmetry among Italian regions actually increased following monetary unification. I estimate that pairs of Italian provinces that came to be integrated following unification became, over four decades, up to 15% more dissimilar to one another in their economic structure compared to pairs of provinces that already belonged to the same monetary union. This means that, in line with Krugman’s pessimistic take on currency areas, economic integration in itself increased the likelihood of asymmetric shocks.
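
One standard way to quantify dissimilarity in economic structure is a Krugman-style specialisation index between pairs of regions, sketched below (the choice of index and the sector shares are assumptions for illustration, not necessarily the measure used in this research):

```python
import numpy as np

# Krugman-style dissimilarity index between two provinces: the sum of absolute
# differences in sectoral employment shares (0 = identical structure,
# 2 = completely different). The sector shares below are invented illustrations.

def dissimilarity(shares_a, shares_b):
    a = np.asarray(shares_a, dtype=float)
    b = np.asarray(shares_b, dtype=float)
    return float(np.abs(a / a.sum() - b / b.sum()).sum())

# Shares across (agriculture, textiles, other industry, services).
north_1871, south_1871 = [0.35, 0.25, 0.20, 0.20], [0.55, 0.15, 0.10, 0.20]
north_1911, south_1911 = [0.25, 0.30, 0.25, 0.20], [0.60, 0.10, 0.08, 0.22]

print("1871:", round(dissimilarity(north_1871, south_1871), 2))
print("1911:", round(dissimilarity(north_1911, south_1911), 2))  # larger = more dissimilar
```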

In this respect, the global grain crisis of the 1880s, which disproportionately affected the agricultural South while Italy pursued a restrictive monetary policy, might have laid the foundations for the Italian ‘Southern Question’. As pointed out by Krugman, asymmetric shocks in a currency area with low transaction costs can lead to a permanent loss in regional income, as prices are unable to adjust fast enough to prevent factors of production from permanently leaving the affected region.

The policy implications of this research are twofold.

First, the results caution against the prevalent view that cyclical symmetry within a currency area is bound to improve by itself over time. In particular, the role of specialisation and factor mobility in driving cyclical divergence needs to be reassessed. As the euro area moves towards more integration, additional specialisation of its regions could further magnify – by increasing the likelihood of asymmetric shocks – the challenges posed by the ‘one size fits all’ policy of the European Central Bank on the periphery.

Second, the Italian experience of monetary unification underlines how the sustainability of currency areas is chiefly related to political will rather than economic costs. Despite the fact that the Italian monetary union has been sub-optimal from the start and to a large extent remained so, it has managed to survive unscathed for the last century and a half. While the OCA framework is a good predictor of currency areas’ membership and economic performance, their sustainability is likely to be a matter of political integration.