How well off were the occupants of early modern almshouses?

by Angela Nicholls (University of Warwick).

Almshouses in Early Modern England is published by Boydell Press. SAVE 25% when you order direct from the publisher – offer ends on 13 December 2018. See below for details.


Almshouses, charitable foundations providing accommodation for poor people, are a feature of many towns and villages. Some are very old, with their roots in medieval England as monastic infirmaries for the sick, pilgrims and travellers, or as chantries offering prayers for the souls of their benefactors. Many survived the Reformation to be joined by a remarkable number of new foundations between around 1560 and 1730. For many of them their principal purpose was as sites of memorialisation and display, tangible representations of the philanthropy of their wealthy donors. But they are also some of the few examples of poor people’s housing to have survived from the early modern period, so can they tell us anything about the material lives of the people who lived in them?

Paul Slack famously referred to almspeople as ‘respectable, gowned Trollopian worthies’, and there are many examples to justify that view, for instance Holy Cross Hospital, Winchester, refounded in 1445 as the House of Noble Poverty. But these are not typical. Nevertheless, many early modern almshouse buildings are instantly recognisable, with the ubiquitous row of chimneys often the first indication of the identity of the building.

 

Burghley Almshouses, Stamford (1597)

 

Individual chimneys and, often, separate front doors are evidence of private domestic space, far removed from the communal halls of the earlier medieval period, or the institutional dormitories of the nineteenth-century workhouses which came later. Accommodating almspeople in their own rooms was not just a reflection of general changes in domestic architecture at the time, which placed greater emphasis on comfort and privacy, but represented a change in how almspeople were viewed and how they were expected to live their lives. Instead of living communally with meals provided, in the majority of post-Reformation almshouses the residents would have lived independently, buying their own food, cooking it themselves on their own hearth and eating it by themselves in their rooms. The hearth mattered not only as the practical means of heating and cooking; it was also central to questions of identity and social status. Together with individual front doors, these features gave occupants a degree of independence and autonomy; they enabled almspeople to live independently despite their economic dependence, and to adopt the appearance, if not the reality, of independent householders.

 

Stoneleigh Old Almshouses, Warwickshire (1576)

 

The retreat from communal living also meant that almspeople had to support themselves rather than have all their needs met by the almshouse. This was achieved in many places by a transition to monetary allowances or stipends with which almspeople could purchase their own food and necessities, but the existence and level of these stipends varied considerably. Late medieval almshouses often specified an allowance of a penny a day, which would have provided a basic but adequate living in the fifteenth century but was seriously eroded by sixteenth-century inflation. Thus when Lawrence Sheriff, a London mercer, established an almshouse for four poor men in his home town of Rugby in 1567, his will gave each of them the traditional penny a day, or £1 10s 4d a year. Yet for these stipends to match the real value of their late-fifteenth-century counterparts, his almsmen would actually have needed £4 5s 5d a year.[1]

The nationwide system of poor relief established by the Tudor Poor Laws, and the survival of poor relief accounts from many parishes by the late seventeenth century, provide an opportunity to see the actual amounts disbursed in relief by overseers of the poor to parish paupers. From the level of payments made to elderly paupers no longer capable of work it is possible to calculate the barest minimum which an elderly person living rent free in an almshouse might have needed to feed and clothe themselves and keep warm.[2] Such a subsistence level in the 1690s equates to an annual sum of £3 17s, which can be adjusted for inflation and compared with a range of known almshouse stipends from the late sixteenth and seventeenth centuries.
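
For readers unused to pre-decimal money, the arithmetic behind these comparisons is easy to check. Below is a minimal sketch in Python: the sums are those quoted above and in the notes, and the multiplier is simply the ratio they imply, not the Phelps Brown and Hopkins index itself.

```python
# Pre-decimal English money: £1 = 20 shillings (s), 1s = 12 pence (d).
def to_pence(pounds=0, shillings=0, pence=0):
    return pounds * 240 + shillings * 12 + pence

def fmt(d):
    return f"£{d // 240} {(d % 240) // 12}s {d % 12}d"

stipend_1567 = to_pence(1, 10, 4)    # Sheriff's penny a day, reckoned as £1 10s 4d a year
needed_1567 = to_pence(4, 5, 5)      # late-fifteenth-century real value, per note [1]
subsist_1690s = to_pence(3, 17, 0)   # bare annual subsistence in the 1690s, per note [2]

# The erosion implied by the two 1567 figures: the stipend would have needed
# to be roughly 2.8 times larger to keep its fifteenth-century value.
print(f"{needed_1567 / stipend_1567:.2f}x")          # -> 2.82x

# The 1690s benchmark in pence, ready to be deflated with a price index
# before comparison with sixteenth- and seventeenth-century stipends.
print(fmt(subsist_1690s), "=", subsist_1690s, "d")   # -> £3 17s 0d = 924 d
```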

The results of this comparison are interesting, even surprising. Using data from 147 known almshouse stipends in six different counties (Durham, Yorkshire, Norfolk, Warwickshire, Buckinghamshire and Kent), it seems that fewer than half of early modern almshouses provided their occupants with stipends sufficient to live on. Many provided no financial assistance at all.


The inescapable conclusion is that the benefits provided to early modern almspeople were in many cases only a contribution towards their subsistence. In this respect almshouse occupants were no different from the recipients of parish poor relief, who rarely had their living costs met in full.

Yet even in the poorer establishments, almshouse residents had distinct advantages over other poor people. Principally these were the security of their accommodation, the permanence and regularity of any financial allowance, no matter how small, and the autonomy this gave them. Almshouse residents may also have had an enhanced status as 'approved', deserving poor. The location of many almshouses – beside the church, in the high street, or next to the guildhall – seems to have been purposely designed to solicit alms from passers-by, at a time when begging was officially discouraged.

SAVE 25% when you order direct from the publisher. Discount applies to print and eBook editions. Click the link, add to basket and enter offer code BB500 in the box at the checkout. Alternatively call Boydell’s distributor, Wiley, on 01243 843 291 and quote the same code. Offer ends one month after the date of upload. Any queries please email marketing@boydell.co.uk

 

NOTES

[1] Inflation index derived from H. Phelps Brown and S. V. Hopkins, A Perspective of Wages and Prices (London and New York, 1981) pp. 13-59.

[2] L. A. Botelho, Old Age and the English Poor Law, 1500 – 1700 (Woodbridge, 2004) pp. 147-8.

Wheels of change: skill-biased factor endowments and industrialisation in eighteenth century England

by Joel Mokyr (Northwestern University), Assaf Sarid (Haifa University), Karine van der Beek (Ben-Gurion University)

Shorrocks Lancashire Loom with a weft stop, Museum of Science and Industry, Manchester. Available at Wikimedia Commons.

The main manifestation of an industrial revolution taking place in Britain in the second half of the eighteenth century was the shift of textile production (that is, the spinning process) from a cottage-based manual system to a factory-based, capital-intensive system, with machinery driven by waterpower and later on by steam.

The initial shift in production technology in the 1740s took place in all the main textile centres (the Cotswolds, East Anglia, and the middle Pennines in Lancashire and the West Riding). But towards the end of the century, as the intensity of production and the application of Watt's steam engine increased, the supremacy of the cotton industry of the northwestern parts of the country began to show, and this is where the industrial revolution eventually took hold and persisted.

Our research examines the role of factor endowments in determining where new technologies were adopted in the English textile industry, and the persistence of those locations since the Middle Ages. In line with recent research on economic growth, which emphasises the role of factor endowments in long-run economic development, we claim that the geographical and institutional environment determined where watermill technology was adopted in the production of foodstuffs.

In turn, the adoption of the watermill for grain grinding (around the tenth and eleventh centuries) affected the area's path of development by determining the specialisation and skills that evolved, and as a result, its suitability for the adoption of new textile technologies: textile fulling (thirteenth and fourteenth centuries) and, later on, spinning (eighteenth century).

The explanation for this path dependence is that all these machines, including machinery developed for various other production processes (sawing mills, forge mills, paper mills, and so on), were based on the same mechanical principles as the grinding watermills. Their implementation therefore required no additional resources or skills, and it was more profitable to invest in them and expand textile production in places that were specialised and experienced in the construction and maintenance of grinding watermills.

As textile exports (both woollen and cotton) expanded in the second half of the eighteenth century, Watt's steam engine was introduced. The watermills that drove the newly introduced spinning machinery began to be replaced by the more efficient steam engines, and had almost disappeared by the beginning of the nineteenth century. This stage of technological change took place in Lancashire's textile centre, which enjoyed proximity to coal as well as strong water flows, and was therefore suitable for the implementation of steam engine technology.

We use information from a variety of sources, including the Apprenticeship Stamp-Tax Records (eighteenth century), Domesday Book (eleventh century) and geographical databases, and show that the important English textile centres of the eighteenth century evolved in places that had more grinding watermills at the time of the Domesday Survey (1086).

To be more precise, we find that on average, there was an additional textile merchant in 1710 in areas that had three more watermills in 1086. The magnitude of this effect is important given that there were on average 1.2 textile cloth merchants in an area (the maximum was 34 merchants).
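
To see the scale of that effect, here is a back-of-the-envelope reading in Python, using only the figures quoted above rather than the paper's actual regression:

```python
# One extra merchant in 1710 per three extra Domesday watermills
# implies a slope of about one third of a merchant per watermill.
slope = 1 / 3              # merchants (1710) per watermill (1086)
mean_merchants = 1.2       # average merchants per area, as quoted
extra_mills = 3

effect = slope * extra_mills
print(f"+{effect:.0f} merchant = {effect / mean_merchants:.0%} of the mean")
# -> +1 merchant = 83% of the mean: a large effect for an average area
```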

We also find that textile centres in these areas persisted well into the eighteenth century and specialised in skilled mechanical human capital (measured by the number of apprentices to masters specialising in watermill technology, that is, wrights, in the eighteenth century), which was essential for the development, implementation and maintenance of waterpower as well as mechanical machinery.

The number of such workers increased from the 1750s in all the main textile centres until the 1780s, when it began to decline in Lancashire as the county adopted a new technology that was no longer dependent on their skills.

Revisiting the changing body

by Bernard Harris (University of Strathclyde)

The Society has arranged with CUP that a 20% discount is available on this book, valid until the 11th November 2018. The discount page is: www.cambridge.org/wm-ecommerce-web/academic/landingPage/EHS20

The last century has witnessed unprecedented improvements in survivorship and life expectancy. In the United Kingdom alone, infant mortality fell from over 150 deaths per thousand births at the start of the last century to 3.9 deaths per thousand births in 2014 (see the Office for National Statistics for further details). Average life expectancy at birth increased from 46.3 to 81.4 years over the same period (see the Human Mortality Database). These changes reflect fundamental improvements in diet and nutrition and in environmental conditions.

The changing body: health, nutrition and human development in the western world since 1700 attempted to understand some of the underlying causes of these changes. It drew on a wide range of archival and other sources covering not only mortality but also height, weight and morbidity. One of our central themes was the extent to which long-term improvements in adult health reflected the beneficial effect of improvements in earlier life.

The changing body also outlined a very broad schema of ‘technophysio evolution’ to capture the intergenerational effects of investments in early life. This is represented in a very simple way in Figure 1. The Figure tries to show how improvements in the nutritional status of one generation increase its capacity to invest in the health and nutritional status of the next generation, and so on ‘ad infinitum’ (Floud et al. 2011: 4).

Figure 1. Technophysio evolution: a schema. Source: Floud et al. 2011: 3-4.

We also looked at some of the underlying reasons for these changes, including the role of diet and ‘nutrition’. As part of this process, we included new estimates of the number of calories which could be derived from the amount of food available for human consumption in the United Kingdom between circa 1700 and 1913. However, our estimates contrasted sharply with others published at the same time (Muldrew 2011) and were challenged by a number of other authors subsequently. Broadberry et al. (2015) thought that our original estimates were too high, whereas both Kelly and Ó Gráda (2013) and Meredith and Oxley (2014) regarded them as too low.

Given the importance of these issues, we revisited our original calculations in 2015. We corrected an error in the original figures, used Overton and Campbell's (1996) data on extraction rates to recalculate the number of calories, and included new information on the importation of food from Ireland to other parts of what became the UK. Our revised Estimate A suggested that the number of calories rose by just under 115 calories per head per day between 1700 and 1750 and by more than 230 calories between 1750 and 1800, with little change between 1800 and 1850. Our revised Estimate B suggested that there was a much bigger increase during the first half of the eighteenth century, followed by a small decline between 1750 and 1800 and a bigger increase between 1800 and 1850 (see Figure 2). However, both sets of figures were still well below the estimates prepared by Kelly and Ó Gráda, Meredith and Oxley, and Muldrew for the years before 1800.

Figure 2. Source: Harris et al. 2015: 160.

These calculations have important implications for a number of recent debates in British economic and social history (Allen 2005, 2009). Our data do not necessarily resolve the debate over whether Britons were better fed than people in other countries, although they do compare quite favourably with relevant French estimates (see Floud et al. 2011: 55). However, they do suggest that a significant proportion of the eighteenth-century population was likely to have been underfed.

Our data also raise some important questions about the relationship between nutrition and mortality. Our revised Estimate A suggests that food availability rose slowly between 1700 and 1750 and then more rapidly between 1750 and 1800, before levelling off between 1800 and 1850. These figures are still broadly consistent with Wrigley et al.'s (1997) estimates of the main trends in life expectancy and our own figures for average stature. However, it is not enough simply to focus on averages; we also need to take account of possible changes in the distribution of foodstuffs within households and the population more generally (Harris 2015). Moreover, it is probably a mistake to examine the impact of diet and nutrition independently of other factors.

To contact the author: bernard.harris@strath.ac.uk

References

Allen, R. (2005), ‘English and Welsh agriculture, 1300-1850: outputs, inputs and income’. URL: https://www.nuffield.ox.ac.uk/media/2161/allen-eandw.pdf.

Allen, R. (2009), The British industrial revolution in global perspective, Cambridge: Cambridge University Press.

Broadberry, S., Campbell, B., Klein, A., Overton, M. and Van Leeuwen, B. (2015), British economic growth, 1270-1870, Cambridge: Cambridge University Press.

Floud, R., Fogel, R., Harris, B. and Hong, S.C. (2011), The changing body: health, nutrition and human development in the western world since 1700, Cambridge: Cambridge University Press.

Harris, B. (2015), ‘Food supply, health and economic development in England and Wales during the eighteenth and nineteenth centuries’, Scientia Danica, Series H, Humanistica, 4 (7), 139-52.

Harris, B., Floud, R. and Hong, S.C. (2015), ‘How many calories? Food availability in England and Wales in the eighteenth and nineteenth centuries’, Research in Economic History, 31, 111-91.

Kelly, M. and Ó Gráda, C. (2013), ‘Numerare est errare: agricultural output and food supply in England before and during the industrial revolution’, Journal of Economic History, 73 (4), 1132-63.

Meredith, D. and Oxley, D. (2014), ‘Food and fodder: feeding England, 1700-1900’, Past and Present, 222, 163-214.

Muldrew, C. (2011), Food, energy and the creation of industriousness: work and material culture in agrarian England, 1550-1780, Cambridge: Cambridge University Press.

Overton, M. and Campbell, B. (1996), 'Production et productivité dans l'agriculture anglaise, 1086-1871', Histoire et Mesure, 11 (3-4), 255-97.

Wrigley, E.A., Davies, R., Oeppen, J. and Schofield, R. (1997), English population history from family reconstitution, Cambridge: Cambridge University Press.

Surprisingly gentle confinement

Tim Leunig (LSE), Jelle van Lottum (Huygens Institute) and Bo Poulsen (Aalborg University) have been investigating the treatment of prisoners of war in the Napoleonic Wars.

 

Napoleonic prisoner of war. Available at <https://blog.findmypast.com.au/explore-our-fascinating-new-napoleonic-prisoner-of-war-records-1406376311.html>

For most of history, life as a prisoner of war was nasty, brutish and short. There were no regulations on the treatment of prisoners until the 1899 Hague Convention and the later Geneva Conventions. Many prisoners were killed immediately; others were enslaved to work in mines and other undesirable places.

The poor treatment of prisoners of war was partly intentional – they were the hated enemy, after all. And partly it was economic. It costs money to feed and shelter prisoners. Countries in the past – especially in times of war and conflict – were much poorer than today.

Nineteenth-century prisoner death rates were horrific. Between one half and six sevenths of the 17,000 of Napoleon's troops who surrendered to the Spanish in 1808 after the Battle of Bailén died as prisoners of war. The American Civil War saw death rates rise to 27%, even though the average prisoner was captive for less than a year.

The Napoleonic Wars saw the British capture 7,000 Danish and Norwegian sailors, military and merchant. Britain did not desire war with Denmark (which ruled Norway at the time), but went to war to prevent Napoleon seizing the Danish fleet. Prisoners were incarcerated on old, unseaworthy 'prison hulks', moored in the Thames Estuary near Rochester. Conditions were crowded: each man was given just 2 feet (60 cm) in width to hang his hammock.

Were these prison hulks floating tombs, as some contemporaries claimed? Our research shows otherwise. The Admiralty kept exemplary records, now held in the National Archive in Kew. These show the date of arrival in prison, and the date of release, exchange, escape – or death. They also tell us the age of the prisoner, where they came from, the type of ship they served on, and whether they were an officer, craftsman, or regular sailor. We can use these records to look at how many died, and why.
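
Given records of that shape, the basic tabulations are straightforward. Below is a minimal sketch in Python/pandas of how such records might be analysed; the rows are invented purely to show the structure, not real prisoners.

```python
import pandas as pd

# Toy records mirroring the Admiralty registers: date in, date out,
# outcome, and ship type. The rows are invented for illustration only.
records = pd.DataFrame({
    "entered": pd.to_datetime(["1807-11-02", "1808-03-15", "1809-06-01", "1810-01-20"]),
    "left":    pd.to_datetime(["1810-05-10", "1808-09-01", "1814-04-30", "1811-02-14"]),
    "fate":    ["released", "exchanged", "died", "released"],
    "ship":    ["merchant", "military", "privateer", "merchant"],
})

# Duration held, in months, and crude death rates overall and by ship type.
records["months_held"] = (records["left"] - records["entered"]).dt.days / 30.44
print(f"crude death rate: {(records['fate'] == 'died').mean():.1%}")
print(records.groupby("ship")["fate"].agg(lambda s: (s == "died").mean()))
```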

The prisoners ranged in age from 8 to 80, with half aged 22 to 35. The majority sailed on merchant vessels, with a sixth on military vessels and a quarter on licensed pirate boats permitted to harass British shipping. The amount of time in prison varied dramatically, from 3 days to over 7 years, with an average of 31 months. About two thirds were released before the end of the war.

Taken as a whole, 5% of prisoners died. This is a remarkably low number, given how long they were held, and given experience elsewhere in the nineteenth century. Being held prisoner for longer increased your chance of dying, but not by much: those who spent three years on a prison hulk had only a 1% greater chance of dying than those who served just one year.

Death was (almost) random. Being captured at the start of the war was neither better nor worse than being captured at the end. The number of prisoners held at any one time did not increase the death rate. The old were no more likely to die than the young – anyone fit enough to go to sea was fit enough to withstand the rigours of prison life. Despite extra space and better rations, officers were no less likely to die, implying that conditions were reasonable for common sailors.

There is only one exception: sailors from licensed pirate boats were twice as likely to die as merchant or naval sailors. We cannot know the reason. Perhaps they were treated less well by their guards, or by other prisoners. Perhaps they were risk takers, who gambled away their rations. Even for this group, however, death rates were very low compared with those captured in other places and in other wars.

The British had rules on prisoners of war, covering food and hygiene. Each prisoner was entitled to 2.5 lbs (~1 kg) of beef, 1 lb of fish, 10.5 lbs of bread, 2 lbs of potatoes, 2.5 lbs of cabbage, and 14 pints (8 litres) of (very weak) beer a week. This is not far short of Danish naval rations, and prisoners are less active than sailors. We cannot be sure that they received their rations in full every week, but the death rates suggest that they were not hungry in any systematic way. The absence of epidemics suggests that hygiene was also good. Remarkably, and despite a national debt that peaked at a still unprecedented 250% of GDP, the British appear to have obeyed their own rules on how to treat prisoners.

Far from being floating tombs, therefore, this was a surprisingly gentle confinement for the Danish and Norwegian sailors captured by the British in the Napoleonic Wars.

BAD LOCATIONS: Many French towns have been trapped in obsolete places for centuries

John Speed (1610), seventeenth-century map of Beaumaris. Available on Wikimedia Commons.

Only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. That is one of the findings of research by Guy Michaels (London School of Economics) and Ferdinand Rauch (University of Oxford), which uses the contrasting experiences of British and French cities after the fall of the Roman Empire as a natural experiment to explore the impact of history on economic geography – and what leads cities to get stuck in undesirable locations, a big issue for modern urban planners.

The study, published in the February 2018 issue of the Economic Journal, notes that in France, post-Roman urban life became a shadow of its former self, but in Britain it completely disappeared. As a result, medieval towns in France were much more likely to be located near Roman towns than their British counterparts. But many of these places were obsolete because the best locations in Roman times weren’t the same as in the Middle Ages, when access to water transport was key.

The world is rapidly urbanising, but some of its growing cities seem to be misplaced. Their locations are hampered by poor access to world markets, shortages of water or vulnerability to flooding, earthquakes, volcanoes and other natural disasters. This outcome – cities stuck in the wrong places – has potentially dire economic and social consequences.

When thinking about policy responses, it is worth looking at the past to see how historical events can leave cities trapped in locations that are far from ideal. The new study does that by comparing the evolution of two initially similar urban networks following a historical calamity that wiped out one, while leaving the other largely intact.

The setting for the analysis of urban persistence is north-western Europe, where the authors trace the effects of the collapse of the Western Roman Empire more than 1,500 years ago through to the present day. Around the dawn of the first millennium, Rome conquered, and subsequently urbanised, areas including those that make up present day France and Britain (as far north as Hadrian’s Wall). Under the Romans, towns in the two places developed similarly in terms of their institutions, organisation and size.

But around the middle of the fourth century, their fates diverged. Roman Britain suffered invasions, usurpations and reprisals against its elite. Around 410 CE, when Rome itself was first sacked, Roman Britain's last remaining legions, which had maintained order and security, departed permanently. Consequently, Roman Britain's political, social and economic order collapsed. Between 450 CE and 600 CE, its towns no longer functioned.

Although some Roman towns in France also suffered when the Western Roman Empire fell, many of them survived and were taken over by Franks. So while the urban network in Britain effectively ended with the fall of the Western Roman Empire, there was much more urban continuity in France.

The divergent paths of these two urban networks make it possible to study the spatial consequences of the 'resetting' of an urban network, as towns across Western Europe re-emerged and grew during the Middle Ages. During the High Middle Ages, both Britain and France were again ruled by a common elite (Norman rather than Roman) and had access to similar production technologies. Both features make it possible to compare the effects of the collapse of the Roman Empire on the evolution of town locations.

Following the asymmetric calamity and subsequent re-emergence of towns in Britain and France, one of three scenarios can be imagined:

  • First, if locational fundamentals, such as coastlines, mountains and rivers, consistently favour a fixed set of places, then those locations would be home to both surviving and re-emerging towns. In this case, there would be high persistence of locations from the Roman era onwards in both British and French urban networks.
  • Second, if locational fundamentals or their value change over time (for example, if coastal access becomes more important) and if these fundamentals affect productivity more than the concentration of human activity, then both urban networks would similarly shift towards locations with improved fundamentals. In this case, there would be less persistence of locations in both British and French urban networks relative to the Roman era.
  • Third, if locational fundamentals or their value change, but these fundamentals affect productivity less than the concentration of human activity, then there would be ‘path-dependence’ in the location of towns. The British urban network, which was reset, would shift away from Roman-era locations towards places that are more suited to the changing economic conditions. But French towns would tend to remain in their original Roman locations.

The authors’ empirical investigation finds support for the third scenario, where town locations are path-dependent. Medieval towns in France were much more likely to be located near Roman towns than their British counterparts.

These differences in urban persistence are still visible today; for example, only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. This finding suggests that the urban network in Britain shifted towards newly advantageous locations between the Roman and medieval eras, while towns in France remained in locations that may have become obsolete.

But did it really matter for future economic development that medieval French towns remained in Roman-era locations? To shed light on this question, the researchers focus on a particular dimension of each town’s location: its accessibility to transport networks.

During Roman times, roads connected major towns, facilitating movements of the occupying army. But during the Middle Ages, technical improvements in water transport made coastal access more important. This technological change meant that having coastal access mattered more for medieval towns in Britain and France than for Roman ones.

The study finds that during the Middle Ages, towns in Britain were roughly two and a half times more likely to have coastal access – either directly or via a navigable river – than during the Roman era. In contrast, in France, there was little change in the urban network’s coastal access.

The researchers also show that having coastal access did matter for towns’ subsequent population growth, which is a key indicator of their economic viability. Specifically, they find that towns with coastal access grew faster between 1200 and 1700, and for towns with poor coastal access, access to canals was associated with faster population growth. The investments in the costly building and maintenance of these canals provide further evidence of the value of access to water transport networks.

The conclusion is that many French towns were stuck in the wrong places for centuries, since their locations were designed for the demands of Roman times and not those of the Middle Ages. They could not take full advantage of the improved transport technologies because they had poor coastal access.

Taken together, these findings show that urban networks may reconfigure around locational fundamentals that become more valuable over time. But this reconfiguration is not inevitable, and towns and cities may remain trapped in bad locations over many centuries and even millennia. This spatial misallocation of economic activity over hundreds of years has almost certainly induced considerable economic costs.

'Our findings suggest lessons for today's policy-makers', the authors conclude. 'The conclusion that cities may be misplaced still matters as the world's population becomes ever more concentrated in urban areas. For example, parts of Africa, including some of its cities, are hampered by poor access to world markets due to their landlocked position and poor land transport infrastructure. Our research suggests that path-dependence in city locations can still have significant costs.'

'Resetting the Urban Network: 117-2012' by Guy Michaels and Ferdinand Rauch was published in the February 2018 issue of the Economic Journal.

To contact the authors:
Guy Michaels (G.Michaels@lse.ac.uk)
Ferdinand Rauch (ferdinand.rauch@economics.ox.ac.uk)

THE FINANCIAL POWER OF THE POWERLESS: Evidence from Ottoman Istanbul on socio-economic status, legal protection and the cost of borrowing

In Ottoman Istanbul, privileged groups such as men, Muslims and other elites paid more for credit than the under-privileged – the exact opposite of what happens in a modern economy.

New research by Professors Timur Kuran (Duke University) and Jared Rubin (Chapman University), published in the March 2018 issue of the Economic Journal, explains why: a key influence on the cost of borrowing is the rule of law and in particular the extent to which courts will enforce a credit contract.

In pre-modern Turkey, it was the wealthy who could benefit from judicial bias to evade their creditors – and who, because of this default risk, faced higher interest rates on loans. Nowadays, it is under-privileged people who face higher borrowing costs because there are various institutions through which they can escape loan repayment, including bankruptcy options and organisations that will defend poor defaulters as victims of exploitation.

In the modern world, we take it for granted that the under-privileged incur higher borrowing costs than the upper socio-economic classes. Indeed, Americans in the bottom quartile of the US income distribution usually borrow through pawnshops and payday lenders at rates of around 450% per annum, while those in the top quartile take out short-term loans through credit cards at 13-16%. Unlike the under-privileged, the wealthy also have access to long-term credit through home equity loans at rates of around 4%.

The logic connecting socio-economic status to borrowing costs will seem obvious to anyone familiar with basic economics: the higher costs of the poor reflect higher default risk, for which the lender must be compensated.

The new study sets out to test whether the classic negative correlation between socio-economic status and borrowing cost holds in a pre-modern setting outside the industrialised West. To this end, the authors built a data set of private loans issued in Ottoman Istanbul during the period from 1602 to 1799.

These data reveal the exact opposite of what happens in a modern economy: the privileged paid more for credit than the under-privileged. In a society where the average real interest rate was around 19%, men paid an interest surcharge of around 3.4 percentage points; Muslims paid a surcharge of 1.9 percentage points; and elites paid a surcharge of about 2.3 percentage points (see Figure 1).

Figure 1: interest surcharges paid by men, Muslims and elites.
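
To get a feel for the magnitudes, the surcharges can be read as simple additions to the average rate. This is a deliberate simplification – the paper's estimates come from regressions, and the surcharges are not strictly additive – but it illustrates the reversal:

```python
base_rate = 19.0    # average real interest rate, per cent
surcharge = {"male": 3.4, "muslim": 1.9, "elite": 2.3}   # percentage points

def implied_rate(*traits):
    """Stack the headline surcharges on the average rate (illustrative only)."""
    return base_rate + sum(surcharge[t] for t in traits)

print(implied_rate("male"))                     # 22.4
print(implied_rate("male", "muslim", "elite"))  # 26.6: the most privileged profile
```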

What might explain this reversal of relative borrowing costs? Why did socially advantaged groups pay more for credit, not less?

The data led the authors to consider a second factor contributing to the price of credit, often taken for granted: the partiality of the law. Implicit in the logic that explains relative credit costs in modern lending markets is that financial contracts are enforceable impartially when the borrower is able to pay. Thus, the rich pay less for credit because they are relatively unlikely to default and because, if they do, lenders can force repayment through courts whose verdicts are more or less impartial.

But in settings where the courts are biased in favour of the wealthy, creditors will expect compensation for the risk of being unable to obtain restitution. The wealth and judicial partiality effects thus work against each other. The former lowers the credit cost for the rich; the latter raises it.

Islamic Ottoman courts served all Ottoman subjects through procedures that were manifestly biased in favour of clearly defined groups. These courts gave Muslims rights that they denied to Christians and Jews. They privileged men over women.

Moreover, because the courts lacked independence from the state, Ottoman subjects connected to the sultan enjoyed favourable treatment. Theory developed in the new study explains why the weak legal power of the under-privileged may have translated into strong financial power.

More generally, this research suggests that in a free financial market, any hindrance to the enforcement of a credit contract will raise the borrower’s credit cost. Just as judicial biases in favour of the wealthy raise their interest rates on loans, institutions that allow the poor to escape loan repayment – bankruptcy options, shielding of assets from creditors, organisations that defend poor defaulters as victims of exploitation – raise interest rates charged to the poor.

Today, wealth and credit cost are negatively correlated for multiple reasons. The rich benefit both from a higher capacity to post collateral and from better enforcement of their credit obligations relative to those of the poor.

 

To contact the authors:
Timur Kuran (t.kuran@duke.edu); Jared Rubin (jrubin@chapman.edu)

Medieval origins of Spain’s economic geography

The frontier of medieval warfare between Christian and Muslim armies in southern Spain provides a surprisingly powerful explanation of current low-density settlement patterns in those regions. This is the central finding of research by Daniel Oto-Peralías (University of St Andrews), recently presented at the Royal Economic Society's annual conference in March 2018.

His study notes that southern Spain is one of the most deserted areas in Europe in terms of population density, surpassed only by parts of Iceland and northern Scandinavia. It turns out that this outcome has roots going back to medieval times, when Spain's southern plateau was a battlefield between Christian and Muslim armies.

The study documents that Spain stands out in Europe with an anomalous settlement pattern characterised by a very low density in its southern half. Among the ten European regions with the lowest settlement density, six are from southern Spain (while the other four are from Iceland, Norway, Sweden and Finland).


On average, only 29.8% of 10 km² grid cells are inhabited in southern Spain, a much lower percentage than in the rest of Europe (where the average is 74.4%). Extreme geographical and climatic conditions do not seem to be the reason for this low settlement density, which the author refers to as the 'Spanish anomaly'.

After ruling out geography as the main explanatory factor for the ‘Spanish anomaly’, the research investigates its historical roots by focusing on the Middle Ages, when the territory was retaken by the Christian kingdoms from Muslim rule.

The hypothesis is that the region’s character as a militarily insecure frontier conditioned the colonisation of the territory, which is tested by taking advantage of the geographical discontinuity in military insecurity created by the Tagus River in central Spain. Historical ‘accidents’ made the colonisation of the area south of the Tagus River very different from colonisation north of it.

The invasions of North Africa’s Almoravid and Almohad empires converted the territory south of the Tagus into a battlefield for a century and a half, this river being a natural defensive border. Continuous warfare and insecurity heavily conditioned the nature of the colonisation process in this frontier region, which was characterised by the leading role of the military orders as agents of colonisation, scarcity of population and a livestock-oriented economy. It resulted in the prominence of castles and the absence of villages, and consequently, a spatial distribution of the population characterised by a very low density of settlements.

The empirical analysis reveals a large difference in settlement density across the River Tagus, whereas there are no differences in geographical and climatic variables across it. In addition, the discontinuity in settlement density already existed in the 16th and 18th centuries, and is therefore not the result of recent migration movements and urban development. Preliminary evidence also indicates that the territory exposed to the medieval ranching frontier is relatively poorer today.

Thus, the study shows that historical frontiers can decisively shape the economic geography of countries. Using medieval Spain as a case study, it illustrates how exposure to warfare and insecurity – typical of medieval frontiers – created incentives for a militarised colonisation based on a few fortified settlements and a livestock-oriented economy, conditioning the occupation of a territory to such an extent as to convert it into one of the most deserted areas in Europe. Given the ubiquity of frontiers in history, the mechanisms underlined in the analysis are of general interest and may operate in other contexts.

EHS 2018 special: Foreign sailors in Nelson’s Navy: a forgotten story

by Sara Caputo (University of Cambridge) 

 

Nelson as a Midshipman, 1775. Available at <http://www.admiralnelson.info/Timeline.htm>

Few aspects of British history have attracted more patriotic enthusiasm than the nation's naval exploits at the time of Nelson and Trafalgar. A lesser-known fact is that during the Revolutionary and Napoleonic Wars against France (1793-1815), the Royal Navy recruited thousands of foreign sailors.

My doctoral research, co-funded by the Arts and Humanities Research Council and Robinson College, Cambridge, aims to reconstruct these men’s experiences for the first time, as well as giving an indication of the size of the phenomenon.

A quantitative study conducted on a sample of crews, chosen among those serving the furthest away from Britain – and thus most likely to include foreigners – revealed that 14.03% of the seamen sampled (616 out of 4,392) were born outside Britain or Ireland. Aboard one of the ships stationed in Jamaica in 1813, the proportion rose to 22.83%.

These sailors came from every corner of the world, and their numbers oscillated depending on the British state’s need for skilled seafarers in times of crisis. But their presence is often forgotten in favour of nationalistic narratives of British glory. Quantitative analysis of this kind helps to confirm that the British Navy of the Age of Sail, of Nelson and Trafalgar, was far from being manned only by ‘True Britons’. If Britannia ruled the waves, it was not always entirely by her own devices.

Americans were the largest group found in the sample (176 men), followed by natives of what today is Germany, West Indians, Swedes, Danes and Norwegians, Dutchmen, Portuguese and East Indians. Italians, Frenchmen (even though they were nominally the enemy), Africans and Spaniards were also well represented, and other smaller groups included Poles, South Americans, Russians, Maltese and Finns, one Greek and even – quite surprisingly – a Swiss, an Austrian, a Hungarian and a Chinese sailor.

Previous studies have analysed the composition of crews in the eighteenth-century Navy, but because none focused specifically on foreigners, their samples were chosen and interrogated in different ways. My research aims to cast light on changes over the whole time span of these wars, and across different geographical stations.

Three ships were chosen from each of three points in time – roughly the beginning, middle and end of the wars. The results show that the proportion of foreigners was lower in 1793, at the start of the conflict, with only 6.24% of the men in the sample coming from abroad, but went up to 14.94% in 1802, halfway through the war, and 18.49% by 1813, towards the end of it.

This is likely to be a symptom of the Navy's increasing hunger for manpower as the war progressed, casualties mounted and the British reserves of seamen became depleted.

As is often the case when dealing with matters of national belonging, the status of many of the men in the sample is potentially ambiguous: legal distinctions between ‘British’ and ‘foreign’ were complex and far from clear-cut, depending on ideas of birthplace and ‘blood’, but also on cultural aspects such as personal choice, length of service, political loyalties, social status and general usefulness to the country.

Although the British armed forces today employ only UK or Irish nationals, or Commonwealth nationals with settled status, this was not always the case: 200 years ago, men we would nowadays define as foreigners were actively sought and recruited by the British monarchy, and played an important role in British society and the economy at large, as well as in the construction of an overseas empire.

Modelling regional imbalances in English plebeian migration

by Adam Crymble (University of Hertfordshire)

 

John Thomas Smith, Vagabondiana, 1817

We often hear complaints of migrant groups negatively influencing British life. Grievances against them are many: migrants bring with them their language, cultural values, and sometimes a tendency to stick together rather than integrate. The story is never that simple, but these issues can get under the skin of the locals, leading to tension. Britain has always been home to migrants, and the tensions are nothing new, but two hundred years ago those outsiders came from much closer to home. Often they came from just down the road, as close as the next parish over. And yet they were still treated as outsiders by the law. Under the vagrancy laws, poor migrants in particular ran the risk of being arrested, whipped, put to hard labour, and expelled back home.

It was a way to make sure that welfare was only spent on local people. But thanks to this system, we’ve got a unique way to tell which parts of Britain were particularly connected to one another, and which bits just weren’t that interested in each other. Each of those expelled individuals left a paper trail, and that means we can calculate which areas sent more or fewer vagrants to places like London than we would expect. And that in turn tells us which parts of the country had the biggest potential to impact on the culture, life, and economy of the capital.

As it happens, Bristol sent more paupers to London than anywhere else in England between 1777 and 1786: at least 312 individuals. They did not arrive through any plan to overwhelm the metropolis, but through hundreds of individual decisions by Bristolians who thought they'd have a go at London life.

From a migration perspective, this tells us that the connectedness between London and Bristol was particularly strong at this time. Even when we correct for factors such as distance, cost of living, and population, Bristol was still substantially over-sending lower-class migrants to the capital.
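
What 'over-sending' means here can be illustrated with a toy gravity-style benchmark, in which expected flows rise with origin population and fall with distance. Every figure below except Bristol's 312 paupers is an illustrative placeholder – the constant, exponent, population and distance are assumptions, not numbers from the study:

```python
# Toy gravity benchmark: expected migrants ~ k * population / distance^beta.
def expected_migrants(population, distance_km, k=0.5, beta=1.0):
    return k * population / distance_km ** beta

bristol_observed = 312    # paupers removed from London, 1777-86 (from the text)
expected = expected_migrants(population=55_000, distance_km=190)  # placeholder inputs

# A ratio above 1 marks a town as "over-sending" relative to the benchmark.
print(f"observed/expected = {bristol_observed / expected:.2f}")
```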

There are many possible explanations for this close connection. The tendency for migrants to move towards larger urban centres meant that Bristolians seeking a 'bigger' destination had few options other than London. Improvements to the road network also meant the trip was both cheaper and more comfortable by the 1780s. And the beginning of a general decline in Bristol's domestic service economy was met with a rise in opportunities in the growing metropolis. These combined factors may have made the connections between London and Bristol particularly strong.

Other urban pockets of the country showed a similarly strong connection to London, particularly in the West Midlands and West Country. Birmingham, Coventry, Worcester, Bath, Exeter and Gloucester were all sending peculiarly high numbers of paupers to eighteenth-century London. So too were Newcastle-upon-Tyne and Berwick-upon-Tweed, despite being located far to the north and the trip almost certainly requiring a sea journey.

But not everywhere saw London as a draw. Yorkshire, Lincolnshire, Derbyshire and Cheshire – a band of counties within walking distance of the sprouting mills of the industrialising North – all sent fewer people to London than we would expect. This suggests that the North was able to retain people, uniquely acting as a competitor to London at this time. It also means that places like Bristol and Newcastle-upon-Tyne may have had a bigger impact on the culture of the metropolis in the eighteenth century than places such as York and Sheffield. And that may have had a lasting impact that we do not yet fully understand.

Each of these migrants brought with them remnants of their local culture and belief systems – recipes, phrases and mannerisms, as well as connections to people back home – which may mean that the London of today is a bit more like Bristol or Newcastle than it might otherwise have been. There is more research to be done, but with a clear map of how London was and was not connected to the rest of the country, we can now turn towards understanding how those connections sculpted the country.

To contact the author on Twitter: @adam_crymble

Engineering the industrial revolution (1770-1850)

by Gillian Cookson (University of Leeds)

The Age of Machinery: Engineering the Industrial Revolution, 1770-1850, is published in February by Boydell Press for the Economic History Society’s series ‘People, Markets, Goods’.

SAVE 25% when you order direct from the publisher. Discount applies to print and eBook editions. Click the link, add to basket and enter offer code BB500 in the box at the checkout. Alternatively call Boydell’s distributor, Wiley, on 01243 843 291 and quote the same code. Offer ends on the 19th of March. Any queries please email marketing@boydell.co.uk

Early machine-makers have always seemed tantalisingly out of reach. This was a localised, workshop-based trade whose products, methods, markets, skill-sets and industrial structure remained ill-defined. Yet out of it, somehow, was created the machinery – especially textile machines and steam engines – fundamental to industrial change in the eighteenth century. There are questions of great significance still unanswered: how could a high-tech mechanical engineering industry have emerged from the rudimentary resources of a few localities in northern England? What can be known of the backgrounds and careers of these pioneering mechanical engineers? How did they develop the skills, knowledge and systems to achieve their ends?

As a research topic this was clearly a winner. But what is the historian to do when faced with such a dearth of substantial sources? Herein lies the explanation of why the subject has not hitherto been addressed: evidence of early engineering was seriously lacking, and business records almost entirely absent. It turned out, though, that the industry was hiding in plain sight. We'd been looking in the wrong places.

An early breakthrough came in the Hattersley of Keighley papers. Enough of Richard Hattersley's early accounts and day books have survived, the first from 1793, to demonstrate a thriving pre-factory industry with Hattersley at its hub. He engaged a wider community in specialist component manufacture, using sub-contracting and various other flexible working practices as circumstances demanded. Hattersley's company did not itself build machinery at that time, but he fed those who did with the precision components vital to making workable machines. The earliest production systems rested on networking, and can be most neatly described as a dispersed factory.[1]

It wasn't that archives had gone missing (though one or two are known to have been lost), but that businesses were so small in scale that by and large they never generated any great weight of documentation. It was community-based sources – directories, muster rolls, parish registers, rate books, the West Riding deeds registry, and a painstaking assemblage of all kinds of stray references – that came to the rescue. While this may not exactly be a novel approach to industrial history, it turned out to be the only realistic way into exploring these small, workshop-based ventures in close-knit communities.

Remarkably, too, it shone a light on aspects of the industry which business records alone could not have revealed. Community sources bring forward more than an account of business itself, for they set the actors upon their stage, placing engineers within their own environment. In particular, parish register searches, intended as no more than a confirmation of identities and movements, ultimately exposed remarkable connections. As short biographies were constructed, intermarriages and relationships were revealed which seem to explain career changes and migration (often from south to west Yorkshire, or from Scotland to Lancashire) that otherwise had seemed random. So this context, which proved so influential, was not confined to engineering itself, but embraced surrounding cultures that were social and familial as much as industrial and technical. Through this information, we can infer some of the motives and concerns which impacted upon business decision-making.

All this, then, is central to The Age of Machinery. For a fully rounded account, other contexts needed unpacking: Which were the seminal machines, in terms of using new materials and parts that demanded different kinds of skills? Where did technological concepts originate, and how did technology move around? Why did engineering lag a generation behind its customer industry, textiles, in moving into factories? How did bans on machinery exportation and artisan emigration impact upon textile engineering, and why were they abandoned? And in an environment generally very welcoming of innovation, how to explain Luddism?

To contact the author: g.cookson@leeds.ac.uk

REFERENCES:

[1] See Gillian Cookson (1997) ‘Family Firms and Business Networks: Textile Engineering in Yorkshire, 1780–1830’, Business History, 39:1, 1-20