Can school centralization foster human capital accumulation? A quasi-experiment from early twentieth-century Italy

By Gabriele Cappelli (University of Siena) and Michelangelo Vasta (University of Siena)

The article is available on Early View from the Economic History Review here

 

The issue of school reform is a key element of institutional change across countries. In developing economies the focus is rapidly shifting from increasing enrolments to improving educational outputs (literacy and skills) and outcomes (wages and productivity). In advanced economies, policy-makers focus on generating skills from educational inputs despite limited resources. This is unsurprising, because human capital formation is widely acknowledged as one of the main drivers of economic growth.

Within education policy, reform debates have long focused on how school systems should be organized, particularly whether their management and funding should rest with local or central government. On the one hand, local policy-makers are more aware of the needs of local communities, which is supposed to improve schooling. On the other hand, preferences over schooling might differ considerably between the central government and local ruling elites, hampering the diffusion of education. Despite its importance, the subject has received little historical research.

In this paper, we offer fresh evidence using a quasi-experiment that exploits dramatic changes in Italy’s educational institutions at the beginning of the 20th century, namely the 1911 Daneo-Credaro Reform. Under this legislation, most municipalities moved from a decentralized school system, which had been based on the 1859 Casati Law, to direct state management and funding, while other municipalities, mainly provincial and district capitals, retained their autonomy, thus forming two distinct groups (Figure 1).

The Reform’s design allows us to compare these two groups through a quasi-experiment based on an innovative technique, namely Propensity Score Matching (henceforth PSM). PSM tackles a key issue with the Reform we study: the assignment of municipalities into treatment (centralization) was not random. The municipalities that retained school autonomy were those characterized by high literacy, while the poorest and least literate municipalities were more likely to end up under state control, implying that a naive analysis of the Daneo-Credaro Reform as an experiment would tend to overestimate the impact of centralization. PSM addresses this by ‘randomizing’ the selection into treatment: a statistical model is used to estimate the probability of being selected into centralization (the propensity score) for each municipality; an algorithm then matches municipalities in the treatment group with municipalities in the control group that have an equal (or very similar) propensity score, so that the only systematic difference between them is whether they were treated. To perform PSM, we construct a novel database at the municipal level (a large sample of more than 1,000 comuni). We also fill a gap in the historiography by providing an in-depth discussion of how the Reform worked, a question that has so far been neglected.
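To make the mechanics of PSM concrete, here is a minimal sketch in Python of the matching logic described above. The input file, covariates and column names are hypothetical placeholders, not the paper’s actual data or specification.

```python
# Minimal propensity score matching sketch (illustrative only; not the paper's code).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# df: one row per municipality, with a treatment dummy (1 = centralized after 1911),
# pre-reform covariates, and the outcome (annual literacy growth, 1911-1921).
df = pd.read_csv("municipalities.csv")                       # hypothetical input file
covariates = ["literacy_1911", "population", "tax_revenue_pc"]  # assumed covariates

# 1. Estimate the propensity score: probability of being selected into centralization.
logit = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
df["pscore"] = logit.predict_proba(df[covariates])[:, 1]

# 2. Match each treated municipality to the control municipality with the nearest score.
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
_, idx = nn.kneighbors(treated[["pscore"]])
matched_control = control.iloc[idx.ravel()]

# 3. Average treatment effect on the treated: difference in mean outcomes.
att = treated["literacy_growth"].mean() - matched_control["literacy_growth"].mean()
print(f"Estimated ATT: {att:.3f} percentage points")
```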

Figure 1 – Municipalities that still retained school autonomy in Italy by 1923. Source: Ministero della Pubblica Istruzione (1923), Relazione sul numero, la distribuzione e il funzionamento delle scuole elementari. Rome. Note: both the grey and black dots represent municipalities that retained school autonomy by 1923, while the others (not shown in the map) had shifted to centralized school management and funding. 

We find that the municipalities that switched to state control enjoyed a 0.43 percentage-point premium on the average annual growth of literacy between 1911 and 1921, compared to those that retained autonomy (Table 1). The estimated coefficient means that two very similar municipalities, both with literacy rates of 60% in 1911, would show a literacy gap of roughly 3 percentage points by 1921, i.e. 72.07% (school autonomy) vs 75.17% (treated). This difference is similar to the gap between the treatment group and a counterfactual that we estimated in a robustness check based on Italian provinces (Figure 2).
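The arithmetic behind that comparison can be checked directly by compounding the two growth rates over the decade. In the sketch below the 1.85% baseline annual growth rate is an assumption chosen to reproduce the figures quoted above, not a number taken from the paper.

```python
# Back-of-the-envelope check of the compounding above (illustrative assumptions).
base = 60.0                          # literacy rate in 1911 (%)
g_control = 0.0185                   # assumed annual literacy growth, autonomy group
g_treated = g_control + 0.0043       # treated group grows 0.43 pp faster per year

lit_control = base * (1 + g_control) ** 10   # ~72.1% in 1921
lit_treated = base * (1 + g_treated) ** 10   # ~75.2% in 1921
print(round(lit_control, 2), round(lit_treated, 2), round(lit_treated - lit_control, 2))
```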

Table 1 – Estimated treatment (Daneo-Credaro Reform) effect, 1911 – 1921.
Figure 2 – Literacy rates in the treatment and control groups, 1881 – 1921, pseudo-DiD. Source: see original article

Centralization improved the overall functioning of the school system and the efficiency of school funding. First, it reduced the distance between the central government and the city councils by granting more decision-making power to the provincial schooling board under the supervision of the central government. The control exercised by the Ministry reassured teachers that their salaries would be increased, and the government could now guarantee that they would be paid regularly, which had not always been the case when the municipalities managed primary schooling. Second, additional funding was provided to build new schools. The resultant increase in funding appears to have been very large, and its impact was amplified by the full reorganization of the school system: funds could be directed to where they were most needed. Consequently, we argue, a mere increase in funding without institutional change would have been less effective in raising literacy rates.
To conclude, the 50-year persistence of decentralized primary schooling hampered the accumulation of human capital and regional convergence in basic education, thus casting a long shadow on the future pace of aggregate and regional economic growth. The centralization of primary education via the Daneo-Credaro Reform in 1911 was a major breakthrough, which fostered the spread of literacy and allowed the country to reduce the human-capital gap with the most advanced economies.

 

To contact the author: Gabriele Cappelli

Email: gabriele.cappelli@unisi.it

Twitter: gabercappe

Nineteenth century savings banks, their ledgers and depositors

by Linda Perriton (University of Stirling)

 

If you look up as you walk along the streets of British towns and cities, you will see the proud and sometimes colourful traces of nineteenth century savings banks. But evidence of the importance of savings banks to working- and middle-class savers is harder to locate in economic history research.

English and Welsh savings banks operated on a ‘savings only’ model that funded interest payments to savers by purchasing government bonds and, in doing so, placed themselves outside the history of productive financialisation (Horne, 1947). This is a matter of regret, because whatever minor role trustee savings banks played in the productive economy, there is little doubt that they helped to financialise segments of society previously detached from such activities.

Image: Author’s own. A mosaic over the door of the former Fountainbridge branch of the Edinburgh Savings Bank.

 

The research that Stuart Henderson (Ulster University) and I presented at the EHS 2019 annual conference looks in detail at the financial activity of depositors in one savings bank – the Limehouse Savings Bank, situated in the East End of London.

Savings bank ledgers are a rich source of social history data in addition to the financial, especially in socially diverse larger cities. The apostils of clerks reveal amusement at the names chosen for local clubs (for example, the Royal Order of the Jolly Cocks merits an exclamation mark) or a note as to love gone wrong (for example, a woman who returns the passbook of a lover from whom she has not heard for two years).

We also want to look beyond the aggregate deposit figures for Limehouse recorded in the government reports to discover how individuals used the bank over the period 1830-76.

As a start, we have recorded the account transactions for each of the 195 new accounts opened in 1830, from the first deposit to the last withdrawal – a total of 3,598 transactions. Using the account header information, we have also compiled the personal details of the account holder – such as gender, occupation and place of residence. We use the header profile to trace individual savers in the historical record in order to establish their age and any notable life events, such as marriage and the birth of children.

Apart from 12 accounts, which were registered to individuals who gave addresses outside the East End parishes, all the 1830 savers were registered at addresses within a four-mile by one-mile strip of urban development, which also enabled us to record the residential clustering of savers.

Summary statistics enable us to establish the differences between the categories of savers across several different indicators of transaction activity.

Perhaps unsurprisingly, the men in our 1830 sample tended to make larger deposits and larger withdrawals than the women, with the difference in magnitude masked somewhat by large transactions undertaken by widows. Widows in our sample tended to have a relatively large opening balance and a higher number of withdrawals, suggesting that their accounts functioned more as a ‘draw down’ fund (Perriton and Maltby, 2015). Men also tended to make more transactions than women.

We also see a significant portion of accounts where activity was very limited. The median number of deposits across our 195 accounts was just two, suggesting that a large proportion of accounts acted as something of a (very) temporary financial warehouse. Minors and servants tended to have smaller transactions, but appear to have accumulated more – relatively speaking – than others.

But our interest in the savers goes beyond summary statistics. We know that very few accounts were managed in the way that the sponsors of savings bank legislation intended; the low median number of deposits is testament to that.

The basic information in the ledger headers for each account provides a starting point for thinking about when in the life-cycle savings was more successful. Even with the compulsory registration of births, deaths and marriages after 1837 and census data after 1841, the ability to trace an individual saver is not guaranteed.

With so few data points, it is easy to lose individuals at the periphery of the professional and skilled working classes, even in a relatively well documented city like London. Yet the ability to build individual case studies of savers is important to our understanding of savings banks in terms of establishing who were the ‘successful’ savers, and also when – relative to the overall life-cycle of the saver – accounts were held.

Our research presents ten case study accounts from our larger sample to challenge the proposition in social history research on household finances that savings increased when teenage and young adult children were contributing wages to the household. We also look at the evidence for any savings in anticipation of significant life events such as marriage or childbirth. The evidence is weak on both counts.

The distribution of age at account opening among the ten case studies is varied: under 20 years old (3), 21-29 (2), 30-39 (2), 40-49 (0) and 50-59 (3). The three cases of accounts opened after the age of 50 relate to a widow and two married couples, who all had children aged 10-25. But the majority of the accounts we examined were opened by younger adults with young children and growing families.

There is no obvious case for suggesting that savings were possible because expenses could be offset against the wages of teenage or young adult children. Nor can we see any obvious anticipatory or responsive saving for life events in the case studies.

One of our sample account holders did open her account soon after being widowed, but another widow opened her account seven years after the death of her husband. Two men opened accounts when their children were very young, but not in anticipation of their arrival. The only evidence we have in the case studies for changed behaviour as a result of a life event is in the case of marriage – where all account activity ceased for one of our men in the first years of his union.

The mixed quantitative and biographical approach that we use in our study of the Limehouse Savings Bank points to a promising alternative direction for historical savings bank research – one that reconnects savings bank history with the wider history of retail banking and allows for a much richer interplay between social history and financial history.

By looking at the patterns of use by the Limehouse account holders, it is possible to see the ways in which working families and individuals interacted with a standard product and standard service offering, sometimes adding layers of complexity in order to create a different banking product, or using the accounts to budget within a short-term cycle rather than saving for a significant purchase or event.

 

Further reading:

Horne, HO (1947) A History of Savings Banks, Oxford University Press.

Perriton, L, and J Maltby (2015) ‘Working-class Households and Savings in England, 1850-1880’, Enterprise and Society 16(2): 413-45.

 

To contact the author: linda.perriton@stir.ac.uk

How the Bank of England managed the financial crisis of 1847

by Kilian Rieder (University of Oxford)

New Branch Bank of England, Manchester, antique print, 1847. Available at <https://www.antiquemapsandprints.com/lancs-new-branch-bank-of-england-manchester-antique-print-1847-101568-p.asp>

What drives a central bank’s decision to grant or refuse liquidity provision during a financial crisis? How does the central bank manage counterparty risk during such periods of high demand for liquidity, when time constraints make it hard to process all relevant information? How does a central bank juggle the provision of large amounts of liquidity with its monetary policy obligations?

All of these questions were live issues for the Bank of England during the financial crisis of 1847, just as they were in 2007. My research uses archival data to shed light on these questions by looking at the Bank’s discount window policies in the crisis year of 1847.

The Bank had to manage the 1847 financial crisis while being constrained by the Bank Charter Act of 1844, which required it to back any expansion of its note issue with gold. The 1847 crisis is often cited as the last episode of financial distress during which the Bank rationed central bank liquidity before fully assuming its role as a lender of last resort (Bignon et al, 2012).

We find that the Bank did not engage in any kind of simple threshold rationing but rather monitored and managed its private sector asset holdings in ways similar to those central banks have developed since the financial crisis of 2007. In another echo of the recent crisis, the Bank of England also required an indemnity from the UK government in 1847, allowing it to supply more liquidity than it was legally permitted to. This indemnity became part of the ‘reaction function’ in future financial crises.

Most importantly, the year 1847 witnessed the introduction of a sophisticated discount ledger system at the Bank. The Bank used the ledger system to record systematically its day-to-day transactions with key counterparties. Discount loan applicants submitted bills in parcels, sometimes containing a hundred or more, which the Bank would have to analyse collectively ‘on the fly’.

The Bank would reject those it didn’t like and then discount the remainder, typically charging a single interest rate. Subsequently, the parcels were ‘unpacked’ into individual bills in the separate customer ‘with and upon’ ledgers, where they were classified under the name of their discounter and acceptor alongside several other characteristics at the bill level (drawer, place of origin, maturity, amount, etc.). By analysing these bills and their characteristics we are better able to understand the Bank’s discount window policies.
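As an illustration, each ledger entry can be thought of as a small structured record per bill. The sketch below is a hypothetical representation whose fields follow the characteristics listed above; it is not a transcription of the archival layout, and the sample values are invented.

```python
# Hypothetical representation of one discounted bill as recorded in the ledgers.
from dataclasses import dataclass
from datetime import date

@dataclass
class BillRecord:
    discounter: str        # party bringing the bill to the Bank's discount window
    acceptor: str          # party liable for payment at maturity
    drawer: str            # party who drew the bill
    place_of_origin: str
    maturity: date
    amount_pounds: float
    rejected: bool         # whether the Bank refused to discount this bill

# Example entry with invented values, for illustration only.
bill = BillRecord("Broker A", "Merchant B", "Drawer C",
                  "Liverpool", date(1847, 11, 3), 850.0, False)
```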

We first find evidence that during crisis weeks the Bank was more likely to reject demands for credit from bill brokers – the money market mutual funds of their time – while favouring a small group of regular large discounters. Equally, firms associated with the commercial crisis and the corn price speculation in 1847 (many of which subsequently failed) were less likely to obtain central bank credit. The Bank was discerning about whom it lent to and the discount window was not entirely ‘frosted’ as suggested by Capie (2001).

But our findings support Capie’s main hypothesis that the decision whether to accept or reject a bill depended largely on individual bill characteristics. The Bank appeared to use a set of rules to decide on this, which it applied consistently in both crisis weeks and non-crisis weeks. Most ‘collateral characteristics’ – inter alia, the quality of the names endorsing a bill – were highly significant factors driving the Bank’s decision to reject.

This finding supports the idea that the Bank needed to be active in monitoring key counterparties in the financial system well before formal methods of supervision in the twentieth century, echoing results obtained by Flandreau and Ugolini (2011) for the later 1866 crisis.

 

Religion and development in post-Famine Ireland

by Stuart Henderson (Ulster University)
The full paper has been published in The Economic History Review and is available here

 

The role of religion in economic development has attracted increasing debate among scholars in economics, and especially economic history. This is at least partially attributable to the normalization in recent times of conversations relating to the effect of religion on social progress. This paper adds a new perspective to that debate by exploring the relationship between religion and development in Ireland between 1861 and 1911. The paper highlights a religious reversal of fortunes—a Catholic embourgeoisement—in the years following the Great Irish Famine.   

Figure 1. St Patrick’s Cathedral, Armagh (Roman Catholic). Available at <https://commons.wikimedia.org/wiki/File:Armagh,_St_Patricks_RC_cathedral.jpg>

 

Ireland is a rather curious case. Here the effect of the Protestant Reformation manifested, not through a conversionary zeal spreading the land, but rather by the movement of people across the Irish Sea. In the centuries that followed, the Protestant minority, and particularly adherents of the Anglican Church, gained economic and social supremacy. By contrast, the Roman Catholic majority were socioeconomically disadvantaged, and denied the societal privileges offered to their Protestant counterparts.

Slowly, however, the balance of power began to shift. Penal laws, which discriminated particularly against Roman Catholics, were overturned, and eventually the Roman Catholic Relief Act of 1829 marked the culmination of Catholic Emancipation.
However, the legal watershed of Catholic Emancipation did not resolve the uneven balance of economic power between Protestants and Catholics. The arrival of a National System of Education in 1834 was a marker of the amelioration of religious inequality, but arguably it was the Great Famine in the mid-nineteenth century that truly transformed the prevailing social paradigm.

The Great Famine had a disproportionate impact on Roman Catholics given their lower social status and geographic situation. While devastating, the Famine catalysed a new sense of purpose in Catholic society—peasant religion and superstition were suppressed as the Roman Catholic Church benefitted from a new religious fervour, religious personnel bolstered the provision of education, and a rationalisation of the farming family meant a population more receptive to the social control provided by the Church.

 

Figure 2. Literacy by selected denominations in 1861
Of the population 5 years old and upwards. Calculated using: Census of Ireland, 1861 (P.P. 1863, LX), p. 558.

 

However, the effects of the Famine went well beyond a religious awakening. With Catholic education in Catholic hands, the Catholic population became increasingly literate. Literacy aided occupational advancement and the diffusion of political consciousness. Moreover, with the entrenchment of barriers to Catholic progression—for example, the predominance of Protestants in banking—rising literacy likely fuelled discontent and thus nationalist sentiment.

The economic progress of Roman Catholics in the post-Famine decades is statistically examined in the paper. Put simply, the results suggest a Catholic–Protestant convergence over the decades following the Famine. Roman Catholics were rapidly closing the literacy gap and rising in occupational status as Protestant dominance receded. The paper also provides evidence suggesting that commercial activity in more Catholic-concentrated areas was catching up with that in less Catholic-concentrated areas. Indeed, the general trajectory observed is referred to as a Catholic embourgeoisement, as Catholics were becoming a more middle-class people—increasingly like their Protestant counterparts.

For Protestants, the prevailing cultural dichotomy—which had long been to their advantage—was perhaps relevant in the economic convergence of the denominations after the Famine, and indeed in ultimate independence. Societal separation meant that the Catholic majority had a religious identity around which to coalesce. Therefore, as legal barriers receded and human capital increased, Catholics began to create an institutional alternative to that provided by the “Protestant” state, with their own network of schools, banks and professionals. Moreover, such movements were likely self-reinforcing, as Catholic professionals aided a new generation to follow their ascent.

The significance of this development is considered further towards the end of the paper. Ireland’s obvious majority–minority structure is contrasted with the Netherlands where no religious majority prevailed. In the latter, this led to a society organised into distinct segments (or pillars), which coexisted in relative harmony. By contrast, in the Irish case, despite the economic convergence of the denominations, independence resulted. The movement towards independence was arguably aided by the mutually beneficial relationship between the Roman Catholic Church and nationalism—the Church, with its body of adherents, provided legitimising capital for nationalism, while nationalism espoused a vision of Ireland that was consistent with the teaching of the Church. Moreover, for individual Roman Catholics, such nationalist vision was likely attractive since it offered the opportunity for societal equality beyond simply materialistic gains—opportunity which the existing state apparatus was slow to provide.

Hence, in understanding the development of Ireland in the post-Famine era, this paper provides not only an important quantification of Catholic progress, but also widens the debate to what Amartya Sen eloquently calls ‘development as freedom’. In doing so, it emphasises the short-sightedness of a narrow materialistic view of societal development, and instead offers a more nuanced perspective on the Irish case.

 

To contact the author: s.henderson1@ulster.ac.uk

An Efficient Market? Going Public in London, 1891-1911

by Sturla Fjesme (Oslo Metropolitan University), Neal Galpin (Monash University Melbourne), Lyndon Moore (University of Melbourne)

This article is published by The Economic History Review, and it is available on the EHS website

 

Antique print of the London Stock Exchange. Available at <https://www.ashrare.com/stock_exchange_prints.html>

The British at a disadvantage?
It has been claimed that British capital markets were unwelcoming to new and technologically advanced companies in the late 1800s and early 1900s. Allegedly, markets in the U.S. and Germany were far more developed in providing capital for growing research and development (R&D) companies whereas British capital markets favored older companies in more mature industries, leaving new technology companies at a great disadvantage.
In the article An Efficient Market? Going Public in London, 1891-1911 we investigate this claim using detailed investment data on all the companies that listed publicly in the U.K. over the period 1891 to 1911. By combining company prospectuses, which provide issuer information such as industry, patenting activity, and company age, with those companies’ investors, we investigate whether certain company types were left at a disadvantage. For a total of 339 companies (out of 611 new listings) we obtain share prices, prospectuses, and detailed investor information on name and occupation.

A welcoming exchange
Contrary to prior expectations, we find that the London Stock Exchange (LSE) was very welcoming to young, technologically advanced, and foreign companies from a great variety of industries. Table 1 shows that new listings came from a wide range of industries, were often categorized as new-technology, and that almost half of the companies listed were foreign. We find that 81% of the new-technology and 84% of the old-technology firms that applied for an official quotation of their shares were accepted by the LSE listing committee. There is therefore no evidence that the LSE treated new or foreign companies differently.

Table 1. IPOs by Industry

Industry  IPOs  Old-Tech  New-Tech  Domestic  Foreign
Banks and Discount Companies 4 4 0 0 4
Breweries and Distilleries 13 13 0 12 1
Commercial, Industrial, &c. 155 137 18 125 30
Electric Lighting & Power 11 0 11 9 2
Financial Land and Investment 23 23 0 2 21
Financial Trusts 12 12 0 8 4
Insurance 7 7 0 7 0
Iron, Coal and Steel 20 20 0 20 0
Mines 8 8 0 0 8
Nitrate 3 3 0 0 3
Oil 11 11 0 0 11
Railways 10 9 1 5 5
Shipping 3 3 0 3 0
Tea, Coffee and Rubber 48 48 0 0 48
Telegraphs and Telephones 3 1 2 1 2
Tramways and Omnibus 6 0 6 5 1
Water Works 2 2 0 1 1
Total 339 301 38 198 141

Note: We group firms by industry, according to their classification by the Stock Exchange Daily Official List.

We also find that investors treated disparate companies similarly: British investors were willing to place their money in young and old, high- and low-technology, and domestic and foreign firms without charging large price discounts to do so. We do, however, find that investors who worked in the same industry or lived close to where the companies operated were able to use their superior information to obtain larger investments in well-performing companies. Together our findings suggest that the market for newly listed companies in late Victorian Britain was efficient and welcoming to new companies. We find no evidence indicating that the LSE (or its investors) withheld support from foreign, young, or new-technology companies.

 

To contact Lyndon Moore:  Lyndon.moore@unimelb.edu.au

Missing girls in 19th-century Spain

by Francisco J. Beltrán Tapia (Norwegian University of Science and Technology)

This article is published by the Economic History Review, and it is available here

Gender discrimination, in the form of sex-selective abortion, female infanticide and the mortal neglect of young girls, constitutes a pervasive feature of many contemporary developing countries, especially in South and East Asia and Africa. Son preference stemmed from economic and cultural factors that have long influenced the perceived relative value of women in these regions and resulted in millions of “missing girls”. But were there “missing girls” in historical Europe? The conventional narrative argues that there is little evidence for this kind of gender discrimination. According to this view, the European household formation system, together with prevailing ethical and religious values, limited female infanticide and the mortal neglect of young girls.

However, several studies suggest that parents treated their sons and daughters differently in 19th-century Britain and continental Europe (see, for instance, here, here or here). These authors stress that an unequal allocation of food, care and/or workload negatively affected girls’ nutritional status and morbidity, which translated into shorter statures and higher mortality rates. In order to provide more systematic historical evidence of this type of behaviour, our research (with Domingo Gallego-Martínez) relies on sex ratios at birth and at older ages. In the absence of gender discrimination, the number of boys per hundred girls in different age groups is remarkably regular, so comparing the observed figure to the expected (gender-neutral) sex ratio permits assessing the cumulative impact of gender bias in peri-natal, infant and child mortality and, consequently, the importance of potential discriminatory practices. However, although non-discriminatory sex ratios at birth revolve around 105-106 boys per hundred girls in most developed countries today, historical sex ratios cannot be compared directly to modern ones.

We have shown here that non-discriminatory infant and child sex ratios were much lower in the past. The biological survival advantage of girls was more visible in the high-mortality environments that characterised pre-industrial Europe, due to poor living conditions, lack of hygiene and the absence of public health systems. As a result, boys suffered relatively higher mortality rates both in utero and during infancy and childhood. Historical infant and child sex ratios were therefore relatively low, even in the presence of gender-discriminatory practices. This is illustrated in Figure 1 below, which plots the relationship between child sex ratios and infant mortality rates using information from seventeen European countries between 1750 and 2001. In particular, in societies where infant mortality rates were around 250 deaths (per 1,000 live births), a gender-neutral child sex ratio should have been slightly below parity (around 99.5 boys per hundred girls).
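The logic of that benchmark can be illustrated with a simple calculation. The sketch below uses purely hypothetical mortality figures, chosen only to show how a male survival disadvantage under high early-life mortality pulls the surviving sex ratio below the sex ratio at birth; it does not reproduce the paper’s estimates.

```python
# Purely illustrative: how differential early-life mortality lowers the child sex ratio.
sr_birth = 105.0            # boys per 100 girls at birth (typical modern benchmark)
mortality_girls = 0.25      # assumed cumulative early-life mortality for girls
mortality_boys = 0.29       # assumed (higher) male mortality in the same environment

boys_surviving = sr_birth * (1 - mortality_boys)
girls_surviving = 100.0 * (1 - mortality_girls)
child_sex_ratio = 100 * boys_surviving / girls_surviving
print(round(child_sex_ratio, 1))    # ~99.4 boys per 100 girls under these assumptions
```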

Figure 1. Infant mortality rates and child sex ratios in Europe, 1750-2001

 

Compared to this benchmark, infant and child sex ratios in 19th-century Spain were abnormally high (see black dots in Figure 1 above; the number refers to the year of the observation), thus suggesting that some sort of gender discrimination was unduly increasing female mortality rates at those ages. This pattern, which is not the result of under-enumeration of girls in the censuses, mostly disappeared at the turn of the 20th century. Although average sex ratios remained relatively high in nineteenth-century Spain, some regions exhibited even more extreme figures. In 1860, 54 districts (out of 471) had infant sex ratios above 115, figures that are extremely unlikely to have occurred by chance. Relying on an extremely rich dataset at the district level, our research analyses regional variation in order to examine what lies behind the unbalanced sex ratios. Our results show that the presence of wage labour opportunities for women and the prevalence of extended families in which different generations of women cohabited had beneficial effects on girls’ survival. Likewise, infant and child sex ratios were lower in dense, more urbanized areas.

This evidence thus suggests that discriminatory practices with lethal consequences for girls constituted a veiled feature of pre-industrial Spain. Excess female mortality was not necessarily the result of overt ill-treatment of young girls but could simply have reflected an unequal allocation of resources within the household, a disadvantage that probably accumulated as infants grew older. In contexts where infant and child mortality is high, slight discrimination in the way young girls were fed or treated when ill, as well as in the amount of work they were entrusted with, was likely to have resulted in more girls dying from the combined effect of undernutrition and illness. Although female infanticide or other extreme forms of mistreatment of young girls may not have been a systematic feature of historical Europe, this line of research points to more passive, but pervasive, forms of gender discrimination that also resulted in a significant fraction of missing girls.

To contact the author:

francisco.beltran.tapia@ntnu.no

Twitter: @FJBeltranTapia

Revisiting the changing body

by Bernard Harris (University of Strathclyde)

The Society has arranged with CUP that a 20% discount is available on this book, valid until the 11th November 2018. The discount page is: www.cambridge.org/wm-ecommerce-web/academic/landingPage/EHS20

The last century has witnessed unprecedented improvements in survivorship and life expectancy. In the United Kingdom alone, infant mortality fell from over 150 deaths per thousand births at the start of the last century to 3.9 deaths per thousand births in 2014 (see the Office for National Statistics for further details). Average life expectancy at birth increased from 46.3 to 81.4 years over the same period (see the Human Mortality Database). These changes reflect fundamental improvements in diet, nutrition and environmental conditions.

The changing body: health, nutrition and human development in the western world since 1700 attempted to understand some of the underlying causes of these changes. It drew on a wide range of archival and other sources covering not only mortality but also height, weight and morbidity. One of our central themes was the extent to which long-term improvements in adult health reflected the beneficial effect of improvements in earlier life.

The changing body also outlined a very broad schema of ‘technophysio evolution’ to capture the intergenerational effects of investments in early life. This is represented in a very simple way in Figure 1. The Figure tries to show how improvements in the nutritional status of one generation increase its capacity to invest in the health and nutritional status of the next generation, and so on ‘ad infinitum’ (Floud et al. 2011: 4).

Figure 1. Technophysio evolution: a schema. Source: See Floud et al. 2011: 3-4.

We also looked at some of the underlying reasons for these changes, including the role of diet and ‘nutrition’. As part of this process, we included new estimates of the number of calories which could be derived from the amount of food available for human consumption in the United Kingdom between circa 1700 and 1913. However, our estimates contrasted sharply with others published at the same time (Muldrew 2011) and were challenged by a number of other authors subsequently. Broadberry et al. (2015) thought that our original estimates were too high, whereas both Kelly and Ó Gráda (2013) and Meredith and Oxley (2014) regarded them as too low.

Given the importance of these issues, we revisited our original calculations in 2015. We corrected an error in the original figures, used Overton and Campbell’s (1996) data on extraction rates to recalculate the number of calories, and included new information on the importation of food from Ireland to other parts of what became the UK. Our revised Estimate A suggested that the number of calories rose by just under 115 calories per head per day between 1700 and 1750 and by more than 230 calories between 1750 and 1800, with little change between 1800 and 1850. Our revised Estimate B suggested that there was a much bigger increase during the first half of the eighteenth century, followed by a small decline between 1750 and 1800 and a bigger increase between 1800 and 1850 (see Figure 2). However, both sets of figures were still well below the estimates prepared by Kelly and Ó Gráda, Meredith and Oxley, and Muldrew for the years before 1800.
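To show where an extraction rate enters this kind of food-availability estimate, here is a rough sketch of the arithmetic for a single foodstuff. Every number below is a hypothetical placeholder for illustration, not a figure from Harris, Floud and Hong (2015) or from Overton and Campbell (1996).

```python
# Illustrative sketch of a calorie-availability calculation (hypothetical numbers).
wheat_output_tonnes = 2_000_000   # assumed net wheat available for human consumption
extraction_rate = 0.80            # assumed share of grain weight retained as flour
kcal_per_kg_flour = 3_400         # approximate energy content of flour
population = 9_000_000            # assumed population
days_per_year = 365

kcal_per_head_per_day = (
    wheat_output_tonnes * 1_000 * extraction_rate * kcal_per_kg_flour
    / (population * days_per_year)
)
print(round(kcal_per_head_per_day))  # wheat's contribution to daily calories per head
```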

Figure 2. Source: Harris et al. 2015: 160.

These calculations have important implications for a number of recent debates in British economic and social history (Allen 2005, 2009). Our data do not necessarily resolve the debate over whether Britons were better fed than people in other countries, although they do compare quite favourably with relevant French estimates (see Floud et al. 2011: 55). However, they do suggest that a significant proportion of the eighteenth-century population was likely to have been underfed.
Our data also raise some important questions about the relationship between nutrition and mortality. Our revised Estimate A suggests that food availability rose slowly between 1700 and 1750 and then more rapidly between 1750 and 1800, before levelling off between 1800 and 1850. These figures are still broadly consistent with Wrigley et al.’s (1997) estimates of the main trends in life expectancy and our own figures for average stature. However, it is not enough simply to focus on averages; we also need to take account of possible changes in the distribution of foodstuffs within households and the population more generally (Harris 2015). Moreover, it is probably a mistake to examine the impact of diet and nutrition independently of other factors.

To contact the author: bernard.harris@strath.ac.uk

References

Allen, R. (2005), ‘English and Welsh agriculture, 1300-1850: outputs, inputs and income’. URL: https://www.nuffield.ox.ac.uk/media/2161/allen-eandw.pdf.

Allen, R. (2009), The British industrial revolution in global perspective, Cambridge: Cambridge University Press.

Broadberry, S., Campbell, B., Klein, A., Overton, M. and Van Leeuwen, B. (2015), British economic growth, 1270-1870, Cambridge: Cambridge University Press.

Floud, R., Fogel, R., Harris, B. and Hong, S.C. (2011), The changing body: health, nutrition and human development in the western world since 1700, Cambridge: Cambridge University Press.

Harris, B. (2015), ‘Food supply, health and economic development in England and Wales during the eighteenth and nineteenth centuries’, Scientia Danica, Series H, Humanistica, 4 (7), 139-52.

Harris, B., Floud, R. and Hong, S.C. (2015), ‘How many calories? Food availability in England and Wales in the eighteenth and nineteenth centuries’, Research in Economic History, 31, 111-91.

Kelly, M. and Ó Gráda, C. (2013), ‘Numerare est errare: agricultural output and food supply in England before and during the industrial revolution’, Journal of Economic History, 73 (4), 1132-63.

Meredith, D. and Oxley, D. (2014), ‘Food and fodder: feeding England, 1700-1900’, Past and Present, 222, 163-214.

Muldrew, C. (2011), Food, energy and the creation of industriousness: work and material culture in agrarian England, 1550-1780, Cambridge: Cambridge University Press.

Overton, M. and Campbell, B. (1996), ‘Production et productivité dans l’agriculture anglaise, 1086-1871’, Histoire et Mésure, 1 (3-4), 255-97.

Wrigley, E.A., Davies, R., Oeppen, J. and Schofield, R. (1997), English population history from family reconstitution, Cambridge: Cambridge University Press.

Surprisingly gentle confinement

Tim Leunig (LSE), Jelle van Lottum (Huygens Institute) and Bo Poulsen (Aalborg University) have been investigating the treatment of prisoners of war in the Napoleonic Wars.

 

Napoleonic Prisoner of War. Available at <https://blog.findmypast.com.au/explore-our-fascinating-new-napoleonic-prisoner-of-war-records-1406376311.html>

For most of history, life as a prisoner of war was nasty, brutish and short. There were no regulations on the treatment of prisoners until the 1899 Hague Convention and the later Geneva Conventions. Many prisoners were killed immediately; others were enslaved to work in mines and other undesirable places.

The poor treatment of prisoners of war was partly intentional – they were the hated enemy, after all. And partly it was economic. It costs money to feed and shelter prisoners. Countries in the past – especially in times of war and conflict – were much poorer than today.

Nineteenth-century prisoner death rates were horrific. Between one half and six sevenths of the 17,000 of Napoleon’s troops who surrendered to the Spanish in 1808 after the Battle of Bailén died as prisoners of war. The American Civil War saw death rates rise to 27%, even though the average prisoner was captive for less than a year.

The Napoleonic Wars saw the British capture 7,000 Danish and Norwegian sailors, military and merchant. Britain did not desire war with Denmark (which ruled Norway at the time), but did so to prevent Napoleon seizing the Danish fleet. Prisoners were incarcerated on old, unseaworthy “prison hulks”, moored in the Thames Estuary, near Rochester. Conditions were crowded: each man was given just 2 feet (60 cm) in width to hang his hammock.

Were these prison hulks floating tombs, as some contemporaries claimed? Our research shows otherwise. The Admiralty kept exemplary records, now held in the National Archive in Kew. These show the date of arrival in prison, and the date of release, exchange, escape – or death. They also tell us the age of the prisoner, where they came from, the type of ship they served on, and whether they were an officer, craftsman, or regular sailor. We can use these records to look at how many died, and why.

The prisoners ranged in age from 8 to 80, with half aged 22 to 35. The majority sailed on merchant vessels, with a sixth on military vessels and a quarter on licensed pirate boats, permitted to harass British shipping. The amount of time in prison varied dramatically, from 3 days to over 7 years, with an average of 31 months. About two thirds were released before the end of the war.

Taken as a whole, 5% of prisoners died. This is a remarkably low number, given how long they were held, and given experience elsewhere in the nineteenth century. Being held prisoner for longer increased your chance of dying, but not by much: those who spent three years on a prison hulk had only a 1% greater chance of dying than those who served just one year.

Death was (almost) random. Being captured at the start of the war was neither better nor worse than being captured at the end. The number of prisoners held at any one time did not increase the death rate. The old were no more likely to die than the young – anyone fit enough to go to sea was fit enough to withstand the rigours of prison life. Despite extra space and better rations, officers were no less likely to die, implying that conditions were reasonable for common sailors.

There is only one exception: sailors from licensed pirate boats were twice as likely to die as merchant or official navy sailors. We cannot know the reason. Perhaps they were treated less well by their guards, or by other prisoners. Perhaps they were risk takers, who gambled away their rations. Even for this group, however, the death rates were very low compared with those captured in other places, and in other wars.

The British had rules on the treatment of prisoners of war, covering food and hygiene. Each prisoner was entitled to 2.5 lbs (~1 kg) of beef, 1 lb of fish, 10.5 lbs of bread, 2 lbs of potatoes, 2.5 lbs of cabbage, and 14 pints (8 litres) of (very weak) beer a week. This is not far short of Danish naval rations, and prisoners are less active than sailors. We cannot be sure that they received their rations in full every week, but the death rates suggest that they were not hungry in any systematic way. The absence of epidemics suggests that hygiene was also good. Remarkably, and despite a national debt that peaked at a still unprecedented 250% of GDP, the British appear to have obeyed their own rules on how to treat prisoners.

Far from being floating tombs, therefore, this was a surprisingly gentle confinement for the Danish and Norwegian sailors captured by the British in the Napoleonic Wars.

Britain’s post-Brexit trade: learning from the Edwardian origins of imperial preference

by Brian Varian (Swansea University)

Imperial Federation, map of the world showing the extent of the British Empire in 1886. Wikimedia Commons

In December 2017, Liam Fox, the Secretary of State for International Trade, stated that ‘as the United Kingdom negotiates its exit from the European Union, we have the opportunity to reinvigorate our Commonwealth partnerships, and usher in a new era where expertise, talent, goods, and capital can move unhindered between our nations in a way that they have not for a generation or more’.

As policy-makers and the public contemplate a return to the halcyon days of the British Empire, there is much to be learned from those past policies that attempted to cultivate trade along imperial lines. Let us consider the effect of the earliest policies of imperial preference: policies enacted during the Edwardian era.

In the late nineteenth century, Britain was the bastion of free trade, imposing tariffs on only a very narrow range of commodities. Consequently, Britain’s free trade policy afforded barely any scope for applying lower or ‘preferential’ duties to imports from the Empire.

The self-governing colonies of the Empire possessed autonomy in tariff-setting and, with the notable exception of New South Wales, did not emulate the mother country’s free trade policy. In the 1890s and 1900s, when the emergent industrial nations of Germany and the United States reduced Britain’s market share in these self-governing colonies, there was indeed scope for applying preferential duties to imports from Britain, in the hope of diverting trade back toward the Empire.

Trade policies of imperial preference were implemented in succession by Canada (1897), the South African Customs Union (1903), New Zealand (1903) and Australia (1907). By the close of the first era of globalisation in 1914, Britain enjoyed some margin of preference in all of the Dominions. Yet my research, a case study of New Zealand, casts doubt on the effectiveness of these policies at raising Britain’s share in the imports of the Dominions.

Unlike the policies of the other Dominions, New Zealand’s policy applied preferential duties to only selected commodity imports (44 out of 543). This cross-commodity variation in the application of preference is useful for estimating the effect of preference. I find that New Zealand’s Preferential and Reciprocal Trade Act of 1903 had no effect on the share of the Empire, or of Britain specifically, in New Zealand’s imports.

Why was the policy ineffective at raising Britain’s share of New Zealand’s imports? There are several likely reasons: that Britain’s share was already quite large; that some imported commodities were highly differentiated and certain varieties were only produced in other industrial countries; and, most importantly, that the margin of preference – the extent to which duties were lower for imports from Britain – was too small to effect any trade diversion.

As Britain considers future trade agreements, perhaps with Commonwealth countries, it should be remembered that a trade agreement does not necessarily entail a great, or even any, increase in trade. The original policies of imperial preference were rather symbolic measures and, at least in the case of New Zealand, economically inconsequential.

Brexit might well present an ‘opportunity to reinvigorate our Commonwealth partnerships’, but would that be a reinvigoration in substance or in appearance?

Lessons for the euro from Italian and German monetary unification in the nineteenth century

by Roger Vicquéry (London School of Economics)

Special euro-coin issued in 2012 to celebrate the 150th anniversary of the monetary unification of Italy. From Numismatica Pacchiega, available at <https://www.numismaticapacchiega.it/5-euro-annivesario-unificazione/>

Is the euro area sustainable in its current membership form? My research provides new lessons from past examples of monetary integration, looking at the monetary unification of Italy and Germany in the second half of the nineteenth century.

 

Currency areas’ optimal membership has recently been at the forefront of the policy debate, as the original choice of letting peripheral countries join the euro has been widely blamed for the common currency’s existential crisis. Academic work on ‘optimum currency areas’ (OCA) traditionally warned against the risk of adopting a ‘one size fits all’ monetary policy for regions with differing business cycles.

Krugman (1993) even argued that monetary unification in itself might increase its own costs over time, as regions are encouraged to specialise and thus become more different from one another. But those concerns were dismissed by Frankel and Rose’s (1998) influential ‘OCA endogeneity’ theory: once regions with ex-ante diverging paths join a common currency, their business cycles will progressively synchronise ex-post.

My findings question the consensus view in favour of ‘OCA endogeneity’ and raise the issue of the adverse effects of monetary integration on regional inequality. I argue that the Italian monetary unification played a role in the emergence of the regional divide between Italy’s Northern and Southern regions by the turn of the twentieth century.

I find that pre-unification Italian regions experienced largely asymmetric shocks, pointing to high economic costs stemming from the 1862 Italian monetary unification. While money markets in Northern Italy were synchronised with the core of the European monetary system, Southern Italian regions tended to move together with the European periphery.

The Italian unification is an exception in this respect, as I show that other major monetary arrangements in this period, particularly the German monetary union but also the Latin Monetary Convention and the Gold Standard, occurred among regions experiencing high shock synchronisation.

Contrary to what ‘OCA endogeneity’ would imply, shock asymmetry among Italian regions actually increased following monetary unification. I estimate that pairs of Italian provinces that came to be integrated following unification became, over four decades, up to 15% more dissimilar to one another in their economic structure compared to pairs of provinces that already belonged to the same monetary union. This means that, in line with Krugman’s pessimistic take on currency areas, economic integration in itself increased the likelihood of asymmetric shocks.
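One common way to quantify such structural dissimilarity between two regions, a standard index in this literature though not necessarily the paper’s exact measure, is the sum of absolute differences in sectoral shares:

$$\mathrm{DIS}_{ij,t} = \sum_{k} \bigl| s_{ik,t} - s_{jk,t} \bigr|,$$

where $s_{ik,t}$ is the share of sector $k$ in province $i$'s economic activity at time $t$. The index equals zero when two provinces have identical structures and rises as they diverge, so the estimate above corresponds to integrated province pairs drifting further apart on a measure of this kind.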

In this respect, the global grain crisis of the 1880s, which disproportionally affected the agricultural South while Italy pursued a restrictive monetary policy, might have laid the foundations for the Italian ‘Southern Question’. As pointed out by Krugman, asymmetric shocks in a currency area with low transaction costs can lead to permanent losses in regional income, as prices are unable to adjust fast enough to prevent factors of production from permanently leaving the affected region.

The policy implications of this research are twofold.

First, the results caution against the prevalent view that cyclical symmetry within a currency area is bound to improve by itself over time. In particular, the role of specialisation and factor mobility in driving cyclical divergence needs to be reassessed. As the euro area moves towards more integration, additional specialisation of its regions could further magnify – by increasing the likelihood of asymmetric shocks – the challenges posed by the ‘one size fits all’ policy of the European Central Bank on the periphery.

Second, the Italian experience of monetary unification underlines how the sustainability of currency areas is chiefly related to political will rather than economic costs. Despite the fact that the Italian monetary union has been sub-optimal from the start and to a large extent remained so, it has managed to survive unscathed for the last century and a half. While the OCA framework is a good predictor of currency areas’ membership and economic performance, their sustainability is likely to be a matter of political integration.