The Economic History Society is very happy to launch its new website as of November 2020.
This site streamlines everything that the Society has to offer, including events and registration for the Annual Conference, member-only access to the Economic History Review, grant and bursary applications for students and ECRs, and a growing repository of resources in economic and social history.
The Long Run will now be hosted on the new EHS website. Read past and new blogs here.
by Simon Szreter (University of Cambridge) and Graham Mooney (Johns Hopkins University)
In 1998 we published in the Economic History Review an analysis showing that all the available robust demographic evidence testified to a deterioration of mortality conditions in fast-growing industrial towns and cities in the second quarter of the nineteenth century. We also demonstrated that although there was some alleviation in the 1850s from the terrible death rates experienced in the 1830s and 1840s, sustained and continuous improvement in the life expectancies of the larger British urban populations did not begin to occur until the 1870s. In other publications, we have each shown how it is most likely that an increasing range and density of politically-initiated public health interventions in the urban environments, starting in earnest in the late 1860s and 1870s and gaining depth and sophistication through to the 1900s, was primarily responsible for the observed demographic and epidemiological patterns.
In a 2020 article in the Economic History Review, Romola Davenport has argued that a single disease, scarlet fever, should be accorded primary significance as the cause of the major urban mortality trends of the period, not only in Britain but across Europe, Russia, and North America.
In this response we critically examine the evidence adduced by Davenport for this hypothesis and find it entirely unconvincing. While scarlet fever was undoubtedly an important killer of young children, the chronology of its incidence in Britain lags the major turning points in urban mortality trends by a clear decade or more. Scarlet fever made no significant recorded impact until the 1840s and did not exert its most deadly effects until the 1850s. Its severe depredations then continued unabated through the 1860s and 1870s, before declining sharply in the period 1880-85.
We therefore maintain that our original findings and interpretation of the main causes of Britain’s urban mortality patterns during the course of the nineteenth century remain entirely valid.
This blog is based on research funded by a bursary from the Economic History Society.
by Sean Bottomley (Northumbria University)
Under the influence of institutional economics, a consensus has emerged on the importance of the balanced state: one that is strong enough to fund and provide a legal framework enabling market exchange and security from internal and external predation, but which is constrained from undermining the security of private property rights. The formative example is England, where it is claimed that a balanced state emerged after the Glorious Revolution, leading in turn to the Industrial Revolution (North and Weingast 1989, Acemoglu and Robinson 2012).
This account, though, is controversial. It is widely accepted that property rights (certainly in land) have been secure since at least 1540 (Clark 1996). It is in this context that my project examines royal wardship in Britain from 1485 to 1660, a topic that has been largely neglected by historians since the 1950s (Bell 1953, Hurstfield 1958) and has never been the subject of an economic history. Beginning with Henry VII, the English Crown strained to re-establish its archaic prerogative rights of ‘wardship’. These included the right to take temporary proprietorship of freehold lands held of the Crown by certain feudal-military tenures when they descended to an underage heritor (the ward) upon the death of their ancestor. They also included the right to take physical custody of the ward until they reached the age of majority and, if the ward was unwed, to decide whom they would marry.
Three main points have emerged from the project so far. Firstly, the Crown’s re-imposition of wardship served to undermine property rights. Most commonly, the Crown sold wards and their lands on to third parties, who acted as guardians. Guardians seldom had any incentive to take care of the estate: woods were chopped down, lands were over-cropped, and buildings were torn down for materials. It is partly in this context that we can understand Blackstone’s assessment of the Tenures Abolition Act (1660), which abolished wardship and the feudal-military tenures that underpinned it, as a ‘great[er] acquisition to the civil property of this kingdom than even Magna Carta’. Secondly, the incidence of wardship was such that it had tangible economic consequences. For instance, land held by feudal-military tenures sold at a 10 per cent discount relative to land held by tenures that did not entail wardship (socage) – significant in what was still a predominantly agrarian economy, where land was the pre-eminent asset and store of value. Thirdly, wardship is indicative of wider systemic failings of the early modern English state. It might have been an immensely productive source of revenue, but owing to maladministration and the malfeasance of its officers, only a very small proportion of potential revenues actually accrued to the Crown.
Tentatively, wardship and its eventual abolition support the argument that constitutional changes during the seventeenth century did bring a demonstrable improvement in state capacity and the security of property rights. However, critical components of the project remain unfinished. In particular, contemporaries often claimed that the Crown was purposely distorting the content and institutions of the land law in order to increase its income from wardships. Investigation of contemporary legal sources will allow me to determine the accuracy of this claim. These sources may also be useful for examining related issues, especially whether the complexities of tenure impeded land conveyancing (significant given the demonstrable importance of transaction costs for productivity and land usage) and large-scale land improvements (particularly land drainage).
Another unfinished component of my project is archival research on wardship in Ireland and Scotland. This is not conceived as merely an adjunct to the project for England: the inability of the Crown to raise funds from wardship was systemic in each of the three Stuart kingdoms, and a comparative approach should yield more meaningful explanations of why this state of affairs existed. Work at the National Archives of Scotland will also serve a secondary purpose. Unlike in England or Ireland, the tenurial framework underpinning wardship in Scotland was abolished at a significantly later date (1747), and there was a continuously updated register of land conveyances, the Register of Sasines. The intention is to explore whether the Register can be used to measure any changes in land values and/or usage coincident with the abolition of feudal-military tenures in Scotland.
Country risk includes ‘any macroeconomic, microeconomic, financial, social, political, institutional, judiciary, climatic, technological, or sanitary risk that affects (or could affect) an investor in a foreign country. Damages may materialize in several ways: financial losses; threat to the safety of the investing company’s employees, clients, or consumers; reputational damage; or loss of a market or supply source’ (Gaillard 2020).
Although it was not until the mid-1970s that the concept of country risk began to permeate the economic literature and media, the obstacles listed above have challenged international investors for many years.
One of the most important threats was certainly expropriation (or nationalization) risk, which materialized dramatically with the advent of WWI. In the first weeks of war, Austria-Hungary, France, Germany, Great Britain, and Russia enacted legislation that prohibited trade with the enemy and sanctioned the confiscation and even liquidation of the businesses owned by enemy aliens located on their territory (Caglioti 2014). In doing so, combatant nations violated the principle of ‘immunity of private enemy property’ established by the Hague Conventions of 1899 and 1907.
The most striking nationalization episode followed the Soviet Revolution of 1917, when the communists seized all property belonging to foreigners. Other significant expropriations occurred in Bolivia and Mexico in 1937 and 1938, respectively; they affected American and British oil companies (Maurer 2013).
World War II led to the widespread confiscation of enemy assets, which further undermined the sanctity of foreign private property. During the Cold War, expropriation risk remained a major concern among international investors (Figure 1). Nationalizations were driven by three factors: interventionist or even socialist policies, the extensive interpretation of every State’s right to exploit its natural resources and, finally, idiosyncratic political motives (e.g., Iran in 1951 and 1979–1980; Indonesia in the early 1960s).
How can we explain the spectacular fall in the number of expropriation acts observed in the 1980s and 1990s? Much of the answer lies in the debt crisis of the 1980s, which obliged most developing and emerging countries to adopt market-friendly policies (for example, legal security for property rights, privatization of state-owned enterprises, and the liberalization of trade and finance). These policies were soon encapsulated in the term ‘Washington Consensus’ (Williamson 1990).
Nationalization risk was crucial in the history of country risk. The expropriation acts announced by Fidel Castro in 1959–1960 served as a catalyst and led to the development of political risk and country risk assessment tools.
In my book, I identify several generations of indicators. The first generation includes the risk indicators developed in the 1960s by consulting firms, such as Business International and the Business Environment Risk Index (BERI). The second generation is composed of the external country risk assessors that emerged shortly before the globalization years – namely, Euromoney and International Country Risk Guide. The third generation is represented by export credit agencies. The fourth generation comprises what I call ‘neoliberal indicators’. Launched in the mid-1990s, the indices of the Heritage Foundation and the Fraser Institute put considerable emphasis on economic freedom. The fifth generation, which gained influence in the 2000s, scrutinizes competitiveness; the Global Competitiveness Reports published by the World Economic Forum are emblematic of this last generation.
I analyse how well these risk indicators anticipated eight types of shocks: major episodes of international political violence, major episodes of domestic political violence, expropriation acts, high-inflation peaks, deep economic depressions, significant restrictions on capital flows, sovereign debt crises, and exceptional natural disasters. The accuracy of these ratings reveals some performance gaps. Euromoney and the World Economic Forum’s Global Competitiveness Index (GCI) outperform their counterparts. Euromoney’s ‘risk-free’ rating category and the GCI’s top-ranked group (that is, the top 25 per cent) are especially reliable. The Fraser Institute’s Economic Freedom of the World index is less accurate than other indicators owing to the high proportion of ‘crisis countries’ in its top ratings and rankings.
The full article from this post was published in The Economic History Review and is now available on Early View at this link
The social mobility rate represents the degree to which the socioeconomic status of descendants varies relative to that of their progenitors. If the rate is very low, the social pyramid remains unchanged over many generations. Conversely, if the rate is very high, family, cultural, ethnic, and historical backgrounds are of little use in explaining the current social status of an individual. In essence, history determines present outcomes when rates of social mobility are low. Interest in social mobility research has grown since the Great Recession because of its relationship with socioeconomic inequality and political upheaval.
This renewed interest in the study of social mobility has generated new approaches to the subject. Recent social mobility studies which use surnames show that underlying social mobility rates in all cases studied are both very low and very similar across countries and time periods. This research uses an enhanced surname methodology and previously unused historical data to study social mobility in a Spanish-speaking Central American economy: Costa Rica. The country is particularly interesting as it has exhibited a relatively egalitarian distribution of income since colonial times, setting it apart from Chile, the other Latin American economy that has been the focus of a similar surname study of social mobility. To study historical social mobility in Costa Rica over the past century and a half, one cannot use traditional father-son linkages, since constructing such a dataset would be extremely difficult, if not impossible. Traditional methods require panel datasets, such as the United States National Longitudinal Survey of Youth (NLSY), or rich population registries like those found in Sweden and Iceland, which limits the historical and geographical contexts in which social mobility can be studied. Surnames facilitate research by permitting the clustering of people to identify groups of sons who collectively originated from a group of fathers, without needing to follow the branches of each specific family tree.
One of the methodologies used in this research measures the overrepresentation of surname groups within certain elite professions in the 2006 electoral census. The central idea is to see how frequent a surname is within the census and then use that frequency to predict how many of its holders we should find in a sample of elite professionals. If a surname group represents 1 per cent of the population but 5 per cent of the individuals in high-skilled professions, then it is overrepresented, and of higher status. To study how long the rich stay rich in Costa Rica, and to avoid selection bias, the author compiled a dataset of groups that were historically advantaged before the start of the elite-profession dataset. The groups are: top coffee growers from 1911, coffee exporters in 1934, teachers and professors between 1923 and 1933, Jamaican banana growers from 1908, and ethnically-mixed plantation owners. Figure 1 shows that these elite groups were still overrepresented at the end of the twentieth century and that they will require an average of six to seven generations to regress to the mean. These results are comparable to those produced by Clark for a completely different set of socioeconomic and historical backgrounds. Of particular interest is the comparison with Chile, since the two countries had different colonial experiences and varying degrees of inequality throughout their histories.
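The overrepresentation logic, and the implied speed of regression to the mean, can be sketched in a few lines. This is only an illustration: the function names, the example figures, and the 0.7 persistence parameter are our assumptions, not values estimated in the study.

```python
import math

def relative_representation(elite_count, elite_total, pop_count, pop_total):
    """Ratio of a surname group's share of an elite sample to its share
    of the general population; 1.0 means proportional representation."""
    return (elite_count / elite_total) / (pop_count / pop_total)

def generations_to_mean(initial_advantage, persistence, threshold):
    """Generations until a group's status advantage (in standard deviations
    above the mean) decays below `threshold`, assuming a constant
    per-generation persistence b: advantage_n = initial_advantage * b ** n."""
    return math.ceil(math.log(threshold / initial_advantage)
                     / math.log(persistence))

# Hypothetical surname group: 1% of the census but 5% of elite professionals.
rr = relative_representation(50, 1_000, 10_000, 1_000_000)  # about 5x overrepresented

# With an assumed persistence of 0.7 per generation, a one-standard-deviation
# advantage takes several generations to fall below 0.1 s.d.
n = generations_to_mean(1.0, 0.7, 0.1)
```

A high persistence parameter is what produces the "six to seven generations" horizons reported in this literature: even modest elite advantages decay only geometrically.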
This research shows that regression to the socioeconomic mean in Costa Rica occurred at a slower pace than that predicted by the previous literature. This implies that the equality-driven policy maker should be more concerned with economic growth, which should increase the average income of every stratum, at least under a Kaldor compensation criterion, and with compressing the distance between social strata, than with social mobility per se. The study also shows that historical groups take fewer generations to regress to the mean than in the Chilean case studied by Clark, which is attributed to the fact that the historical groups were not that far apart to begin with.
 G. Clark, The Son Also Rises: Surnames and the History of Social Mobility. Princeton: Princeton University Press, 2015; G. Clark and N. Cummins, ‘Surnames and Social Mobility in England, 1170-2012’, Human Nature, 25 (2014), pp. 517-537.
 This posits that an activity moves the economy closer to Pareto optimality if the maximum amount the gainers are prepared to pay the losers to agree to the change is greater than the minimum amount the losers are prepared to accept.
by Graeme Acheson (University of Strathclyde Business School), Aine Gallagher, Gareth Campbell, and John D. Turner (Queen’s University Centre for Economic History)
The full article from this blog post has been published in The Economic History Review and is currently available on Early View here
Women have a long tradition of investing in financial instruments, and scholars have recently documented the rise of female shareholders in nineteenth-century Britain, the United States, Australia, and Europe. However, we know very little about how this progressed into the twentieth century, and whether women shareholders over a century ago behaved differently from their male counterparts. To address this, we turn to the shareholder constituencies of railways, which were the largest public companies a century ago.
Railway companies in the UK popularised equity investment among the middle classes; they had been a major investment asset since the first railway boom of the mid-1830s. At the start of the 1900s, British railways made up about half of the market value of all domestic equity listed in the UK, and they constituted 49 of the 100 largest companies on the British stock market in 1911. The railways, therefore, make an interesting case through which to examine women investors. Detailed railway shareholder records, comparable to those for other sectors, have generally not been preserved. However, we have found Railway Shareholder Address Books for six of the largest railway companies between 1915 and 1922. We have supplemented these with several address books for these companies back to 1870, and have analysed the Shareholder Register for the Great Western Railway (GWR) from 1843, to place the latter period in context.
An analysis of these shareholder address books reveals the growing importance of women shareholders from 1843, when they made up about 11 per cent of the GWR shareholder base, to 1920, when they constituted about 40 per cent of primary shareholders. By the early twentieth century, women represented 30 to 40 per cent of shareholders in each railway company in our sample, which is in line with estimates of the number of women investing in other companies at this time (Rutterford, Green, Maltby and Owens, 2011). This implies that women were playing an important role in financial markets in the early twentieth century.
Although women were becoming increasingly prevalent in shareholder constituencies, we know little about how they were responding to changing social perceptions, and the increasing availability of financial information, in order to make informed investment decisions, or if they were influenced by male relatives. To examine this, we focus on joint shareholdings, where people would invest together, rather than buying shares on their own. This practice was extremely common, and from our data we are able to analyse the differences between solo shareholders, lead joint shareholders (i.e., individuals who owned shares with others but held the voting rights), and secondary joint shareholders (i.e., individuals who owned shares with others but did not hold the voting rights).
We find that women were much more likely to be solo shareholders than men, with 70 to 80 per cent of women investing on their own, compared to just 30 to 40 per cent of men. When women participated in joint shareholdings, there was no discernible difference as to whether they were the lead shareholder or the secondary shareholder, whereas the majority of men took up a secondary position. When women participated as a secondary shareholder, the lead was usually not a male relative. These findings are strong evidence that women shareholders were acting independently by choosing to take on the sole risks and rewards of share ownership when making their investments.
We then analyse how the interaction between gender and joint shareholdings affected investment decisions. We begin by examining differences in terms of local versus arms-length investment, using geospatial analysis to calculate the distance between each shareholder’s address and the nearest station of the railway they had invested in. We find that women were more likely than men, and solo investors more likely than joint shareholders, to invest locally. This suggests that men may have used joint investments as a way of reducing the risks of investing at a distance. In contrast, women preferred to maintain their independence even if this meant focusing more on local investments.
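The distance calculation behind this kind of geospatial analysis can be sketched with a haversine great-circle formula. This is only an illustration of the computation involved: the coordinates below are hypothetical, and the authors' actual GIS procedure may differ.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def distance_to_nearest_station(shareholder, stations):
    """Minimum distance from a shareholder's address to any listed station."""
    return min(haversine_km(*shareholder, *s) for s in stations)

# Hypothetical example: a shareholder in London, with stations near
# Edinburgh and Birmingham; the nearer (Birmingham) distance is returned.
d = distance_to_nearest_station(
    (51.5074, -0.1278),
    [(55.9533, -3.1883), (52.4862, -1.8904)],
)
```

Repeating this for every shareholder-company pair yields the local-versus-arms-length measure the paragraph describes.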
We then examine the extent to which women and men invested across different railways. In the modern era, it is common to adopt a value-weighted portfolio which is most heavily concentrated in larger companies. As three of our sample companies were amongst the six largest companies of their era and a further two were in the top twenty-five, we would, a priori, expect to see some overlap of shareholders investing in different railways if they adopted this approach to diversification. From our analysis, we find that male and joint shareholders were more likely than female and solo shareholders to hold multiple railway stocks. This could imply that men were using joint shareholdings as a means of increasing diversification. In contrast, women may have been prioritising independence, even if it meant being less diversified.
We also consider whether there were differences in terms of how long each type of shareholder held onto their shares because modern studies suggest that women are much less likely than men to trade their shares. We find that only a minority of shareholders maintained a long-run buy and hold strategy, with little suggestion that this differed on the basis of gender or joint versus solo shareholders. This implies that our findings are not being driven by a cohort effect, and that the increasing numbers of women shareholders consciously chose to invest independently.
by Thales Zamberlan Pereira (Getúlio Vargas Foundation, São Paulo School of Economics)
Brazil supplied 40 per cent of Liverpool’s cotton imports during the last decade of the eighteenth century (Krichtal 2013). By the first half of the nineteenth century, however, cotton exports stagnated, and Brazil became the only major international cotton producer to decrease its exports to European countries. There is no general agreement on the reason for this decline, which occurred despite increasing international demand during the nineteenth century. Scholars have attributed it to high transport costs, competition from sugar and coffee plantations for slaves, and Dutch disease from the increase in coffee exports, among other factors (Leff 1972; Stein 1979; Canabrava 2011). Disagreement persists in part because previous research largely relies on data after 1850, that is, after the decline of cotton plantations in Brazil.
In a new paper, I argue that cotton profitability was restricted by the fiscal policy implemented by the Portuguese (and, later, Brazilian) government after 1808. To make this argument, I first document new patterns in the timing of the decline of Brazilian cotton. Specifically, using new data on cotton productivity for the period 1800-1860, this research shows that Brazil’s stagnation began in the first decades of the nineteenth century.
A number of factors can therefore be ruled out. The decline took place before the United States managed to increase its productivity in cotton production and became the world export leader (Olmstead and Rhode 2008). Cotton regions in Brazil had neither a labour supply problem nor a Dutch disease phenomenon during the early nineteenth century (Pereira 2018). The new evidence also suggests that external factors, such as declining international prices or maritime transport costs, were not responsible for the stagnation of cotton exports in Brazil. As with any other commodity at the time, falls in international prices had to be offset by increases in productivity. Indeed, Figure 1 shows that Brazilian cotton prices were competitive in Liverpool. Of the staples presented in the figure, the standard cotton from the provinces of Pernambuco and Maranhão was of higher quality than that from New Orleans and Georgia (and hence achieved higher prices), while “Maranhão saw-ginned”, which achieved similar prices, used the same seeds as US plantations.
So, what caused the stagnation of cotton exports in Brazil? I argue that the fiscal policy implemented by the Portuguese government after 1808 restricted cotton profitability. High export taxes, whose funds were transferred to Rio de Janeiro, explain the ‘profitability paradox’ that British consuls in Brazil reported at the time. They remarked that even in periods with high prices and foreign demand, Brazilian planters had limited profitability. Favourable market conditions after the Napoleonic wars allowed production in Brazil to continue growing at least until the early 1830s.
Figure 2 shows that when international prices started to decline after 1835, cotton was no longer a profitable crop in many Brazilian regions. This was especially pronounced where plantations were far from the coast and had to pay higher transport costs in addition to the export tax. To show that the tax burden decreased profitability, I calculate an “optimal tax rate”, which would have maximized government revenues, and an “effective tax rate”, the amount that exporters actually paid. Figure 2 illustrates that, while the statutory tax rate was low, the effective tax rate for cotton producers was significantly greater than the optimal rate after 1835.
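The intuition behind a revenue-maximizing ("optimal") tax rate can be illustrated with a toy Laffer-curve calculation. Everything here is assumed for exposition: the functional form for export volume and the elasticity value are not taken from the paper.

```python
def export_revenue(t, elasticity, q0=1.0):
    """Tax revenue t * Q(t), where export volume shrinks as the tax eats
    into planters' margins: Q(t) = q0 * (1 - t) ** elasticity
    (an illustrative constant-elasticity functional form)."""
    return t * q0 * (1 - t) ** elasticity

def optimal_rate(elasticity):
    """With the form above, t * (1 - t)**e is maximized at t = 1 / (1 + e)."""
    return 1 / (1 + elasticity)

# If export supply is highly responsive (e = 4, an assumption), revenue
# peaks at a 20% rate; pushing the effective rate above that point loses
# the government revenue while still squeezing producers.
t_star = optimal_rate(4)
```

The paper's point maps onto this sketch: after 1835 the effective rate sat above the revenue-maximizing rate, so the tax both depressed profitability and failed to deliver the revenue it could have.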
Facing lower prices, cotton producers in Brazil could have shifted production to varieties of cotton produced in the United States, which had higher productivity and were in increasing demand in British markets. As presented in Figure 1, some regions in Brazil tried to follow this route (with saw-ginned cotton in Maranhão), but this type of production was not profitable with an export tax that reached 20 percent. Brazil, therefore, was stuck in the market for long-staple cotton, for which demand remained relatively stable during the nineteenth century. Regions that could not produce long-staple cotton practically abandoned production.
Not only do the results provide insight into the cotton decline; the paper also contributes to a better understanding of the roots of regional inequality in Brazil and the political economy of taxation. Cotton production before 1850 was concentrated in the northeast region, which continues to lag in economic conditions to this day. As I argue in the paper, the export taxes implemented after 1808 largely targeted commodities from the northeast. Commodities from southeast regions, such as coffee, paid lower tax rates. Parliamentary debates at the time show that cotton producers in the northeast did demand tax reform. Their demands, however, were not met quickly enough to prevent Brazilian cotton plantations from being priced out of the international market.
Canabrava, Alice P. 2011. O Desenvolvimento Da Cultura Do Algodão Na Província de São Paulo, 1861-1875. São Paulo: EDUSP.
Krichtal, Alexey. 2013. “Liverpool and the Raw Cotton Trade: A Study of the Port and Its Merchant Community, 1770-1815.” Victoria University of Wellington.
Leff, Nathaniel H. 1972. “Economic Development and Regional Inequality: Origins of the Brazilian Case.” The Quarterly Journal of Economics 86 (2): 243–62. https://doi.org/10.2307/1880562.
Olmstead, Alan L., and Paul W. Rhode. 2008. “Biological Innovation and Productivity Growth in the Antebellum Cotton Economy.” The Journal of Economic History 68 (4): 1123–1171. https://doi.org/10.1017/S0022050708000831.
Pereira, Thales A. Zamberlan. 2018. “Poor Man’s Crop? Slavery in Cotton Regions in Brazil (1800-1850).” Estudos Econômicos (São Paulo) 48 (4).
Stein, Stanley J. 1979. Origens e evolução da indústria têxtil no Brasil: 1850-1950. Rio de Janeiro: Editora Campus.
by Leonardo Ridolfi (University of Siena), Mara Squicciarini (Bocconi University), and Jacob Weisdorf (Sapienza University of Rome)
Should workers fear technical innovations? Economists have not provided a clear answer to this perennial question. Some believe machines enable ‘one man to do the work of many’: mechanisation will generate cheaper goods, more consumer spending, increased labour demand, and thus more jobs. Others, instead, worry that automation will be labour-cheapening, making workers – especially unskilled ones – redundant, and so result in increased unemployment and growing income inequality.
Our research seeks answers from the historical record. We focus on the first Industrial Revolution, when technical innovations became a key component of the production process.
The common understanding is that mechanisation during the early phases of industrialisation allowed firms to replace skilled with unskilled male workers (new technology was deskilling) and also male workers with less expensive female and child labourers. Much of this understanding is inspired by the Luddite movement – bands of nineteenth-century workers who destroyed early industrial machinery that they believed was threatening their jobs.
To test these hypotheses, we investigate one of the major technological advancements in human history: the rise and spread of steam engines.
Nineteenth-century France provides an exemplary setting to explore the effects. French historical statistics are extraordinarily detailed, and the first two national industry-level censuses – one from the 1840s, when steam power was just beginning to spread, and one from the 1860s, when it was more common – help us to observe the labour market conditions that led to the adoption of steam engines, as well as the effects of adopting the new technology on the demand for male, female and child labour, and on their wages.
Consistent with the argument that steam technology emerged for labour-cheapening purposes, our analysis shows that the adoption of steam technology was significantly higher in districts (arrondissements) where:
industrial labour productivity was low, so that capital-deepening could serve to improve output per worker;
the number of workers was high, so the potential for cutting labour costs by replacing them with machines was large;
the share of male workers was high, so the potential for cutting labour costs by shifting towards women and children was large; and
steam engines had already been installed in other industries, thus lowering the costs of adopting the new technology.
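As a rough sketch of how such adoption patterns might be tested, the regression below recovers the four correlations from synthetic district-level data. All numbers are invented for illustration; the paper's actual econometric specification is certainly richer than this plain OLS.

```python
import numpy as np

# Synthetic "districts": each covariate is standardized, and adoption is
# generated to mirror the four patterns listed above (invented coefficients).
rng = np.random.default_rng(42)
n = 500
productivity = rng.normal(size=n)   # industrial labour productivity
workers      = rng.normal(size=n)   # number of workers
male_share   = rng.normal(size=n)   # share of male workers
prior_steam  = rng.normal(size=n)   # steam engines already installed nearby

adoption = (-0.5 * productivity + 0.8 * workers
            + 0.4 * male_share + 0.6 * prior_steam
            + rng.normal(scale=0.5, size=n))

# OLS of adoption on a constant and the four covariates.
X = np.column_stack([np.ones(n), productivity, workers, male_share, prior_steam])
beta, *_ = np.linalg.lstsq(X, adoption, rcond=None)
# Expected signs: beta[1] negative (low productivity -> more adoption),
# beta[2], beta[3], beta[4] positive.
```

With district data of this kind, the estimated signs reproduce the qualitative findings: adoption is higher where productivity is low and where the workforce, the male share, and prior local steam use are high.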
We also find, however, that steam technology, once adopted, was neither labour-saving nor skill-saving. Steam-powered industries did use higher shares of (cheaper) female and child workers than non-steam-powered industries. At the same time, though, since steam-operating industries employed considerably more workers in total, they also ended up using more male workers – not just more women and children.
We also find that steam-powered industries paid significantly higher wages, both to men and women. In contrast with the traditional narrative of early industrial technologies being deskilling, this result provides novel empirical evidence that steam-use was instead skill-demanding.
Although workers appear to have gained from the introduction of steam technology, in terms of both employment and pay, our findings show that labour’s share of output was lower in steam-run industries. This resonates with Engels-Marx-Piketty-inspired concerns that advancing technology leaves workers with a shrinking share of output.
Our findings thus highlight the multi-sided effects of adopting general-purpose technological progress. On the positive side, the steam engine prompted higher wages and a growing demand for both male and female workers. On the negative side, steam-powered industries relied more heavily on child labour and placed a larger share of output in the hands of capitalists.
by Sibylle Lehmann-Hasemeyer and Fabian Wahl (University of Hohenheim)
The full article on which this blog is based was published in The Economic History Review and is available on early view.
The German banking system is often considered a key factor in German industrialisation. For Alexander Gerschenkron, Germany’s experience can serve as a role model for other moderately backward economies: governments could trigger economic development by supporting the establishment of modern financial institutions such as universal banks, which were typical of the German banking system and which mobilised savings, reduced risks for investors, and improved the allocation of resources. Such activities ease the trading of goods and services and foster technological innovation.
Scholarly discussion on the banking-growth nexus in Germany has focused on universal banks without giving significant attention to other forms of banking. It is surprising that earlier research has ascribed savings banks a limited role in industrialisation: by 1913, they held 24.8 per cent of the total assets of all German financial institutions and ranked first among all bank types for net investment. Moreover, savings banks had the advantage of being public institutions. Because they were not profit-driven, they could focus on long-term projects with high social returns.
Our study revisits the banking-growth nexus by focusing on savings banks in 978 Prussian cities. We find a positive and significant relationship between the establishment of savings banks and both city growth and the number of steam engines per factory over the period 1854-75. Previous research has studied the impact of savings banks either at a highly aggregated level or qualitatively through case studies. Our study is the first to provide quantitative evidence on the local impact of savings banks during the early nineteenth century.
To address potential endogeneity, we exploit a decree issued in 1854 by the Minister for Trade and Commerce. The decree promoted a more even geographical distribution of savings banks by requiring the founding of at least one per county, and it encouraged poorer local authorities to establish savings banks by offering institutional and financial support. Following the decree, a wave of savings banks was established across a much wider geographical area than before: in 1849, savings banks were present in about half of all counties; by 1864, this had risen to nearly 95 per cent.
We also observe a significant pre-trend in city growth before the founding of a savings bank in earlier periods; there is no such trend, however, for banks founded after 1854 (Figure 1). The savings banks founded during this wave were often established in smaller cities that might not have been able to afford them without support. The decree can thus be seen as a public policy to promote the establishment of public financial infrastructure in remote regions.
Although we cannot perfectly solve the endogeneity issue, the regression results suggest that savings banks promoted city growth. This is even more plausible when considering that Germany’s industrialisation was based not only on larger, multinational firms and coal resources, but also on good public infrastructure, a competitive schooling system, and, in particular, on small and medium-sized firms, which were the backbone of German industry. The resulting economic landscape persists today.
Our study contributes to the understanding of why Germany industrialised by analysing a neglected aspect of the relationship between banks and growth. Earlier research focused on the impact of large universal banks and stock markets at the end of the nineteenth century, largely overlooking savings banks and the potential benefits of a decentralised financial system. The evidence we provide shows that there is a considerable gap in the literature on savings banks and how they actually contributed to economic growth.
By Francisco J. Beltrán Tapia (Norwegian University of Science and Technology)
This blog is part of our EHS Annual Conference 2020 Blog Series.
Gender discrimination – in the form of sex-selective abortion, female infanticide and the mortal neglect of young girls – constitutes a pervasive feature of many contemporary developing countries, especially in South and East Asia. Son preference stems from economic and cultural factors that have long influenced the perceived relative value of women in these regions and resulted in millions of ‘missing girls’.
But were there ‘missing girls’ in historical Europe? Although the conventional narrative argues that there is little evidence of this kind of behaviour, my research shows that the issue was much more important than previously thought, especially (but not exclusively) in Southern and Eastern Europe.
It should be noted first that historical sex ratios cannot be compared directly with modern ones. The biological survival advantage of girls was more visible in the high-mortality environments that characterised pre-industrial Europe: boys suffered higher mortality rates both in utero and during infancy and early childhood. Historical infant and child sex ratios were therefore relatively low, even in the presence of gender-discriminatory practices.
This is illustrated in Figure 1 below, which plots the relationship between child sex ratios (the number of boys per 100 girls) and infant mortality rates using information from European countries between 1750 and 2001. In particular, in societies where infant mortality rates were around 250 deaths (per 1,000 live births), a gender-neutral child sex ratio should have been slightly below parity (around 99.5 boys per 100 girls).
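To see why a gender-neutral regime can still produce a child sex ratio below parity, consider a back-of-the-envelope sketch. All parameter values below are illustrative assumptions, not estimates from the article: roughly 105 boys are typically born per 100 girls, but if boys die at a sufficiently higher rate in a high-mortality environment, the surviving cohort tips below 100 boys per 100 girls.

```python
# Hypothetical illustration of the gender-neutral benchmark discussed above.
# The sex ratio at birth (~105) and the mortality rates are assumptions
# chosen for illustration, not figures taken from the research.

def child_sex_ratio(srb, male_mort, female_mort):
    """Boys per 100 girls among survivors, given the sex ratio at birth
    (boys per 100 girls) and sex-specific cumulative child mortality."""
    surviving_boys = srb * (1 - male_mort)
    surviving_girls = 100 * (1 - female_mort)
    return 100 * surviving_boys / surviving_girls

# In a high-mortality regime, suppose boys' cumulative childhood mortality
# is 32 per cent against 28 per cent for girls (purely illustrative):
print(round(child_sex_ratio(105, male_mort=0.32, female_mort=0.28), 1))
# → 99.2 (below parity, despite no discrimination against girls)
```

Under these assumed values the gender-neutral ratio lands close to the roughly 99.5 benchmark mentioned in the text; ratios well above such a benchmark are what signal excess female mortality.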
Figure 1: Infant mortality rates and child sex ratios in Europe, 1750-2001
Compared with this benchmark, infant and child sex ratios were abnormally high in some European regions (see Map 1 below), suggesting that some sort of gender discrimination was unduly increasing female mortality rates at those ages.
Interestingly, the observed differences in sex ratios are also visible throughout childhood. In fact, the evolution of sex ratios by age shows stark disparities across countries. Figure 2 shows how the number of boys per 100 girls changed as children grew older for a sample of countries, both in levels and in the observed trends.
In Bulgaria, Greece and France, for example, sex ratios increased with age, providing evidence that gender discrimination continued to increase female mortality rates as girls grew older. Importantly, the unbalanced sex ratios observed in some regions are not due to random noise, female under-registration or sex-specific migratory flows.
Likewise, although geography, climate and population density contributed to shaping infant and child sex ratios due to their impact on the disease environment, these factors cannot explain away the patterns of gender discrimination reported here.
Map 1: Child sex ratios in Europe, c.1880
Figure 2: Sex ratios by age in a sample of countries, c.1880
This evidence indicates that discriminatory practices with lethal consequences for girls constituted a veiled feature of our European past. But the actual nature of discrimination remains unclear and surely varies by region.
Excess female mortality was thus not necessarily the result of overt ill-treatment of young girls; it could simply have resulted from an unequal allocation of resources within the household, a disadvantage that probably accumulated as infants grew older.
In contexts where infant and child mortality rates were high, even slight discrimination in how young girls were fed or treated when ill, or in the amount of work they were entrusted with, was likely to result in more girls dying from the combined effects of undernutrition and illness.
Although female infanticide or other extreme versions of mistreatment of young girls may not have been a systematic feature of historical Europe, this line of research would point to more passive, but pervasive, forms of gender discrimination that also resulted in a significant fraction of missing girls.