North & South in the 1660s and 1670s: new understanding of the long-run origins of wealth inequality in England

By Andrew Wareham (University of Roehampton)

This blog is part of a series of New Researcher blogs.

Maps of England circa 1670, Darbie 10 of 40. Available at Wikimedia Commons.

New research shows that, before the industrial revolution, a far higher share of houses in south-east England had multiple fireplaces than in the Midlands and northern England. When Mrs Gaskell wrote North and South, she reflected on a theme which was already nearly two centuries old and which continues to divide England.

Since the 1960s, historians have wanted to use the Restoration hearth tax to provide a national survey of the distribution of population and wealth. But until now, for technical reasons, it has not been possible to move beyond city and county boundaries to make comparisons.

Hearth Tax Digital, arising from a partnership between the Centre for Hearth Tax Research (Roehampton University, UK) and the Centre for Information Modelling (Graz University, Austria), overcomes these technical barriers. This digital resource provides free access to the tax returns, with full transcription of the records and links to archival shelf marks and locations by county and parish. Data on around 188,000 households in London and 15 cities/counties can be searched, with the capacity to download search queries into a databasket, and work on GIS mapping is in development.

In the 1660s and 1670s, after London, the West Riding of Yorkshire and Norfolk stand out as densely populated regions. The early stages of industrialization meant that Leeds, Sheffield, Doncaster and Halifax were overtaking the former leading towns of Hull, Malton and Beverley. But the empty landscapes of north and east Norfolk, enjoyed by holidaymakers today, were also densely populated then.

The hearth tax was a nation-wide levy charged on every domestic fireplace in each property, and it was collected twice a year, at Lady Day (March) and Michaelmas (September). In 1689, after 27 years, it was abolished in perpetuity in England and Wales, but it continued to be levied in Ireland until the early nineteenth century, and it was levied as a one-off tax in Scotland in 1691. Any property with three hearths or more was liable to pay the tax, while many properties with one or two hearths, such as those occupied by the ordinary poor, were exempt. (The destitute and those in receipt of poor relief were not included in the tax registers.) A family living in a home with one hearth had to use it for all their cooking, heating and leisure, but properties with three or more hearths typically had at least one hearth in the kitchen, one in the parlour and one in an upstairs chamber.

In a substantial majority of parishes in northern England (County Durham, Westmorland, the East and North Ridings of Yorkshire), fewer than 20 per cent of households had three hearths or more, and only in the West Riding was there a significant number of parishes where 30 per cent or more of households did. But in southern England, across Middlesex, Surrey, southern Essex, western Kent and a patchwork of parishes across Norfolk, it was common for at least a third of properties to have three hearths or more.
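Because Hearth Tax Digital allows search results to be downloaded, parish-level comparisons like these are straightforward to reproduce. Below is a minimal sketch in Python, assuming a downloaded household table; the column names and toy rows are illustrative, not the resource's actual schema:

```python
import pandas as pd

# Toy stand-in for a Hearth Tax Digital download; column names are hypothetical.
households = pd.DataFrame({
    "parish":  ["Wakefield", "Wakefield", "Wakefield", "Appleby", "Appleby"],
    "hearths": [4, 1, 3, 1, 2],
})

# Share of households in each parish with three hearths or more.
share_3plus = (
    households.assign(three_plus=households["hearths"] >= 3)
              .groupby("parish")["three_plus"]
              .mean()
)
print(share_3plus)  # Appleby 0.00, Wakefield 0.67
```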

There are many local contrasts to explore further. South-east Norfolk and north-east Essex were notably more prosperous than north-west Essex, independent of the influence of London, and the patchwork pattern of wealth distribution in Norfolk around its market towns and prosperous villages is repeated in the Midlands. Nonetheless, the general pattern is clear enough: the distribution of population in the late seventeenth century was quite different from patterns found today, but Samuel Pepys and Daniel Defoe would have recognized a world in which south-east England abounded with the signs of prosperity and comfort in contrast to the north.

How Indian cottons steered British industrialisation

By Alka Raman (LSE)

This blog is part of a series of New Researcher blogs.

“Methods of Conveying Cotton in India to the Ports of Shipment,” from the Illustrated London News, 1861. Available at Wikimedia Commons.

Technological advancements within the British cotton industry have been widely acknowledged as the beginning of industrialisation in eighteenth- and nineteenth-century Britain. My research reveals that these advances were driven by a desire to match the quality of handmade cotton textiles from India.

I highlight how the introduction of Indian printed cottons into British markets created a frenzy of demand for these exotic goods. This led to immediate imitations by British textile manufacturers, keen to gain footholds in the domestic and world markets where Indian cottons were much desired.

The process of imitation soon revealed that British spinners could not spin cotton yarn fine enough for the cloth needed for fine printing. And British printers could not print cloth in the multitude of colourfast colours that Indian artisans had mastered over centuries.

These two key limitations in British textile manufacturing spurred demand-induced technological innovations to match the quality of Indian handmade printed cottons.

To test this, I chart the quality of English cotton textiles from 1740 to 1820 and compare them with Indian cottons of the same period. Threads per inch is used as the measure of quality, and digital microscopy is deployed to establish yarn composition: whether the textiles are all-cotton or mixed linen-cotton.

My findings show that the earliest British ‘cotton’ textiles were mixed linen-cottons, not all-cottons. Technological evolution in the British cotton industry was a pursuit first of coarse but all-cotton cloth, followed by fine all-cotton cloth such as muslin.

The evidence shows that British cotton cloth quality improved by 60% between 1747 and 1782 during the decades of the famous inventions of James Hargreaves’ spinning jenny, Richard Arkwright’s waterframe and Samuel Crompton’s mule. It further improved by 24% between 1782 and 1816. Overall, cloth quality improved by a staggering 99% between 1747 and 1816.
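These sub-period gains compound multiplicatively rather than adding, which is why the overall figure approaches 99%. A quick check, using only the rounded percentages quoted above:

```python
gain_1747_1782 = 0.60   # 60% improvement, 1747-1782
gain_1782_1816 = 0.24   # 24% improvement, 1782-1816

# Improvements compound across the two sub-periods.
overall = (1 + gain_1747_1782) * (1 + gain_1782_1816) - 1
print(f"{overall:.1%}")  # 98.4% with rounded inputs, matching the ~99% reported
```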

My research challenges our current understanding of industrialisation as a British and West European phenomenon, commonly explained using rationales such as high wages, availability of local energy sources or access to New World resources. Instead, it reveals that learning from material goods and knowledge brought into Britain and Europe from the East directly and substantially affected the foundations of the modern world as we know it.

The results also pose a more fundamental question: how does technological change take place? Based on my findings, learning from competitor products – especially imitation of novel goods using indigenous processes – may be identified as one crucial pathway for the creation of new ideas that shape technological change.

Industrial, regional, and gender divides in British unemployment between the wars

By Meredith M. Paker (Nuffield College, Oxford)

This blog is part of a series of New Researcher blogs.

A view from Victoria Tower, depicting London on both sides of the Thames, 1930. Available at Wikimedia Commons.

‘Sometimes I feel that unemployment is too big a problem for people to deal with … It makes things no better, but worse, to know that your neighbours are as badly off as yourself, because it shows to what an extent the evil of unemployment has grown. And yet no one does anything about it’.

A skilled millwright, Memoirs of the Unemployed, 1934.

At the end of the First World War, an inflationary boom collapsed into a global recession, and the unemployment rate in Britain climbed to over 20 per cent. While the unemployment rate in other countries recovered during the 1920s, in Britain it remained near 10 per cent for the entire decade before the Great Depression. This persistently high unemployment was then intensified by the early 1930s slump, leading to an additional two million British workers becoming unemployed.

What caused this prolonged employment downturn in Britain during the 1920s and early 1930s? Using newly digitized data and econometrics, my project provides new evidence that a structural transformation of the economy away from export-oriented heavy manufacturing industries toward light manufacturing and service industries contributed to the employment downturn.

At a time when few countries collected any reliable national statistics at all, the Ministry of Labour published unemployment statistics for men and women in 100 industries in every month of the interwar period. These statistics derived from Britain’s unemployment benefit program established in 1911—the first such program in the world. While many researchers have used portions of this remarkable data by manually entering it into a computer, I improve on this approach with a process built around an optical-character recognition iPhone app. Digitizing all the printed tables in the Ministry of Labour Gazette from 1923 through 1936 enables econometric analysis of four times as many industries as in previous research and permits separate analyses for male and female workers (Figure 1).

Figure 1: Data digitization. Left-hand side is a sample printed table in the Ministry of Labour Gazette. Right-hand side is the cleaned digitized table in Excel.
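The blog does not publish the app's internals; the sketch below illustrates the general OCR-to-table idea using the open-source pytesseract library. The file name, parsing rule and column names are assumptions for illustration, not the author's code:

```python
import pandas as pd
import pytesseract
from PIL import Image

def digitize_gazette_page(image_path: str) -> pd.DataFrame:
    """OCR one printed Gazette table into rows of (industry, rate)."""
    text = pytesseract.image_to_string(Image.open(image_path))
    rows = []
    for line in text.splitlines():
        # Assume each table row ends with a numeric unemployment rate.
        parts = line.rsplit(maxsplit=1)
        if len(parts) == 2 and parts[1].replace(".", "", 1).isdigit():
            rows.append({"industry": parts[0], "rate": float(parts[1])})
    return pd.DataFrame(rows)

# Hypothetical usage on a scanned page:
# table = digitize_gazette_page("gazette_1930_08.png")
```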

This new data and analysis reveal four key findings about interwar unemployment. First, the data show that unemployment was different for men and women. The unemployment rate for men was generally higher than for women, averaging 16.1 per cent and 10.3 per cent, respectively. Unemployment increased faster for women at the onset of the Great Depression but also recovered more quickly (Figure 2). One reason for these distinct experiences is that men and women generally worked in different industries. Many unemployed men had previously worked in coal mining, building, iron and steel founding, and shipbuilding, while many unemployed women came from the cotton-textile industry, retail, hotel and club services, the woollen and worsted industry, and tailoring.

Figure 2: Male and female monthly unemployment rates. Source: Author’s digitization of Ministry of Labour Gazettes.

Second, regional differences in unemployment rates in the interwar period were not due only to the different industries located in each region. There were large regional differences in unemployment above and beyond the effects of the composition of industries in a region.

Third, structural change played an important role in interwar unemployment. A series of regression models indicate that, ceteris paribus, industries that expanded to meet production needs during World War I had higher unemployment rates in the 1920s. Additionally, industries that exported much of their production also faced more unemployment. An important component of the national unemployment problem was thus the adjustments that some industries had to make due to the global trade disturbances following World War I.
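The exact specification is not reported in the blog; a minimal sketch of a cross-industry regression in this spirit, with invented data and variable names, might look like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy industry-level data (values invented for illustration).
df = pd.DataFrame({
    "unemployment_1920s": [22.0, 18.5, 7.2, 6.1, 15.3, 5.8, 12.4, 9.0],
    "wartime_expansion":  [1, 1, 0, 0, 1, 0, 1, 0],  # expanded for WWI production
    "export_share":       [0.6, 0.5, 0.1, 0.2, 0.4, 0.1, 0.3, 0.2],
})

# Average 1920s unemployment on wartime expansion and export orientation.
model = smf.ols("unemployment_1920s ~ wartime_expansion + export_share",
                data=df).fit()
print(model.params)
```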

Finally, the Great Depression accelerated this structural change. In almost every sector, more adjustment occurred in the early 1930s than in the 1920s. Workers were drawn from declining industries into growing ones, at a particularly fast rate during the Great Depression.

Taken together, these results suggest that there were significant industrial, regional, and gender divides in interwar unemployment that are obscured by national unemployment trends. The employment downturn between the wars was thus intricately linked with the larger structural transformation of the British economy.


Meredith M. Paker

meredith.paker@nuffield.ox.ac.uk

Twitter: @mmpaker

Seeing like the Chinese imperial state: how many government employees did the empire need?

By Ziang Liu (LSE)

This blog is part of a series of New Researcher blogs.

The Qianlong Emperor’s Southern Inspection Tour, Scroll Six: Entering Suzhou along the Grand Canal. Available at Wikimedia Commons.

How many government employees do we need? This has always been a question for both politicians and the public. We often see debates about whether the government should hire more employees or cut their number, for many different reasons.

This was also a question for the Chinese imperial government centuries ago. As the Chinese state governed a vast territory with great cultural and socio-economic diversity, the size of government mattered not only for the empire’s fiscal position, but also for the effectiveness of its governance. My research finds that while a large-scale reduction in government expenditure may have short-term benefits in improving fiscal conditions, in the long term the lack of investment in administration may harm the state’s ability to govern.

Using the Chinese case, we can see how many employees the imperial central government counted as ‘sufficient’. How did it make this calculation? After all, a government has to know the ‘numbers’ before it can act.

Before the late sixteenth century, the Chinese central government had no clear account of how much was spent on its local governments. Only then, when the marketisation of China’s economy enabled the state to calculate the costs of its spending in silver currency, did the imperial central government begin to ‘see’ the previously unknown amount of local spending in a unified and legible form.

Consequently, my research finds that between the sixteenth and eighteenth centuries, the Chinese imperial central state significantly improved its fiscal circumstances at the expense of local finance. During roughly a century of fiscal pressure between the late sixteenth and late seventeenth centuries (see Figure A), the central government continuously expanded its income and cut local spending on government employees.

Eventually, at the turn of the eighteenth century, the central treasury’s annual income was roughly four to five times larger than the late sixteenth century level (see Figure B), and the accumulated fiscal surplus was in general one to two times greater than its annual budgetary income (see Figure C).

But what the central government left to the localities, in both manpower and funding, seems to have been too little to govern the empire. My research finds that, whether measured by the total number of government employees (see Figure D) or by employees per thousand population (see Figure E), the size of China’s local states shrank quite dramatically from the late sixteenth century.

In the sample regions, we find that in the eighteenth century only one or two government employees served every thousand local residents (Figure E). Meanwhile, records also show that salary payments for local government employees remained completely unchanged from the late seventeenth century.

My research therefore suggests that when the Chinese central state intervened in local finance, its intention was more to constrain than to rationalise it. Even in the eighteenth century, when the empire’s fiscal circumstances were unprecedentedly good, the central state did not consider increasing investment in local administration.

Given that China’s population grew steadily, from 100 million in the early seventeenth century to more than 300 million in the early nineteenth century, it is hard to believe that local governments of this size could govern effectively. What is more, because of the reductions in local finance, from the late seventeenth century Chinese local states kept more personnel for state logistics and information networks than for local public services such as education and security.

Spain’s tourism boom and the social mobility of migrant workers

By José Antonio García Barrero (University of Barcelona)

This blog is part of a series of New Researcher blogs.

Menorca, Balearic Islands, Spain. Available at Wikimedia Commons.

My research, which is based on a new database of the labour force in Spain’s tourism industry, analyses the assimilation of internal migrants in the Balearic Islands during the tourism boom between 1959 and 1973.

I show that tourism provided a context for upward social mobility for natives and migrants alike, but the extent of upward mobility was uneven across groups. While natives, foreigners and internal urban migrants achieved significant upward mobility, the majority of migrants found it harder to advance. The transferability of their human capital to the service economy and the characteristics of their migratory flows determined the extent of migrants’ labour market attainment.

The tourism boom was one of the main arenas of Spain’s modernisation in the twentieth century. Between 1959 and 1973, the country became one of the top tourist economies of the world, driving a rapid and intense transformation of the demography and landscape of the coastal regions of the peninsula and the archipelagos.

The increasing demand for tourism services from West European societies triggered the massive arrival of tourists to the country. In 1959, four million tourists visited Spain; by 1973, the country hosted 31 million visitors. The epicentre of this phenomenon was the Balearic Islands.

In the Balearics, a profound transformation took place. In little more than a decade, the capacity of the tourism industry skyrocketed from 215 to 1,534 hotels and pensions, and from 11,496 to 216,113 hotel beds. Between 1950 and 1981, the number of Spanish-born residents from outside the Balearics increased from 33,000 to 150,000, attracted by the high labour demand for tourism services. In 1950, they accounted for 9% of the total population; by 1981, that share had reached 34.4%.

In my research, I analyse whether the internal migrants who arrived in the archipelago – mostly seasonal migrants from stagnant agrarian areas of southern Spain who ended up becoming permanent residents – were able to take advantage of the rapid and profound transformation of the tourism industry. Rather than focusing on the movement from agrarian to service activities, my interest is in the possibilities for upward mobility in the host society.

I use a new database of the workforce, both men and women, in the tourism industry, comprising a total of 10,520 observations with a wide range of personal, professional and business data for each individual up to 1970. The features of this data make it possible to analyse the careers of these workers in the emerging service industry by cohort characteristics, including variables such as gender, place of birth, language skills or firm, among others. Using these variables, I examine the likelihood of belonging to four income categories.
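The estimator is not named in the blog; for a four-category, ordered income outcome a natural choice is an ordered logit, sketched below with invented data and covariates drawn from the variables listed above:

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Toy worker-level data; covariate names are hypothetical.
df = pd.DataFrame({
    "income_band": pd.Categorical([0, 1, 3, 2, 1, 0, 3, 2, 1, 2],
                                  categories=[0, 1, 2, 3], ordered=True),
    "male":                    [1, 1, 0, 1, 0, 0, 1, 1, 0, 1],
    "urban_origin":            [0, 1, 1, 0, 0, 0, 1, 1, 0, 1],
    "speaks_foreign_language": [0, 1, 1, 0, 1, 0, 1, 0, 0, 1],
})

# Ordered logit of income band on worker characteristics.
model = OrderedModel(
    df["income_band"],
    df[["male", "urban_origin", "speaks_foreign_language"]],
    distr="logit",
).fit(method="bfgs", disp=False)
print(model.summary())
```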

My results suggest that the tourism explosion opened significant opportunities for upward labour mobility. Achieving high-income jobs was possible for workers in hospitality and tourism-related activities. But those who took most advantage of this scenario were mainly male natives, urban migrants from northern Spain (mainly Catalonia) and, especially, workers from other European countries with clear advantages in language skills.

For natives, human and social capital made the difference. For migrants, self-selection and the transferability of skills from cities to the new leisure economies were decisive.

Likewise, despite lagging behind, migrants from rural areas of southern Spain were able to achieve some upward mobility, progressively though never completely narrowing the gap with natives. Acquiring human capital through learning-by-doing, and forming networks of support and information with migrants from the same areas, increased the chances of improvement. Years of experience, knowing where to find job opportunities and holding personal contacts in firms were important assets.

In that sense, how migrants arrived in the archipelago mattered. Those more exposed to seasonal flows had less capacity for upward mobility, since they were recruited in their place of origin rather than through migrant networks, or returned home at the end of each season.

In comparison, those who relied on migratory networks and remained as residents in the archipelago had a greater chance of getting better jobs and reducing their socio-economic distance from the natives.

Colonialism, institutional quality and the resource curse

By Jubril Animashaun (University of Manchester)

This blog is part of a series of New Researcher blogs.

Why are so many oil-rich countries characterised by slow economic growth and corruption? Are they cursed by the resource endowment per se or is it the mismanagement of oil wealth? We used to think that it is mostly the latter. These days, however, we know that it is far more complicated than that: institutional reform is challenging because institutions are multifaceted and path-dependent.

A primary objective of European colonialism was to expand the economic base of the home country through the imposition of institutions that favoured rent-seeking in the colony. If inherited, such structures can constitute a significant reason for the resource curse and why post-colonial institutional reform is hard. Following this argument, post-colonial groups that benefitted from the institutional system may be able to reproduce this system after independence.

Our study finds support for this argument in oil-rich countries. It suggests that the enduring legacy of sixteenth to nineteenth century European colonial practices remains an obstacle to institutional reform in these countries today.

We come to this conclusion by investigating changes in economic development over the period 1960-2015 in 69 countries. Our results show that the variation in economic development over this period can be explained to a large extent by institutional quality, oil abundance and their interaction. Our findings are unchanged after controlling for countries that became independent after 1960 (many former Portuguese colonies are in this category).

In our study, we classify an oil-rich country as having colonial experience if it had European colonial settlement (captured, for example, by settler mortality records) and/or if one of the colonial European languages (English, French, Spanish, etc.) persists as the official post-independence language. The persistence of the colonial language helps to distinguish colonies by the depth of colonial economic engagement.

We further capture colonialism with a dummy variable to reduce the measurement error in both the settler mortality and language measures. Institutions are measured as the unweighted average of executive constraints, expropriation risk and government effectiveness (an institutional quality index).
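As a concrete illustration of this setup (with invented data; the variable names are ours, not the paper's), the index construction and the interaction regression described above might be sketched as:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy country-level data (values invented for illustration).
df = pd.DataFrame({
    "development":        [1.2, 3.5, 0.4, 2.8, 1.9, 4.1, 0.9, 3.0],
    "exec_constraints":   [3, 6, 2, 5, 4, 7, 2, 6],
    "expropriation_risk": [4, 7, 3, 6, 5, 8, 3, 7],
    "gov_effectiveness":  [3, 6, 2, 6, 4, 7, 3, 6],
    "oil_abundance":      [0.8, 0.2, 0.9, 0.3, 0.6, 0.1, 0.7, 0.2],
})

# Unweighted average of the three components, as described in the text.
df["inst_quality"] = df[["exec_constraints", "expropriation_risk",
                         "gov_effectiveness"]].mean(axis=1)

# Development on institutions, oil abundance, and their interaction.
model = smf.ols("development ~ inst_quality * oil_abundance", data=df).fit()
print(model.params)
```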

Figure 1: Log of settler mortality on institutional quality in oil and gas-rich countries that were former European colonies

To validate our result, it is important in this kind of research to distinguish the impact of colonial legacy from pre-colonial conditions in the colonised states. Places with sophisticated technologies could have resisted colonial occupation, and such historical technologies may also have persistent long-term effects. Because our sample comprises countries with giant oil discoveries, and because oil discoveries did not drive sixteenth to nineteenth century European colonialism, our findings rule out such backdoor effects of colonial and pre-colonial conditions on current performance.

Figure 2: Log illiteracy and experience of colonialism in oil-rich countries with control for log GDP and population

We find a significant gap in illiteracy between colonised and non-colonised countries. We also find that countries with a colonial heritage have less trust. We suggest that, to reverse the resource curse, higher priority should be placed on investment in human capital and education. These will boost citizens’ ability to demand accountability and good governance from elected officials and improve the quality of civic engagement with institutional reform.

Figure 3: Social trust index and the experience of colonialism

Famine, institutions, and indentured migration in colonial India

By Ashish Aggarwal (University of Warwick)

This blog is part of a series of New Researcher blogs.

 

Women fetching water in India in the late 19th century. Available at Wikimedia Commons.

A large share of the working population in developing countries is still engaged in agriculture. In India, for instance, over 40% of the employed population works in the agricultural sector and nearly three-quarters of households depend on rural incomes (World Bank[1]). In addition, agriculture in developing countries suffers from low investment, forcing workers to rely on natural water sources for irrigation rather than perennial man-made ones. Gadgil and Gadgil (2006) study the agricultural sector in India during 1951-2003 and find that, despite the decline in agriculture’s share of India’s GDP, severe droughts still reduced GDP by 2-5%. In such a context, any unanticipated deviation of rainfall from normal is bound to have adverse effects on productivity and, consequently, on workers’ incomes. In this paper, I study whether workers adopt migration as a coping strategy in response to the income risks arising from negative shocks to agriculture, and whether local institutions facilitate or hinder the use of this strategy. In a nutshell, the answers are yes and yes.

I study these questions in the context of indentured migration from colonial India to several British colonies. The abolition of slavery in the 1830s created demand for new sources of labour to work the plantations in the colonies. Starting with the “great experiment” in Mauritius (Carter, 1993), over a million Indians became indentured migrants, with Mauritius, British Guyana, Natal, and Trinidad the major destinations. Indentured migration from India was a system of voluntary migration: passages were paid for, and migrants earned fixed wages and rations. The exact terms varied across colonies, but contracts generally ran for five years, and after ten years of residency in the colony a paid-for return passage was also available.

Using a unique dataset on annual district-level outflows of indentured migrants from colonial India to several British colonies over the period 1860-1912, I find that famines increased indentured emigration. However, this effect varied with the land-revenue system established by the British. Using the year a district was annexed by the British to construct an instrument for the land-revenue system (Banerjee and Iyer, 2005), I find that emigration responded less to famines in British districts where landlords collected the revenue (as opposed to places where the individual cultivator was responsible for revenue payments). I also find this to be the case in Princely States. However, the reasons for these results are markedly different. Qualitative evidence suggests that landlords were unlikely to grant remissions to their tenants; this increased tenant debt, preventing them from migrating. Interlinked transactions and a general fear of the landlords prevented tenants from defaulting on their debts. Such coercion was not witnessed in areas where landlords were not the revenue collectors, making it easier for people to migrate in times of distress. In Princely States, by contrast, local rulers adopted liberal relief measures during famine years. These findings are robust to various placebo and robustness checks, and are in line with Persaud (2019), who shows that people engaged in indentured migration to escape local price volatility.
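The instrumental-variable step can be sketched as follows, using the linearmodels package on invented district-level data (a simplified illustration; the paper's actual specification is richer):

```python
import pandas as pd
from linearmodels.iv import IV2SLS

# Toy district-level data (values invented for illustration).
df = pd.DataFrame({
    "emigration":      [2.1, 0.4, 3.0, 0.8, 2.5, 0.5, 1.8, 0.7],
    "famine":          [1, 1, 1, 1, 0, 0, 1, 0],
    "landlord_system": [0, 1, 0, 1, 0, 1, 1, 0],  # landlord-based revenue collection
    "annexed_1820_56": [0, 1, 0, 1, 1, 1, 1, 0],  # instrument: timing of annexation
})
df["const"] = 1

# Two-stage least squares: the land-revenue system is instrumented by the
# period of British annexation, in the spirit of Banerjee and Iyer (2005).
iv = IV2SLS(
    dependent=df["emigration"],
    exog=df[["const", "famine"]],
    endog=df["landlord_system"],
    instruments=df["annexed_1820_56"],
).fit()
print(iv.params)
```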

 

[1] https://www.worldbank.org/en/news/feature/2012/05/17/india-agriculture-issues-priorities

 

References

Banerjee, Abhijit, and Lakshmi Iyer (2005): “History, Institutions, and Economic Performance: The Legacy of Colonial Land Tenure Systems in India”, American Economic Review, Vol. 95, No. 4, pp. 1190-1213.

Carter, Marina (1993): “The Transition from Slave to Indentured Labour in Mauritius”, Slavery and Abolition, Vol. 14, No. 1, pp. 114-130.

Gadgil, Sulochana, and Siddhartha Gadgil (2006): “The Indian Monsoon, GDP and Agriculture”, Economic and Political Weekly, Vol. 41, No. 47, pp. 4887-4895.

Persaud, Alexander (2019): “Escaping Local Risk by Entering Indentureship: Evidence from Nineteenth-Century Indian Migration”, Journal of Economic History, Vol. 79, No. 2, pp. 447-476.


Strangling Speculation: The Effects of the 1903 Viennese Futures Trading Ban

By Laura Wurm (Queen’s University Belfast)

This blog is part of a series of New Researcher blogs.

 

Farmland in Dalat, Vietnam. Available at Wikimedia Commons.

 

Ever since the emergence of futures markets and speculation, the effects of futures trading on spot price volatility have been subject to intense debate. While populist discourse asserts that futures trading disturbs prices, scholars stress the risk-allocation and information-transmission functions of futures for spot markets, which are essential for pricing cash transactions. My research tests whether these volatility-lowering effects of futures trading on the cash market hold true by asking the opposite question: what happens when futures trading no longer exists?

To do so, I go back to the early twentieth century, when futures trading in the Viennese grain market was banned permanently, unlike at other trading locations at the time, such as Germany, England, or Texas. The 1903 parliament-enacted prohibition of futures trading was the consequence of an aversion to speculators, who were blamed for “never having held actual grain in their hands”. Putting an end to the vibrant futures market of the Agricultural Products Exchange, the city’s gathering place for farmers, millers, large-scale customers, and speculators, was thought to be the last resort to curb undue speculation. Futures trading has not resumed to the present day. The uniqueness of this ban makes it a well-suited natural experiment for testing the effects of futures trading, and of its abolition, on spot price volatility. Prices from the Budapest Stock and Commodity Exchange, which was not affected by the ban, are used as a synthetic control. The Budapest market, as part of the Austro-Hungarian Empire, operated under similar legal, economic and geographic conditions, and was, besides Vienna, the only Austro-Hungarian market offering trade in futures.

My project examines the information transmission function of futures to spot markets and finds heightened spot price volatility in Vienna and lower accuracy in pricing cash transactions after futures trading was banned. The intra-day variation of spot prices increased after the ban: without futures trading, the Viennese market lacked pricing accuracy and efficiency. The effect on volatility holds when using a commodity traded exclusively on the Viennese spot market as a control. In addition, an assessment of Granger causality shows that information flowed between the futures and spot markets of the two cities before the ban, consistent with the information transmission function of futures towards cash markets and the close ties between the two markets. After futures trading was prohibited in Vienna, Budapest futures prices with three to six months’ maturity continued to significantly Granger-cause Viennese spot prices.
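As an illustration of the Granger-causality step, here is a minimal sketch with simulated series using statsmodels (the actual analysis uses the historical Vienna and Budapest price series):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated stand-ins: Budapest futures lead Viennese spot prices by one period.
rng = np.random.default_rng(0)
budapest_futures = rng.normal(size=200)
vienna_spot = 0.8 * np.r_[0.0, budapest_futures[:-1]] \
    + rng.normal(scale=0.5, size=200)

data = pd.DataFrame({"vienna_spot": vienna_spot,
                     "budapest_futures": budapest_futures})

# Test whether Budapest futures Granger-cause Viennese spot prices;
# the second column is the candidate cause of the first.
results = grangercausalitytests(data[["vienna_spot", "budapest_futures"]],
                                maxlag=4)
```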



Laura Wurm

lwurm01@qub.ac.uk