Independent Women: Investing in British Railways, 1870-1922

by Graeme Acheson (University of Strathclyde Business School), Aine Gallagher, Gareth Campbell, and John D. Turner (Queen’s University Centre for Economic History)

The full article on which this blog post is based has been published in The Economic History Review and is currently available on Early View here.

Women have a long tradition of investing in financial instruments, and scholars have recently documented the rise of female shareholders in nineteenth-century Britain, the United States, Australia, and Europe. However, we know very little about how this progressed into the twentieth century, and whether women shareholders over a century ago behaved differently from their male counterparts. To address this, we turn to the shareholder constituencies of railways, which were the largest public companies a century ago.

Figure 1. Illustration of a female investor reading the ticker tape in the early twentieth century. Source: the authors

Railway companies in the UK popularised equity investment among the middle classes; they had been a major investment asset since the first railway boom of the mid-1830s. At the start of the 1900s, British railways made up about half of the market value of all domestic equity listed in the UK, and they constituted 49 of the 100 largest companies on the British stock market in 1911. The railways, therefore, make an interesting case through which to examine women investors. Detailed railway shareholder records, comparable to those for other sectors, have generally not been preserved. However, we have found Railway Shareholder Address Books for six of the largest railway companies between 1915 and 1922. We have supplemented these with several address books for these companies back to 1870, and have analysed the Shareholder Register for the Great Western Railway (GWR) from 1843, to place the latter period in context.

An analysis of these shareholder address books reveals the growing importance of women shareholders from 1843, when they made up about 11 per cent of the GWR shareholder base, to 1920, when they constituted about 40 per cent of primary shareholders. By the early twentieth century, women represented 30 to 40 per cent of shareholders in each railway company in our sample, which is in line with estimates of the number of women investing in other companies at this time (Rutterford, Green, Maltby and Owens, 2011). This implies that women were playing an important role in financial markets in the early twentieth century.

Although women were becoming increasingly prevalent in shareholder constituencies, we know little about whether they responded to changing social perceptions and the increasing availability of financial information by making informed investment decisions, or whether they were influenced by male relatives. To examine this, we focus on joint shareholdings, where people would invest together rather than buying shares on their own. This practice was extremely common, and from our data we are able to analyse the differences between solo shareholders, lead joint shareholders (i.e., individuals who owned shares with others and held the voting rights), and secondary joint shareholders (i.e., individuals who owned shares with others but did not hold the voting rights).

We find that women were much more likely to be solo shareholders than men, with 70 to 80 per cent of women investing on their own, compared to just 30 to 40 per cent of men. When women participated in joint shareholdings, there was no discernible difference as to whether they were the lead shareholder or the secondary shareholder, whereas the majority of men took up a secondary position. When women participated as a secondary shareholder, the lead was usually not a male relative. These findings are strong evidence that women shareholders were acting independently by choosing to take on the sole risks and rewards of share ownership when making their investments. 

We then analyse how the interaction between gender and joint shareholdings affected investment decisions. We begin by examining differences in terms of local versus arm’s-length investment, using geospatial analysis to calculate the distance between each shareholder’s address and the nearest station of the railway they had invested in. We find that women were more likely than men, and solo investors more likely than joint shareholders, to invest locally. This suggests that men may have used joint investments as a way of reducing the risks of investing at a distance. In contrast, women preferred to maintain their independence, even if this meant focusing more on local investments.
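As a purely illustrative sketch of the kind of distance calculation such a geospatial analysis involves (not the code used in the article, and with hypothetical coordinates), one could compute great-circle distances between geocoded addresses and stations as follows:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometres between two points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth's mean radius is roughly 6,371 km

def distance_to_nearest_station(address, stations):
    """Distance from a geocoded shareholder address to the nearest station."""
    return min(haversine_km(*address, *station) for station in stations)

# Hypothetical example: a shareholder in York and two stations of 'their' railway
shareholder = (53.96, -1.09)
stations = [(53.48, -2.24), (53.96, -1.08)]
print(f"{distance_to_nearest_station(shareholder, stations):.1f} km")
```

Local versus arm’s-length investment can then be defined by applying a distance threshold to the resulting figures.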

We then examine the extent to which women and men invested across different railways. In the modern era, it is common to adopt a value-weighted portfolio which is most heavily concentrated in larger companies. As three of our sample companies were amongst the six largest companies of their era and a further two were in the top twenty-five, we would, a priori, expect to see some overlap of shareholders investing in different railways if they adopted this approach to diversification. From our analysis, we find that male and joint shareholders were more likely than female and solo shareholders to hold multiple railway stocks. This could imply that men were using joint shareholdings as a means of increasing diversification. In contrast, women may have been prioritising independence, even if it meant being less diversified.
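For readers unfamiliar with the term, a value-weighted portfolio assigns each company a weight proportional to its market capitalisation:

$$w_i = \frac{MC_i}{\sum_{j=1}^{N} MC_j},$$

so an investor who diversified in this way would hold the largest railways most heavily, and the same large companies would recur across many portfolios.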

We also consider whether there were differences in how long each type of shareholder held onto their shares, because modern studies suggest that women are much less likely than men to trade their shares. We find that only a minority of shareholders maintained a long-run buy-and-hold strategy, with little suggestion that this differed on the basis of gender or of joint versus solo shareholding. This implies that our findings are not driven by a cohort effect, and that the increasing numbers of women shareholders consciously chose to invest independently.

To contact the authors:

Graeme Acheson, graeme.acheson@strath.ac.uk

Aine Gallagher, Aine.Gallagher@qub.ac.uk

Gareth Campbell, gareth.campbell@qub.ac.uk

John D. Turner, j.turner@qub.ac.uk

Taxation and the stagnation of cotton exports in Brazil, 1800 – 1860

by Thales Zamberlan Pereira (Getúlio Vargas Foundation, São Paulo School of Economics)

Port of Pernambuco. Emil Bauch, 1852. Brasiliana Iconográfica.

Brazil supplied 40 per cent of Liverpool’s cotton imports during the last decade of the eighteenth century (Krichtal 2013). During the first half of the nineteenth century, however, cotton exports stagnated, and Brazil became the only major international cotton producer whose exports to European countries declined. There is no general agreement on the reasons for this decline in production, which occurred despite increasing international demand during the nineteenth century. Scholars have attributed the decline to high transport costs, competition from sugar and coffee plantations for slaves, and Dutch disease arising from the increase in coffee exports, among other factors (Leff 1972; Stein 1979; Canabrava 2011). The disagreement persists partly because previous research relies largely on data from after 1850, by which time the decline of cotton plantations in Brazil had already occurred.

In a new paper, I argue that cotton profitability was restricted by the fiscal policy implemented by the Portuguese (and, later, Brazilian) government after 1808. To make this argument, I first establish new facts about the timing of the decline of Brazilian cotton. Specifically, using new data on cotton productivity for the period 1800-1860, I show that Brazil’s stagnation began in the first decades of the nineteenth century.

This timing rules out several candidate explanations. The decline took place before the United States managed to increase its productivity in cotton production and become the world’s leading exporter (Olmstead and Rhode 2008). Cotton regions in Brazil had neither a labour supply problem nor a Dutch disease phenomenon during the early nineteenth century (Pereira 2018). The new evidence also suggests that external factors, such as declining international prices or maritime transport costs, were not responsible for the stagnation of cotton exports in Brazil. As with any other commodity at the time, falls in international prices would have had to be offset by increases in productivity. In fact, Figure 1 shows that Brazilian cotton prices were competitive in Liverpool. Among the staples presented in the figure, the standard cotton from the provinces of Pernambuco and Maranhão was of higher quality than that from New Orleans and Georgia (and hence achieved higher prices), while “Maranhão saw-ginned”, which achieved similar prices, used the same seeds as those planted in the US.

Figure 1: Cotton prices in Liverpool 1825 – 1850
Source: Liverpool Mercury and The Times newspapers

So, what caused the stagnation of cotton exports in Brazil? I argue that the fiscal policy implemented by the Portuguese government after 1808 restricted cotton profitability. High export taxes, whose funds were transferred to Rio de Janeiro, explain the ‘profitability paradox’ that British consuls in Brazil reported at the time. They remarked that even in periods with high prices and foreign demand, Brazilian planters had limited profitability. Favourable market conditions after the Napoleonic wars allowed production in Brazil to continue growing at least until the early 1830s.

Figure 2 shows that when international prices started to decline after 1835, cotton was no longer a profitable crop in many Brazilian regions. This was especially pronounced for regions whose plantations were far from the coast and which therefore had to pay higher transport costs on top of the export tax. To show that the tax burden reduced profitability, I calculate an “optimal tax rate”, which maximised government revenue, and an “effective tax rate”, the amount that exporters actually paid. Figure 2 illustrates that, while the statutory tax rate was low, the effective tax rate for cotton producers was significantly higher than the optimal tax rate after 1835.
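The intuition behind a revenue-maximising rate can be seen in a stylised Laffer-curve calculation. The sketch below is my own illustration, not the model estimated in the paper: it assumes, purely for exposition, that export supply responds to the after-tax producer price with a constant (hypothetical) elasticity.

```python
import numpy as np

# Stylised Laffer curve: an ad valorem export tax t leaves producers the
# fraction (1 - t) of the world price, and supply responds with an assumed
# constant elasticity eps to that producer price.
eps = 4.0                        # hypothetical supply elasticity
t = np.linspace(0.0, 0.99, 1000)
revenue = t * (1 - t) ** eps     # tax revenue, up to a constant scale factor

t_star = t[np.argmax(revenue)]
print(f"revenue-maximising tax rate: {t_star:.2f}")  # analytically 1/(1+eps) = 0.20
# An effective rate above t_star raises less revenue while squeezing producer
# margins even harder, which is the situation Figure 2 depicts after 1835.
```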

Figure 2 – Rate of cotton export tariffs, 1809-1850.

Facing lower prices, cotton producers in Brazil could have shifted production to the varieties of cotton produced in the United States, which had higher productivity and were in increasing demand in British markets. As presented in Figure 1, some regions in Brazil tried to follow this route (with saw-ginned cotton in Maranhão), but this type of production was not profitable with an export tax that reached 20 per cent. Brazil, therefore, was stuck in the market for long-staple cotton, for which demand remained relatively stable during the nineteenth century. Regions that could not produce long-staple cotton practically abandoned production.

The results not only provide insight into the cotton decline; the paper also contributes to a better understanding of the roots of regional inequality in Brazil and of the political economy of taxation. Cotton production before 1850 was concentrated in the northeast region, which continues to lag in economic conditions to this day. As I argue in the paper, the export taxes implemented after 1808 largely targeted commodities from the northeast, while commodities from southeastern regions, such as coffee, paid lower tax rates. Parliamentary debates at the time show that cotton producers in the northeast did demand tax reform. Their demands, however, were not met quickly enough to prevent Brazilian cotton plantations from being priced out of the international market.

To contact the author: thales.pereira@fgv.br

References:

Canabrava, Alice P. 2011. O Desenvolvimento Da Cultura Do Algodão Na Província de São Paulo, 1861-1875. São Paulo: EDUSP.

Krichtal, Alexey. 2013. “Liverpool and the Raw Cotton Trade: A Study of the Port and Its Merchant Community, 1770-1815.” Victoria University of Wellington.

Leff, Nathaniel H. 1972. “Economic Development and Regional Inequality: Origins of the Brazilian Case.” The Quarterly Journal of Economics 86 (2): 243–62. https://doi.org/10.2307/1880562.

Olmstead, Alan L., and Paul W. Rhode. 2008. “Biological Innovation and Productivity Growth in the Antebellum Cotton Economy.” The Journal of Economic History 68 (04): 1123–1171. https://doi.org/10.1017/S0022050708000831.

Pereira, Thales A. Zamberlan. 2018. “Poor Man’s Crop? Slavery in Cotton Regions in Brazil (1800-1850).” Estudos Econômicos (São Paulo) 48 (4).

Stein, Stanley J. 1979. Origens e evolução da indústria têxtil no Brasil: 1850-1950. Rio de Janeiro: Editora Campus.

The labour market causes and consequences of general purpose technological progress: evidence from steam engines

by Leonardo Ridolfi (University of Siena), Mara Squicciarini (Bocconi University), and Jacob Weisdorf (Sapienza University of Rome)

Steam locomotive running gear. Available at Wikimedia Commons.

Should workers fear technical innovations? Economists have not provided a clear answer to this perennial question. Some believe machines enable ‘one man to do the work of many’: mechanisation will generate cheaper goods, more consumer spending, increased labour demand and thus more jobs. Others worry that automation will be labour-cheapening, making workers – especially unskilled ones – redundant, and so result in increased unemployment and growing income inequality.

Our research seeks answers from the historical account. We focus on the first Industrial Revolution, when technical innovations became a key component of the production process.

The common understanding is that mechanisation during the early phases of industrialisation allowed firms to replace skilled with unskilled male workers (new technology was deskilling), and also to replace male workers with less expensive female and child labourers. Much of this understanding is inspired by the Luddite movement – bands of nineteenth-century workers who destroyed early industrial machinery that they believed was threatening their jobs.

To test these hypotheses, we investigate one of the major technological advancements in human history: the rise and spread of steam engines.

Nineteenth-century France provides an exemplary setting in which to explore the effects. French historical statistics are extraordinarily detailed, and the first two national industry-level censuses – one from the 1840s, when steam power was just beginning to spread, and one from the 1860s, when steam engines were more common – help us to observe the labour market conditions that led to the adoption of steam engines, as well as the effects of the new technology on the demand for male, female and child labour, and on their wages.

Consistent with the argument that steam technology emerged for labour-cheapening purposes, our analysis shows that the adoption of steam technology was significantly higher in districts (arrondissements) where:

  1. industrial labour productivity was low, so that capital-deepening could serve to improve output per worker;
  2. the number of workers was high, so the potential for cutting labour costs by replacing them with machines was large;
  3. the share of male workers was high, so the potential for cutting labour costs by shifting towards women and children was large; and
  4. steam engines had already been installed in other industries, thus lowering the costs of adopting the new technology.

We also find, however, that steam technology, once adopted, was neither labour-saving nor skill-saving. Steam-powered industries did use higher shares of (cheaper) female and child workers than non-steam-powered industries. At the same time, though, since steam-operating industries employed considerably more workers in total, they ended up also using more male workers – not just more women and children.

We also find that steam-powered industries paid significantly higher wages, both to men and women. In contrast with the traditional narrative of early industrial technologies being deskilling, this result provides novel empirical evidence that steam-use was instead skill-demanding.

Although workers seemed to have gained from the introduction of steam technology, both in terms of employment and payment opportunities, our findings show that labour’s share was lower in steam-run industries. This motivates Engels-Marx-Piketty-inspired concerns that advancing technology leaves workers with a shrinking share of output.

Our findings thus highlight the multi-sided effects of adopting general-purpose technological progress. On the positive side, the steam engine prompted higher wages and a growing demand for both male and female workers. On the negative side, steam-powered industries relied more heavily on child labour and also placed a larger share of output in the hands of capitalists.

The German bank-growth nexus revisited: Savings banks and economic growth in Prussia

by Sibylle Lehmann-Hasemeyer and Fabian Wahl (University of Hohenheim)

The full article on which this blog post is based was published in The Economic History Review and is now available on Early View at this link.

The German banking system is often considered a key factor in German industrialisation. For Alexander Gerschenkron, Germany’s experience can serve as a role model for other moderately backward economies: governments could trigger economic development by supporting the establishment of modern financial institutions, such as universal banks, which were typical of the German banking system and which mobilised savings, reduced risks for investors, and improved the allocation of resources. Such activities ease the trading of goods and services and foster technological innovation.

Scholarly discussion of the banking-growth nexus in Germany has focused on universal banks without giving significant attention to other forms of banking. It is surprising that earlier research has ascribed to savings banks only a limited role in industrialisation: by 1913, they held 24.8 per cent of the total assets of all German financial institutions and ranked first among all bank types for net investment. Moreover, savings banks had the advantage of being public institutions. Because they were not profit-driven, they could focus on long-term projects with high social returns.

Our study revisits the banking-growth nexus by focusing on savings banks in 978 Prussian cities. We find a positive and significant relationship between the establishment of savings banks and both city growth and the number of steam engines per factory in the nineteenth century (1854-75). Previous research has studied the impact of savings banks either at a highly aggregated level or qualitatively, through case studies. This study is the first to provide quantitative evidence on the local impact of savings banks during the early nineteenth century.

To address potential endogeneity, we exploit a decree issued in 1854 by the Minister for Trade and Commerce. The decree promoted a more even distribution of savings banks, because it demanded the founding of at least one savings bank per county. It further encouraged poorer local authorities to found savings banks by offering institutional and financial support. Following this decree, a wave of savings banks was established across a much wider geographical area than before. In 1849, savings banks were present in about half of the counties; by 1864, this had risen to nearly 95 per cent.

We also observe a significant pre-growth trend in the periods before the founding of a savings bank in a city. There is no such trend, however, after 1854 (Figure 1). The savings banks founded during this wave were often established in smaller cities that might not have been able to afford them without support. Thus, the decree can be seen as a public policy to promote the establishment of public financial infrastructure in remote regions.
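For readers interested in the mechanics of such a pre-trend check, an event-study regression is one standard way to implement it. The sketch below is schematic only; the data file, variable names and specification are hypothetical, not those of the article:

```python
import pandas as pd
import statsmodels.formula.api as smf

# City-by-period panel with growth rates and event time (years relative to
# the founding of the local savings bank, binned); entirely hypothetical names.
panel = pd.read_csv("prussian_cities_panel.csv")

# Event-study: growth in each event-time bin relative to the period just
# before founding (event_time == -1), with city and period fixed effects.
model = smf.ols(
    "pop_growth ~ C(event_time, Treatment(reference=-1)) + C(city) + C(period)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["city"]})
print(model.summary())  # pre-founding coefficients near zero => no pre-trend
```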

Figure 1. The growth of savings banks in Prussia, c.1800-1870. Source: as per article

Although we cannot perfectly solve the endogeneity issue, the regression results suggest that savings banks promoted city growth. This is all the more plausible when considering that Germany’s industrialisation was based not only on larger, multinational firms and coal resources, but also on good public infrastructure, a competitive schooling system and, in particular, small and medium-sized firms, which were the backbone of German industry. The resulting economic landscape persists today.

Our study contributes to the understanding of why Germany industrialised by analysing a neglected aspect of the relationship between banks and growth. Earlier research, which focused on the impact of large universal banks and stock markets at the end of the nineteenth century, largely overlooked the impact of savings banks and the potential benefit of a decentralised financial system. The evidence that we provide clearly shows that there is a considerable gap in the literature on savings banks and how they actually contributed to economic growth.

To contact the authors:

Sibylle Lehmann-Hasemeyer, sibylle.lehmann@uni-hohenheim.de

Fabian Wahl, fabian.wahl@uni-hohenheim.de

Sex ratios and missing girls in nineteenth century Europe

By Francisco J. Beltrán Tapia (Norwegian University of Science and Technology)

This blog is part of our EHS Annual Conference 2020 Blog Series.

The flying girl. Available at Wikimedia Commons.

Gender discrimination – in the form of sex-selective abortion, female infanticide and the mortal neglect of young girls – constitutes a pervasive feature of many contemporary developing countries, especially in South and East Asia. Son preference stems from economic and cultural factors that have long influenced the perceived relative value of women in these regions and resulted in millions of ‘missing girls’.

But were there ‘missing girls’ in historical Europe? Although the conventional narrative argues that there is little evidence for this kind of behaviour (here), my research shows that this issue was much more important than previously thought, especially (but not exclusively) in Southern and Eastern Europe.

It should be noted first that historical sex ratios cannot be compared directly to modern ones. The biological survival advantage of girls was more visible in the high-mortality environments that characterised pre-industrial Europe, where boys suffered higher mortality rates both in utero and during infancy and early childhood. Historical infant and child sex ratios were therefore relatively low, even in the presence of gender-discriminatory practices.

This is illustrated in Figure 1 below, which plots the relationship between child sex ratios (the number of boys per 100 girls) and infant mortality rates using information from European countries between 1750 and 2001. In particular, in societies where infant mortality rates were around 250 deaths (per 1,000 live births), a gender-neutral child sex ratio should have been slightly below parity (around 99.5 boys per 100 girls).
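A back-of-the-envelope calculation (mine, purely illustrative) shows why high mortality pushes the benchmark below parity: roughly 105 boys are born per 100 girls, but excess male infant mortality erodes that surplus.

```python
# Illustrative only: assumed sex ratio at birth and male excess mortality.
births_boys, births_girls = 105.0, 100.0
imr_overall = 250 / 1000   # infant deaths per live birth, as in the text
male_excess = 1.2          # hypothetical male/female infant mortality ratio

# Split the overall IMR so that its birth-weighted average stays at 250/1,000
imr_girls = imr_overall * (births_boys + births_girls) / (
    births_boys * male_excess + births_girls
)
imr_boys = imr_girls * male_excess

survivors_boys = births_boys * (1 - imr_boys)
survivors_girls = births_girls * (1 - imr_girls)
print(f"{100 * survivors_boys / survivors_girls:.1f} boys per 100 girls")  # ~98.8
```

Under these assumed numbers the surviving cohort shows roughly 99 boys per 100 girls, in the neighbourhood of the benchmark above; the benchmark itself is estimated from the historical data, not from this toy calculation.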

Figure 1: Infant mortality rates and child sex ratios in Europe, 1750-2001

Compared with this benchmark, infant and child sex ratios were abnormally high in some European regions (see Map 1 below), suggesting that some sort of gender discrimination was unduly increasing female mortality rates at those ages.

Interestingly, the observed differences in sex ratios are also visible throughout childhood. In fact, the evolution of sex ratios by age shows stark disparities across countries. Figure 2 shows how the number of boys per 100 girls changes as children grew older for a sample of countries, both in levels and in the observed trends.

In Bulgaria, Greece and France, for example, sex ratios increased with age, providing evidence that gender discrimination continued to increase female mortality rates as girls grew older. Importantly, the unbalanced sex ratios observed in some regions are not due to random noise, female under-registration or sex-specific migratory flows.

Likewise, although geography, climate and population density contributed to shaping infant and child sex ratios due to their impact on the disease environment, these factors cannot explain away the patterns of gender discrimination reported here.

Map 1: Child sex ratios in Europe, c.1880

Figure 2: Sex ratios by age in a sample of countries, c.1880

This evidence indicates that discriminatory practices with lethal consequences for girls constituted a veiled feature of our European past. But the actual nature of discrimination remains unclear and surely varies by region.

Excess female mortality was thus not necessarily the result of the ill treatment of young girls; it could have stemmed simply from an unequal allocation of resources within the household, a disadvantage that probably accumulated as girls grew older.

In contexts where infant and child mortality rates were high, even slight discrimination in the way young girls were fed or treated when ill, or in the amount of work entrusted to them, was likely to result in more girls dying from the combined effect of undernutrition and illness.

Although female infanticide or other extreme versions of mistreatment of young girls may not have been a systematic feature of historical Europe, this line of research would point to more passive, but pervasive, forms of gender discrimination that also resulted in a significant fraction of missing girls.

North & South in the 1660s and 1670s: new understanding of the long-run origins of wealth inequality in England

By Andrew Wareham (University of Roehampton)

This blog is part of a series of New Researcher blogs.

Maps of England circa 1670, Darbie 10 of 40. Available at Wikimedia Commons.

New research shows that before the industrial revolution many more houses in south-east England had more fireplaces than houses in the Midlands and northern England. When Mrs Gaskell wrote North and South, she reflected on a theme which was nearly two centuries old and which continues to divide England.

Since the 1960s, historians have wanted to use the Restoration hearth tax to provide a national survey of the distribution of population and wealth. But, for technical reasons, it has until now not been possible to move beyond city and county boundaries to make comparisons.

Hearth Tax Digital, arising from a partnership between the Centre for Hearth Tax Research (University of Roehampton, UK) and the Centre for Information Modelling (University of Graz, Austria), overcomes these technical barriers. This digital resource provides free access to the tax returns, with full transcription of the records and links to archival shelf marks and to locations by county and parish. Data on around 188,000 households in London and 15 cities/counties can be searched, with the capacity to download search queries into a databasket, and work on GIS mapping is in development.

In the 1660s and 1670s, after London, the West Riding of Yorkshire and Norfolk stand out as densely populated regions. The early stages of industrialization meant that Leeds, Sheffield, Doncaster and Halifax were overtaking the former leading towns of Hull, Malton and Beverley. But the empty landscapes of north and east Norfolk, enjoyed by holiday makers today, were also densely populated then.

The hearth tax was a nation-wide levy on domestic fireplaces, charged against every hearth in each property and collected twice a year, at Lady Day (March) and Michaelmas (September). In 1689, after 27 years, it was abolished in perpetuity in England and Wales, but it continued to be levied in Ireland until the early nineteenth century, and it was levied as a one-off tax in Scotland in 1691. Any property with three hearths or more was liable to pay the tax, while many properties with one or two hearths, such as those occupied by the ordinary poor, were exempt. (The destitute and those in receipt of poor relief were not included in the tax registers.) A family living in a home with one hearth had to use it for all their cooking, heating and leisure purposes, but properties with three or more hearths had at least one hearth in the kitchen, one in the parlour and one in an upstairs chamber.

In a substantial majority of parishes in northern England (County Durham, Westmorland, and the East and North Ridings of Yorkshire), fewer than 20 per cent of households had three hearths or more, and only in the West Riding was there a significant number of parishes where 30 per cent or more of households did. In southern England, by contrast, across Middlesex, Surrey, southern Essex, western Kent and a patchwork of parishes across Norfolk, it was common for at least a third of properties to have three hearths or more.

There are many local contrasts to explore further. South-east Norfolk and north-east Essex were notably more prosperous than north-west Essex, independent of the influence of London, and the patchwork pattern of wealth distribution in Norfolk around its market towns and prosperous villages is repeated in the Midlands. Nonetheless, the general pattern is clear enough: the distribution of population in the late seventeenth century was quite different from patterns found today, but Samuel Pepys and Daniel Defoe would have recognized a world in which south-east England abounded with the signs of prosperity and comfort in contrast to the north.

A century of wind power: why did it take so long to develop to utility scale?

by Mercedes Galíndez, University of Cambridge

This blog is based on research funded by a bursary from the Economic History Society. More information here

Marcellus Jacobs on a 2.5kW machine in the 1940s. Available at http://www.jacobswind.net/history

Seventeen years passed between Edison’s patenting of his revolutionary incandescent light bulb in 1880 and Poul la Cour’s first test of a wind turbine for generating electricity. Yet it would be another hundred years before wind power became an established industry in the 2000s. How can we explain the delay in harvesting one of the cheapest sources of electricity generation?

In the early twentieth century wind power emerged to fill the gaps of nascent electricity grids. This technology was first adopted in rural areas. The incentive was purely economic: the need for decentralised access to electricity. In this early stage there were no concerns about the environmental implications of wind power.

The Jacobs Wind Electricity Company delivered 30,000 three-blade wind turbines in the US between 1927 and 1957.[1] The basic mechanics of these units did not differ much from their modern counterparts. Once the standard electrical grid reached rural areas, however, the business case for wind power weakened. It soon became more economical to buy electricity from centralised utilities, which benefited from significant economies of scale.

It was not until the late 1970s that wind power became a potential substitute for electricity generated from fossil fuels or hydropower. The academic literature agrees on two main triggers for this change: the oil crises of the 1970s and the politicisation of climate change. When the price of oil quadrupled in 1973, rising to nearly US$12 per barrel, industrialised countries’ dependency on foreign oil producers was exposed. The reaction was to find new domestic sources of energy. Considerable effort was devoted to nuclear power, but technologies like wind power were also revived.

In the late 1980s climate change became more politicised, and interest in wind energy as a technology that could mitigate environmental damage was renewed. California’s governor, Jerry Brown, was aligned with these ideals, and in 1978, in a move ahead of its time, he provided extra tax incentives to renewable energy producers in his state.[2] This soon created a ‘California Wind Rush’, which saw both local and European turbine manufacturers burst onto the market, with US$1 billion invested in the region of Altamont Pass between 1981 and 1986.[3]

The California Wind Rush ended suddenly when central government support was withdrawn. However, the European Union (EU) accepted the challenge of maintaining the industry. In 2001, the EU introduced Directive 2001/77/EC for the promotion of renewable energy sources, which required Member States to set renewable energy targets.[4] Many directives followed, triggering renewable energy programmes throughout the EU. Following the first directive in 2001, the installed capacity of wind power in the EU increased thirteen-fold, from 13GW to 169GW in 2017.

Whilst there is no doubt that the EU regulatory framework played a key role in the development of wind power, other factors were also at play. Nicolas Rochon, a green investment manager, published a memoir in 2020 in which he argued that clean energy development was also enabled by a change in the investment community. As interest rates decreased during the first two decades of the twenty-first century, investment managers revised their expectations of future returns downwards, which fostered more attention to clean energy assets offering lower profitability. Growing competition in the sector reduced the price of electricity obtained from renewable energy.[5]

My research aims to understand the macroeconomic conditions that enabled wind power to develop to national scale: in particular, how wind power developers accessed capital, and how bankers and investors took a leap of faith to invest in the technology. It will draw on oral history interviews with subjects like Nicolas Rochon, who made financial decisions on wind power projects.

To contact the author:

Mercedes Galíndez (mg570@cam.ac.uk)


[1] Righter, Robert, Wind Energy in America: A History, Norman, University of Oklahoma Press, 1996, page 93

[2] Madrigal, Alexis. Powering the Dream: The History and Promise of Green Technology. Cambridge, MA: Da Capo Press, 2011, page 239

[3] Jones, Geoffrey. Profits and Sustainability. A History of Green Entrepreneurship. Oxford: Oxford University Press, 2017, page 330

[4] EU Directive 2001/77/EC

[5] Rochon, Nicolas, Ma transition énergétique 2005 – 2020, Les Papiers Verts, Paris, 2020

How Indian cottons steered British industrialisation

By Alka Raman (LSE)

This blog is part of a series of New Researcher blogs.

“Methods of Conveying Cotton in India to the Ports of Shipment,” from the Illustrated London News, 1861. Available at Wikimedia Commons.

Technological advancements within the British cotton industry have widely been acknowledged as the beginning of industrialisation in eighteenth- and nineteenth-century Britain. My research reveals that these advances were driven by a desire to match the quality of handmade cotton textiles from India.

I highlight how the introduction of Indian printed cottons into British markets created a frenzy of demand for these exotic goods. This led to immediate imitations by British textile manufacturers, keen to gain footholds in the domestic and world markets where Indian cottons were much desired.

The process of imitation soon revealed that British spinners could not spin the fine cotton yarn required to make by hand the fine cotton cloth needed for fine printing. And British printers could not print cloth in the multitude of colourfast colours that Indian artisans had mastered over centuries.

These two key limitations in British textile manufacturing spurred demand-induced technological innovations to match the quality of Indian handmade printed cottons.

To test this, I chart the quality of English cotton textiles from 1740 to 1820 and compare them with Indian cottons of the same period. Thread count per inch is used as the measure of quality, and digital microscopy is deployed to establish yarn composition, determining whether the textiles are all-cotton or mixed linen-cotton.

My findings show that the earliest British ‘cotton’ textiles were mixed linen-cottons and not all-cottons. Technological evolution in the British cotton industry was a pursuit first of coarse, yet all-cotton, cloth, and then of fine all-cotton cloth such as muslin.

The evidence shows that British cotton cloth quality improved by 60% between 1747 and 1782 during the decades of the famous inventions of James Hargreaves’ spinning jenny, Richard Arkwright’s waterframe and Samuel Crompton’s mule. It further improved by 24% between 1782 and 1816. Overall, cloth quality improved by a staggering 99% between 1747 and 1816.
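The overall figure is simply the two sub-period improvements compounded:

$$1.60 \times 1.24 = 1.984 \approx 1.99,$$

that is, a near-doubling of cloth quality; the exact 99% figure presumably reflects unrounded sub-period estimates.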

My research challenges our current understanding of industrialisation as a British and West European phenomenon, commonly explained using rationales such as high wages, availability of local energy sources or access to New World resources. Instead, it reveals that learning from material goods and knowledge brought into Britain and Europe from the East directly and substantially affected the foundations of the modern world as we know it.

The results also pose a more fundamental question: how does technological change take place? Based on my findings, learning from competitor products – especially imitation of novel goods using indigenous processes – may be identified as one crucial pathway for the creation of new ideas that shape technological change.

Workshop – Bullets and Banknotes: War Financing through the 20th Century

By David Foulk (University of Oxford)

This workshop brings together military historians, international relations researchers, and economic historians to explore the financing of conventional and irregular warfare.

Martial funds originated from a variety of legitimate and illegitimate sources. The former include direct provision by government and central banking activity; private donations also need to be considered, as they too have proven a viable means of financing paramilitary activity. Illegitimate sources, in the context of war, refer to an occupying force’s ability to extract economic and monetary resources, and include, for example, ‘patriotic hold-ups’ of financial institutions and spoliation.

This workshop seeks to provide answers to three central questions. First, who paid for war? Second, how did belligerents finance war – by borrowing, or printing money?  Finally, was there a juncture between resistance financing and the funding of conventional forces?

In the twentieth century, the global nature of conflict drastically altered existing power blocs and fostered ideologically motivated regimes. These changes were aided by improvements in mass communication technology, and a nascent corporatism that replaced the empire-building of the long nineteenth century. 

What remained unchanged, however, was the need for money in the waging of war. Throughout history, success in war depended on financial support. With it, armies can be paid and fed; research can be encouraged; weapons can be bought, and ordnance shipped. Without it, troops, arms, and supplies become scarcer and more difficult to acquire. Many of these considerations are just as applicable to clandestine forces. Nonetheless, there is an obvious constraint for the latter: their activity takes place in secret. This engenders important operational differences compared to state-sanctioned warfare and generates its own specific problems.

Traditionally, banking operations are predicated on an absence of confidence between parties to a transaction. Banks are institutional participants who act as trusted intermediaries, but what substitute intermediaries exist if the banking system has failed?  This was the quandary faced by members of the internal French resistance during the Second World War. Who could they trust to supply them regularly with funds? Where could they safely store their money, and who would change foreign currency into francs on their behalf?

Members of resistance groups could not acquire funds from the remnants of the French government while Marshal Pétain’s regime retained nominal control over the Non-Occupied Zone, nor could they obtain credit from the banking system. Instead, resistance forces came to depend on external donations, which were either airdropped or handed over by agents working on behalf of the British, American and Free French secret services. The traditional role of the banking sector was supplanted by military agents; the few bankers involved in resistance activities acted more as moneychangers than as issuers of credit.

Without funding, clandestine operatives were unable to purchase food from the black market, or to rent rooms. Wages were indeed paid to resistance members, but there were disparities between the different groups and no official pay-scale existed.  Instead, leaders of the various groups decided on the salary range of their subordinates, which varied during the Second World War.

As liberation approached, a fifty-franc note was produced on the orders of the Supreme Headquarters Allied Expeditionary Force (S.H.A.E.F.), in anticipation of its use by the Allied Military Government for Occupied Territories once the invasion of France was underway in 1944 (Figure 1).

Figure 1. Allied Military Government for Occupied Territories (A.M.G.O.T.) fifty-franc note (1944). Source: author’s collection

Clearly, there are many aspects of resistance financing, and the funding of conventional forces, that remain to be investigated. This workshop intends to facilitate ongoing discussions.

Due to the pandemic, this workshop will take place online on 6th November 2020. The keynote speech will be given via webinar and participants’ contributions will be uploaded before the event.

The workshop is financed by the ‘Initiatives & Conference Fund’ of the Economic History Society, a ‘Conference Organisation’ bursary from the Royal Historical Society, and Oriel College, Oxford.

More information about the Economic History Society’s grants opportunities can be found here

For more information: david.foulk@oriel.ox.ac.uk                          

@DavidFoulk9

Industrial, regional, and gender divides in British unemployment between the wars

By Meredith M. Paker (Nuffield College, Oxford)

This blog is part of a series of New Researcher blogs.

A view from Victoria Tower, depicting London on both sides of the Thames, 1930. Available at Wikimedia Commons.

‘Sometimes I feel that unemployment is too big a problem for people to deal with … It makes things no better, but worse, to know that your neighbours are as badly off as yourself, because it shows to what an extent the evil of unemployment has grown. And yet no one does anything about it’.

A skilled millwright, Memoirs of the Unemployed, 1934.

At the end of the First World War, an inflationary boom collapsed into a global recession, and the unemployment rate in Britain climbed to over 20 per cent. While the unemployment rate in other countries recovered during the 1920s, in Britain it remained near 10 per cent for the entire decade before the Great Depression. This persistently high unemployment was then intensified by the early 1930s slump, leading to an additional two million British workers becoming unemployed.

What caused this prolonged employment downturn in Britain during the 1920s and early 1930s? Using newly digitized data and econometrics, my project provides new evidence that a structural transformation of the economy away from export-oriented heavy manufacturing industries toward light manufacturing and service industries contributed to the employment downturn.

At a time when few countries collected any reliable national statistics at all, the Ministry of Labour published unemployment statistics for men and women in 100 industries in every month of the interwar period. These statistics derived from Britain’s unemployment benefit program established in 1911—the first such program in the world. While many researchers have used portions of this remarkable data by entering it into a computer manually, I was able to improve on this technique by developing a process based on an optical-character-recognition iPhone app. Digitizing all the printed tables in the Ministry of Labour Gazette from 1923 through 1936 enables econometric analysis of four times as many industries as in previous research and permits separate analyses for male and female workers (Figure 1).
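To give a flavour of the post-OCR cleaning such a digitization requires, here is a minimal sketch (a schematic illustration only: the file and column names are hypothetical, not the Gazette’s):

```python
import pandas as pd

raw = pd.read_csv("gazette_1923_01_ocr.csv")  # one month's OCR output (hypothetical)

# OCR commonly confuses characters such as 'O'/'0' and 'l'/'1' in numerals
for col in ["insured_workers", "unemployed"]:
    raw[col] = (raw[col].astype(str)
                        .str.replace("O", "0", regex=False)
                        .str.replace("l", "1", regex=False)
                        .str.replace(",", "", regex=False))
    raw[col] = pd.to_numeric(raw[col], errors="coerce")

# Internal consistency check: an implied unemployment rate outside [0, 1]
# signals a mis-read cell that needs manual re-entry.
raw["unemp_rate"] = raw["unemployed"] / raw["insured_workers"]
suspect = raw[(raw["unemp_rate"] < 0) | (raw["unemp_rate"] > 1)]
print(f"{len(suspect)} rows flagged for manual checking")
```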

Figure 1: Data digitization. Left-hand side is a sample printed table in the Ministry of Labour Gazette. Right-hand side is the cleaned digitized table in Excel.

This new data and analysis reveal four key findings about interwar unemployment. First, unemployment was different for men and women. The unemployment rate for men was generally higher than for women, averaging 16.1 per cent and 10.3 per cent, respectively. Unemployment increased faster for women at the onset of the Great Depression but also recovered more quickly (Figure 2). One reason for these distinct experiences is that men and women generally worked in different industries. Many unemployed men had previously worked in coal mining, building, iron and steel founding, and shipbuilding, while many unemployed women came from the cotton-textile industry, retail, hotel and club services, the woolen and worsted industry, and tailoring.

Figure 2: Male and female monthly unemployment rates. Source: Author’s digitization of Ministry of Labour Gazettes.

Second, regional differences in unemployment rates in the interwar period were not due only to the different industries located in each region. There were large regional differences in unemployment above and beyond the effects of the composition of industries in a region.

Third, structural change played an important role in interwar unemployment. A series of regression models indicate that, ceteris paribus, industries that expanded to meet production needs during World War I had higher unemployment rates in the 1920s. Additionally, industries that exported much of their production also faced more unemployment. An important component of the national unemployment problem was thus the adjustments that some industries had to make due to the global trade disturbances following World War I.

Finally, the Great Depression accelerated this structural change. In almost every sector, more adjustment occurred in the early 1930s than in the 1920s. Workers were drawn into growing industries from declining industries, at a particularly fast rate during the Great Depression.

Taken together, these results suggest that there were significant industrial, regional, and gender divides in interwar unemployment that are obscured by national unemployment trends. The employment downturn between the wars was thus intricately linked with the larger structural transformation of the British economy.


Meredith M. Paker

meredith.paker@nuffield.ox.ac.uk

Twitter: @mmpaker