Scarlet Fever and nineteenth-century mortality trends. A Reply to Romola Davenport

The full article from this blog post will be published in The Economic History Review, and it is now available on Early View at this link

Children affected by scarlet fever, c. 1910. Available at <https://qz.com/651644/a-19th-century-disease-is-on-a-dramatic-rise-in-the-uk-what-do-we-know-about-it-so-far/>

by Simon Szreter (University of Cambridge) and Graham Mooney (Johns Hopkins University)

In 1998 we published in the Economic History Review an analysis showing that all the available robust demographic evidence testified to a deterioration of mortality conditions in fast-growing industrial towns and cities in the second quarter of the nineteenth century. We also demonstrated that although there was some alleviation in the 1850s from the terrible death rates experienced in the 1830s and 1840s, sustained and continuous improvement in the life expectancies of the larger British urban populations did not begin until the 1870s. In other publications, we have each shown how an increasing range and density of politically initiated public health interventions in urban environments, starting in earnest in the late 1860s and 1870s and gaining depth and sophistication through to the 1900s, was most likely primarily responsible for the observed demographic and epidemiological patterns.

In a 2020 article in the Economic History Review, Romola Davenport argued that a single disease, scarlet fever, should be attributed primary significance as the cause of the major urban mortality trends of the period, not only in Britain but across Europe, Russia and North America.

In this response we critically examine the evidence adduced by Davenport for this hypothesis and find it entirely unconvincing. While scarlet fever was undoubtedly an important killer of young children, the chronology of its incidence in Britain lags behind the major turning points in urban mortality trends by a clear decade or more. Scarlet fever made no significant recorded impact until the 1840s and did not exert its most deadly effects until the 1850s. Its severe depredations then continued unabated through the 1860s and 1870s, before declining sharply in the period 1880-85.

We therefore maintain that our original findings and interpretation of the main causes of Britain’s urban mortality patterns during the course of the nineteenth century remain entirely valid. 

Historical Social Stratification and Mobility in Costa Rica, 1840-2006

by Daniel Diaz Vidal (University of Tampa)

The full article from this post was published in The Economic History Review and is now available on Early View at this link

Banana Workers – available at <https://travelcostarica.nu/history>

The social mobility rate represents the degree to which the socioeconomic status of descendants varies relative to that of their progenitors. If the rate is very low, the social pyramid remains unchanged over many generations. Conversely, if the rate of social mobility is very high, then family, cultural, ethnic, and historical backgrounds are of little use in explaining the current social status of an individual. In essence, history determines present outcomes when rates of social mobility are low. Interest in social mobility research has grown since the Great Recession because of its relationship with socioeconomic inequality and political upheaval.

This renewed interest in the study of social mobility has generated new approaches to the subject. Recent social mobility studies which use surnames show that underlying social mobility rates in all cases studied are both very low and very similar across countries and time periods.[1] This research uses an enhanced surname methodology and previously unused historical data to study social mobility in a new Spanish-speaking, Central American economy. Costa Rica is particularly interesting because it has exhibited a relatively egalitarian distribution of income since colonial times. This differs markedly from Chile, the Latin American economy that was previously the focus of a similar surname study of social mobility. To study historical social mobility in Costa Rica over the past century and a half, one cannot use traditional father-son linkages, since constructing such a dataset would be extremely difficult, if not impossible. Traditional methods require panel datasets, such as the United States National Longitudinal Survey of Youth (NLSY), or rich population registries like those found in Sweden and Iceland. This limits the historical and geographical contexts in which social mobility can be studied. Surnames facilitate research by permitting the clustering of people to identify groups of sons who collectively originated from a group of fathers, without needing to follow the branches of each specific family tree.

One of the methodologies used in this research measures the overrepresentation of surname groups within certain elite professions in the 2006 electoral census. The central idea is to see how frequent a surname is within the census and then use that frequency to predict how many holders we should find in a sample of elite professionals. If a certain surname group represents 1 per cent of the population but 5 per cent of the individuals in high-skilled professions, then it is overrepresented, and of higher status. To study how long the rich stay rich in Costa Rica, the author compiled a dataset of groups that were historically advantaged before the start of the elite-profession dataset, in order to avoid selection bias. The groups are: top coffee growers from 1911, coffee exporters in 1934, teachers and professors between 1923 and 1933, Jamaican banana growers from 1908, and ethnically mixed plantation owners. Figure 1 shows that these elite groups were still overrepresented at the end of the twentieth century and that they will require an average of six to seven generations to regress to the mean. These results are comparable to those produced by Clark, for a completely different set of socioeconomic and historical backgrounds.[2] Of particular interest is the comparison with Chile, since the two countries had different colonial experiences and varying degrees of inequality throughout their histories.
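To make the mechanics concrete, here is a minimal sketch of the overrepresentation calculation and of how long a group takes to regress to the mean. The persistence coefficient b and the log-linear decay rule are assumptions (a simplification of Clark's latent-status model), not the article's exact procedure:

```python
import math

def overrepresentation(pop_share: float, elite_share: float) -> float:
    """Relative representation: a group's share among elite
    professionals divided by its share of the population (1 = parity)."""
    return elite_share / pop_share

def generations_to_mean(rr: float, b: float, tol: float = 0.25) -> int:
    """Generations until relative representation falls within `tol`
    of parity, using the log-linear shortcut ln(RR') = b * ln(RR),
    a simplification of Clark's latent-status model."""
    gens = 0
    while abs(math.log(rr)) > math.log(1 + tol):
        rr = math.exp(b * math.log(rr))
        gens += 1
    return gens

# Hypothetical group: 1% of the electoral census, 5% of elite professionals.
rr0 = overrepresentation(0.01, 0.05)      # 5x overrepresented
print(generations_to_mean(rr0, b=0.75))   # 7 generations with b = 0.75
```

With a persistence coefficient around 0.75, a five-fold overrepresented group takes roughly seven generations to approach parity, in the same range as the six to seven generations reported above.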

Figure 1. Elite Group Representation in Costa Rica – Note: The vertical black line marks where the data end and the projections begin.
Sources: Tribunal Supremo de Elecciones, Padrón Nacional Electoral; Costa Rica, Dirección General de Estadística y Censos, Lista de cultivadores de banano, anuario 1907; Costa Rica, Instituto de Defensa del Café de Costa Rica, Revista del Instituto de Defensa del Café de Costa Rica. See article for further details.

This research shows that regression to the socioeconomic mean in Costa Rica occurred at a slower pace than that predicted by the previous literature. This implies that the equality-minded policymaker should be more concerned with economic growth, which should raise the average income of every stratum, at least under a Kaldor compensation criterion,[3] and with compressing the distance between social strata, than with social mobility itself. The study also shows that these historical groups take fewer generations to regress to the mean than in the Chilean case studied by Clark.[4] This is attributed to the fact that the Costa Rican groups were not that far apart to begin with.

To contact the author: DDIAZVIDAL@ut.edu


[1] G. Clark, The Son Also Rises: Surnames and the History of Social Mobility (Princeton: Princeton University Press, 2015); G. Clark and N. Cummins, ‘Surnames and Social Mobility in England, 1170-2012’, Human Nature, 25 (2014), pp. 517-537.

[2] Clark, The Son Also Rises.

[3] This posits that an activity moves the economy closer to Pareto optimality if the maximum amount the gainers are prepared to pay the losers to agree to the change is greater than the minimum amount the losers are prepared to accept. For example, if the gainers would pay up to £10 and the losers would accept £6, the change passes the test.

[4] Ibid.


Independent Women: Investing in British Railways, 1870-1922

by Graeme Acheson (University of Strathclyde Business School), Aine Gallagher, Gareth Campbell, and John D. Turner (Queen’s University Centre for Economic History)

The full article from this blog post has been published in The Economic History Review, and it is currently available on Early View here

Women have a long tradition of investing in financial instruments, and scholars have recently documented the rise of female shareholders in nineteenth-century Britain, the United States, Australia, and Europe. However, we know very little about how this progressed into the twentieth century, and whether women shareholders over a century ago behaved differently from their male counterparts. To address this, we turn to the shareholder constituencies of railways, which were the largest public companies a century ago.

Figure 1. Illustration of a female investor reading the ticker tape in the early twentieth century. Source: the authors

Railway companies in the UK popularised equity investment among the middle classes; they had been a major investment asset since the first railway boom of the mid-1830s. At the start of the 1900s, British railways made up about half of the market value of all domestic equity listed in the UK, and they constituted 49 of the 100 largest companies on the British stock market in 1911. The railways therefore make an interesting case through which to examine women investors. Detailed railway shareholder records, comparable to those for other sectors, have generally not been preserved. However, we have found Railway Shareholder Address Books for six of the largest railway companies between 1915 and 1922. We have supplemented these with several address books for these companies back to 1870, and have analysed the Shareholder Register for the Great Western Railway (GWR) from 1843, to place the later period in context.

An analysis of these shareholder address books reveals the growing importance of women shareholders from 1843, when they made up about 11 per cent of the GWR shareholder base, to 1920, when they constituted about 40 per cent of primary shareholders. By the early twentieth century, women represented 30 to 40 per cent of shareholders in each railway company in our sample, which is in line with estimates of the number of women investing in other companies at this time (Rutterford, Green, Maltby and Owens, 2011). This implies that women were playing an important role in financial markets in the early twentieth century.

Although women were becoming increasingly prevalent in shareholder constituencies, we know little about how they responded to changing social perceptions and the increasing availability of financial information when making investment decisions, or whether they were influenced by male relatives. To examine this, we focus on joint shareholdings, where people invested together rather than buying shares on their own. This practice was extremely common, and from our data we are able to analyse the differences between solo shareholders, lead joint shareholders (i.e., individuals who owned shares with others and held the voting rights), and secondary joint shareholders (i.e., individuals who owned shares with others but did not hold the voting rights).

We find that women were much more likely to be solo shareholders than men, with 70 to 80 per cent of women investing on their own, compared to just 30 to 40 per cent of men. When women participated in joint shareholdings, there was no discernible difference as to whether they were the lead shareholder or the secondary shareholder, whereas the majority of men took up a secondary position. When women participated as a secondary shareholder, the lead was usually not a male relative. These findings are strong evidence that women shareholders were acting independently by choosing to take on the sole risks and rewards of share ownership when making their investments. 

We then analyse how the interaction between gender and joint shareholdings affected investment decisions. We begin by examining differences in terms of local versus arms-length investment, using geospatial analysis to calculate the distance between each shareholder’s address and the nearest station of the railway they had invested in. We find that women were more likely than men, and solo investors more likely than joint shareholders, to invest locally. This suggests that men may have used joint investments as a way of reducing the risks of investing at a distance. In contrast, women preferred to maintain their independence even if this meant focusing more on local investments.
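The article’s geocoding pipeline is not described in this post, but the core distance step can be sketched as follows, with purely illustrative coordinates (a hypothetical London shareholder and two approximate station locations):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def distance_to_nearest_station(shareholder, stations):
    """Minimum distance from a shareholder's address to any
    station of the railway they invested in."""
    return min(haversine_km(shareholder[0], shareholder[1], s[0], s[1])
               for s in stations)

# Illustrative coordinates only.
holder = (51.5074, -0.1278)                          # central London
stations = [(51.5288, -0.1340), (53.4808, -2.2426)]  # Euston, Manchester area
print(round(distance_to_nearest_station(holder, stations), 1))  # ~2.4 km
```

Repeating this for every shareholder-railway pair gives the local-versus-arms-length measure discussed above.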

We then examine the extent to which women and men invested across different railways. In the modern era, it is common to adopt a value-weighted portfolio which is most heavily concentrated in larger companies. As three of our sample companies were amongst the six largest companies of their era and a further two were in the top twenty-five, we would, a priori, expect to see some overlap of shareholders investing in different railways if they adopted this approach to diversification. From our analysis, we find that male and joint shareholders were more likely than female and solo shareholders to hold multiple railway stocks. This could imply that men were using joint shareholdings as a means of increasing diversification. In contrast, women may have been prioritising independence, even if it meant being less diversified.

We also consider whether there were differences in terms of how long each type of shareholder held onto their shares because modern studies suggest that women are much less likely than men to trade their shares. We find that only a minority of shareholders maintained a long-run buy and hold strategy, with little suggestion that this differed on the basis of gender or joint versus solo shareholders. This implies that our findings are not being driven by a cohort effect, and that the increasing numbers of women shareholders consciously chose to invest independently. 

To contact the authors:

Graeme Acheson, graeme.acheson@strath.ac.uk

Aine Gallagher, Aine.Gallagher@qub.ac.uk

Gareth Campbell, gareth.campbell@qub.ac.uk

John D. Turner, j.turner@qub.ac.uk

Taxation and the stagnation of cotton exports in Brazil, 1800-1860

by Thales Zamberlan Pereira (Getúlio Vargas Foundation, São Paulo School of Economics)

Port of Pernambuco. Emil Bauch, 1852. Brasiliana Iconográfica.

Brazil supplied 40 per cent of Liverpool’s cotton imports during the last decade of the eighteenth century (Krichtal 2013). By the first half of the nineteenth century, however, cotton exports had stagnated, and Brazil became the only major international cotton producer whose exports to European countries declined. The reason for this decline, despite increasing international demand during the nineteenth century, is not generally agreed upon. Scholars have attributed it to high transport costs, competition from sugar and coffee plantations for slaves, and Dutch disease from the increase in coffee exports, among other factors (Leff 1972; Stein 1979; Canabrava 2011). There is disagreement in part because previous research relies largely on data from after 1850, by which time cotton plantations in Brazil had already declined.

In a new paper, I argue that cotton profitability was restricted by the fiscal policy implemented by the Portuguese (and, later, Brazilian) government after 1808. To make this argument, I first establish new facts about the timing of the decline of Brazilian cotton. Specifically, using new data on cotton productivity for the 1800-1860 period, I show that Brazil’s stagnation began in the first decades of the nineteenth century.

This timing rules out a number of proposed explanations. The decline took place before the United States managed to increase its productivity in cotton production and become the world export leader (Olmstead and Rhode 2008). Cotton regions in Brazil had no labour supply problem, nor did they suffer from a Dutch disease phenomenon during the early nineteenth century (Pereira 2018). The new evidence also suggests that external factors, such as declining international prices or maritime transport costs, were not responsible for the stagnation of cotton exports. As with any other commodity at the time, falls in international prices had to be offset by increases in productivity. In fact, Figure 1 shows that Brazilian cotton prices were competitive in Liverpool. Of the staples presented in the figure, the standard cotton from the provinces of Pernambuco and Maranhão was of higher quality than that from New Orleans and Georgia (and hence achieved higher prices), while ‘Maranhão saw-ginned’, which achieved similar prices, used the same seeds as US plantations.

Figure 1: Cotton prices in Liverpool 1825 – 1850
Source: Liverpool Mercury and The Times newspapers

So, what caused the stagnation of cotton exports in Brazil? I argue that the fiscal policy implemented by the Portuguese government after 1808 restricted cotton profitability. High export taxes, whose funds were transferred to Rio de Janeiro, explain the ‘profitability paradox’ that British consuls in Brazil reported at the time. They remarked that even in periods with high prices and foreign demand, Brazilian planters had limited profitability. Favourable market conditions after the Napoleonic wars allowed production in Brazil to continue growing at least until the early 1830s.

Figure 2 shows that when international prices started to decline after 1835, cotton was no longer a profitable crop in many Brazilian regions. This was especially pronounced for regions whose plantations were far from the coast and so paid higher transport costs on top of the export tax. To show that the tax burden reduced profitability, I calculate an ‘optimal tax rate’, which maximizes government revenue, and the ‘effective tax rate’, which is what exporters actually paid. Figure 2 illustrates that, while the statutory tax rate was low, the effective tax rate for cotton producers was significantly greater than the optimal tax rate after 1835.

Figure 2 – Rate of cotton export tariffs, 1809-1850.
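As a stylised illustration of these two measures (not the paper’s actual calculation), the sketch below pairs a textbook revenue-maximising export tax, assuming export volumes respond to the net-of-tax price with constant elasticity, with one mechanism consistent with the ‘low statutory, high effective’ pattern: duties assessed on a fixed official valuation, so the effective rate rises as market prices fall.

```python
# A stylised sketch, not the paper's calculation. If export volume
# responds to the net-of-tax price with constant elasticity eps,
# revenue R(t) = t * Q0 * (1 - t)**eps peaks at t* = 1 / (1 + eps).
def optimal_tax_rate(eps: float) -> float:
    return 1 / (1 + eps)

# If duties were assessed on a fixed official valuation rather than
# the market price, the burden rose as market prices fell.
def effective_tax_rate(statutory_rate: float,
                       official_price: float,
                       market_price: float) -> float:
    return statutory_rate * official_price / market_price

print(optimal_tax_rate(4.0))                # 0.2 -> 20% when eps = 4
print(effective_tax_rate(0.10, 10.0, 5.0))  # 0.2 -> 10% by law, 20% paid
```

Under these illustrative assumptions, a halving of market prices doubles the effective burden of a duty fixed on official values, pushing producers past the revenue-maximising rate.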

Facing lower prices, cotton producers in Brazil could have shifted production to the varieties of cotton produced in the United States, which had higher productivity and were in increasing demand in British markets. As presented in Figure 1, some regions in Brazil tried to follow this route (with saw-ginned cotton in Maranhão), but this type of production was not profitable with an export tax that reached 20 per cent. Brazil, therefore, was stuck in the market for long-staple cotton, for which demand remained relatively stable during the nineteenth century. Regions that could not produce long-staple cotton practically abandoned production.

Not only do the results provide insight into the cotton decline, but the paper also contributes to a better understanding of the roots of regional inequality in Brazil and of the political economy of taxation. Cotton production before 1850 was concentrated in the northeast, a region that continues to lag in economic conditions to this day. As I argue in the paper, the export taxes implemented after 1808 largely targeted commodities from the northeast. Commodities from the southeast, such as coffee, paid lower tax rates. Parliamentary debates at the time show that cotton producers in the Northeast did demand tax reform. Their demands, however, were not met quickly enough to prevent Brazilian cotton plantations from being priced out of the international market.

To contact the author: thales.pereira@fgv.br

References:

Canabrava, Alice P. 2011. O Desenvolvimento Da Cultura Do Algodão Na Província de São Paulo, 1861-1875. São Paulo: EDUSP.

Krichtal, Alexey. 2013. “Liverpool and the Raw Cotton Trade: A Study of the Port and Its Merchant Community, 1770-1815.” Victoria University of Wellington.

Leff, Nathaniel H. 1972. “Economic Development and Regional Inequality: Origins of the Brazilian Case.” The Quarterly Journal of Economics 86 (2): 243–62. https://doi.org/10.2307/1880562.

Olmstead, Alan L., and Paul W. Rhode. 2008. “Biological Innovation and Productivity Growth in the Antebellum Cotton Economy.” The Journal of Economic History 68 (4): 1123–1171. https://doi.org/10.1017/S0022050708000831.

Pereira, Thales A. Zamberlan. 2018. “Poor Man’s Crop? Slavery in Cotton Regions in Brazil (1800-1850).” Estudos Econômicos (São Paulo) 48 (4).

Stein, Stanley J. 1979. Origens e evolução da indústria têxtil no Brasil: 1850-1950. Rio de Janeiro: Editora Campus.

The labour market causes and consequences of general purpose technological progress: evidence from steam engines

by Leonardo Ridolfi (University of Siena), Mara Squicciarini (Bocconi University), and Jacob Weisdorf (Sapienza University of Rome)

Steam locomotive running gear. Available at Wikimedia Commons.

Should workers fear technical innovations? Economists have not provided a clear answer to this perennial question. Some believe machines enable ‘one man to do the work of many’: mechanisation will generate cheaper goods, more consumer spending, increased labour demand and thus more jobs. Others worry that automation will be labour-cheapening, making workers – especially unskilled ones – redundant, and so result in increased unemployment and growing income inequality.

Our research seeks answers in the historical record. We focus on the first Industrial Revolution, when technical innovations became a key component of the production process.

The common understanding is that mechanisation during the early phases of industrialisation allowed firms to replace skilled with unskilled male workers (new technology was deskilling) and also to replace male workers with less expensive female and child labourers. Much of this understanding is inspired by the Luddite movement – bands of nineteenth-century workers who destroyed early industrial machinery that they believed was threatening their jobs.

To test these hypotheses, we investigate one of the major technological advancements in human history: the rise and spread of steam engines.

Nineteenth-century France provides an exemplary setting to explore these effects. French historical statistics are extraordinarily detailed, and the first two national industry-level censuses – one from the 1840s, when steam power was just beginning to spread, and one from the 1860s, when steam engines were more common – help us to observe the labour market conditions that led to the adoption of steam engines, as well as the effects of the new technology on the demand for male, female and child labour, and on their wages.

Consistent with the argument that steam technology emerged for labour-cheapening purposes, our analysis shows that the adoption of steam technology was significantly higher in districts (arrondissements) where (see the sketch after this list):

  1. industrial labour productivity was low, so that capital-deepening could serve to improve output per worker;
  2. the number of workers was high, so the potential for cutting labour costs by replacing them with machines was large;
  3. the share of male workers was high, so the potential for cutting labour costs by shifting towards women and children was large; and
  4. steam engines had already been installed in other industries, thus lowering the costs of adopting the new technology.
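A minimal sketch of a district-level adoption regression of this general form is below; the input file and variable names are hypothetical placeholders, and the authors’ actual specification and estimator may differ:

```python
# A minimal sketch of a district-level steam-adoption regression.
# The CSV file and variable names are hypothetical placeholders,
# not the authors' dataset or specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("arrondissements_1840s.csv")  # hypothetical input

# Probability of steam adoption as a function of the district
# characteristics listed above.
model = smf.logit(
    "steam_adopted ~ labour_productivity + n_workers"
    " + male_share + steam_in_other_industries",
    data=df,
).fit()
print(model.summary())
```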

We also find, however, that steam technology, once adopted, was neither labour-saving nor skill-saving. Steam-powered industries did use higher shares of (cheaper) female and child workers than non-steam-powered industries. At the same time, though, since steam-operating industries employed considerably more workers in total, they ended up also using more male workers – not just more women and children.

We also find that steam-powered industries paid significantly higher wages, both to men and women. In contrast with the traditional narrative of early industrial technologies being deskilling, this result provides novel empirical evidence that steam-use was instead skill-demanding.

Although workers seemed to have gained from the introduction of steam technology, both in terms of employment and payment opportunities, our findings show that labour’s share was lower in steam-run industries. This motivates Engels-Marx-Piketty-inspired concerns that advancing technology leaves workers with a shrinking share of output.

Our findings thus highlight the multi-sided effects of adopting general-purpose technological progress. On the positive side, the steam engine prompted higher wages and a growing demand for both male and female workers. On the negative side, steam-powered industries relied more heavily on child labour and also placed a larger share of output in the hands of capitalists.

Sex ratios and missing girls in nineteenth century Europe

By Francisco J. Beltrán Tapia (Norwegian University of Science and Technology)

This blog is part of our EHS Annual Conference 2020 Blog Series.

The flying girl. Available at Wikimedia Commons.

Gender discrimination – in the form of sex-selective abortion, female infanticide and the mortal neglect of young girls – constitutes a pervasive feature of many contemporary developing countries, especially in South and East Asia. Son preference stems from economic and cultural factors that have long influenced the perceived relative value of women in these regions and resulted in millions of ‘missing girls’.

But were there ‘missing girls’ in historical Europe? Although the conventional narrative argues that there is little evidence for this kind of behaviour (here), my research shows that this issue was much more important than previously thought, especially (but not exclusively) in Southern and Eastern Europe.

It should be noted first that historical sex ratios cannot be compared directly to modern ones. The biological survival advantage of girls was more visible in the high-mortality environments that characterised pre-industrial Europe, where boys suffered higher mortality rates both in utero and during infancy and early childhood. Historical infant and child sex ratios were therefore relatively low, even in the presence of gender-discriminatory practices.

This is illustrated in Figure 1 below, which plots the relationship between child sex ratios (the number of boys per 100 girls) and infant mortality rates using information from European countries between 1750 and 2001. In particular, in societies where infant mortality rates were around 250 deaths (per 1,000 live births), a gender-neutral child sex ratio should have been slightly below parity (around 99.5 boys per 100 girls).
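The arithmetic behind such a benchmark can be sketched as follows. The mortality figures are hypothetical, chosen only to illustrate how a high-mortality regime pushes a gender-neutral child sex ratio below parity:

```python
# Illustrative arithmetic only: the mortality figures below are
# hypothetical, chosen to reproduce the benchmark in the text.
SEX_RATIO_AT_BIRTH = 105.0  # boys born per 100 girls

def child_sex_ratio(male_mortality: float, female_mortality: float) -> float:
    """Boys per 100 girls among survivors, given cumulative
    early-childhood mortality by sex (as fractions of births)."""
    return SEX_RATIO_AT_BIRTH * (1 - male_mortality) / (1 - female_mortality)

# With, say, 27% cumulative male vs 23% female early mortality
# (plausible in a regime with infant mortality near 250 per 1,000),
# the gender-neutral benchmark sits just below parity:
print(round(child_sex_ratio(0.27, 0.23), 1))  # 99.5
```

Observed ratios well above such a benchmark are what signal excess female mortality in the analysis that follows.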

Figure 1: Infant mortality rates and child sex ratios in Europe, 1750-2001

Compared with this benchmark, infant and child sex ratios were abnormally high in some European regions (see Map 1 below), suggesting that some sort of gender discrimination was unduly increasing female mortality rates at those ages.

Interestingly, the observed differences in sex ratios are also visible throughout childhood. In fact, the evolution of sex ratios by age shows stark disparities across countries. Figure 2 shows how the number of boys per 100 girls changes as children grew older for a sample of countries, both in levels and in the observed trends.

In Bulgaria, Greece and France, for example, sex ratios increased with age, providing evidence that gender discrimination continued to increase female mortality rates as girls grew older. Importantly, the unbalanced sex ratios observed in some regions are not due to random noise, female under-registration or sex-specific migratory flows.

Likewise, although geography, climate and population density contributed to shaping infant and child sex ratios due to their impact on the disease environment, these factors cannot explain away the patterns of gender discrimination reported here.

Map 1: Child sex ratios in Europe, c.1880

Figure 2: Sex ratios by age in a sample of countries, c.1880

This evidence indicates that discriminatory practices with lethal consequences for girls constituted a veiled feature of our European past. But the actual nature of discrimination remains unclear and surely varies by region.

Excess female mortality was thus not necessarily the result of ill treatment of young girls; it could have arisen simply from an unequal allocation of resources within the household, a disadvantage that probably accumulated as girls grew older.

In contexts where infant and child mortality rates were high, a slight discrimination in the way young girls were fed or treated when ill, as well as in the amount of work they were given, was likely to result in more girls dying from the combined effects of undernutrition and illness.

Although female infanticide or other extreme versions of mistreatment of young girls may not have been a systematic feature of historical Europe, this line of research would point to more passive, but pervasive, forms of gender discrimination that also resulted in a significant fraction of missing girls.

A century of wind power: why did it take so long to develop to utility scale?

by Mercedes Galíndez, University of Cambridge

This blog is based on research funded by a bursary from the Economic History Society. More information here

Marcellus Jacobs on a 2.5kW machine in the 1940s. Available at <http://www.jacobswind.net/history>

Seventeen years passed between Edison patenting his revolutionary incandescent light bulb in 1880 and Poul la Cour’s first test of a wind turbine for generating electricity. Yet it would be another hundred years before wind power became an established industry in the 2000s. How can we explain this delay in harvesting the cheapest source of electricity generation?

In the early twentieth century wind power emerged to fill the gaps of nascent electricity grids. This technology was first adopted in rural areas. The incentive was purely economic: the need for decentralised access to electricity. In this early stage there were no concerns about the environmental implications of wind power.

The Jacobs Wind Electric Company delivered 30,000 three-blade wind turbines in the US between 1927 and 1957.[1] The basic mechanics of these units did not differ much from their modern counterparts. Once the standard electrical grid reached rural areas, however, the business case for wind power weakened. It soon became more economical to buy electricity from centralised utilities, which benefited from significant economies of scale.

It was not until the late 1970s that wind power became a potential substitute for electricity generated by fossil fuels or hydropower. The academic literature agrees on two main triggers for this change: the oil crises of the 1970s and the politicisation of climate change. When the price of oil quadrupled in 1973, rising to nearly US $12 per barrel, industrialised countries’ dependency on foreign oil producers was exposed. The reaction was to find new domestic sources of energy. Considerable effort was devoted to nuclear power, but technologies like wind power were also revived.

In the late 1980s climate change became more politicised, renewing interest in wind energy as a technology that could mitigate environmental damage. California’s governor, Jerry Brown, had anticipated these ideals: in 1978, in a move ahead of its time, he provided extra tax incentives to renewable energy producers in his state.[2] This soon created a ‘California Wind Rush’, which saw both local and European turbine manufacturers burst onto the market, with US $1 billion invested in the region of Altamont Pass between 1981 and 1986.[3]

The California Wind Rush ended suddenly when central government support was withdrawn. However, the European Union (EU) took up the challenge of sustaining the industry. In 2001, the EU introduced Directive 2001/77/EC for the promotion of renewable energy sources, which required Member States to set renewable energy targets.[4] Many further directives followed, triggering renewable energy programmes throughout the EU. Following the first directive in 2001, the installed capacity of wind power in the EU increased thirteen-fold, from 13GW to 169GW in 2017.

Whilst there is no doubt that the EU regulatory framework played a key role in the development of wind power, other factors were also at play. Nicolas Rochon, a green investment manager, published a memoir in 2020 in which he argued that clean energy development was also enabled by a change in the investment community. As interest rates decreased during the first two decades of the twenty-first century, investment managers revised their expectations of future returns downwards, which fostered greater attention to clean energy assets offering lower profitability. Growing competition in the sector reduced the price of electricity obtained from renewable energy.[5]

My research aims to understand the macroeconomic conditions that enabled wind power to develop to national scale: in particular, how wind power developers accessed capital, and how bankers and investors took a leap of faith to invest in the technology. It will draw on oral history interviews with subjects like Nicolas Rochon, who made financial decisions on wind power projects.

To contact the author:

Mercedes Galíndez (mg570@cam.ac.uk)


[1] Righter, Robert, Wind Energy in America: A History, Norman, University of Oklahoma Press, 1996, page 93

[2] Madrigal, Alexis. Powering the Dream: The History and Promise of Green Technology. Cambridge, MA: Da Capo Press, 2011, page 239

[3] Jones, Geoffrey. Profits and Sustainability. A History of Green Entrepreneurship. Oxford: Oxford University Press, 2017, page 330

[4] EU Directive 2001/77/EC

[5] Rochon, Nicolas, Ma transition énergétique 2005 – 2020, Les Papiers Verts, Paris, 2020

How Indian cottons steered British industrialisation

By Alka Raman (LSE)

This blog is part of a series of New Researcher blogs.

“Methods of Conveying Cotton in India to the Ports of Shipment,” from the Illustrated London News, 1861. Available at Wikimedia Commons.

Technological advancements within the British cotton industry have widely been acknowledged as the beginning of industrialisation in eighteenth- and nineteenth-century Britain. My research reveals that these advances were driven by a desire to match the quality of handmade cotton textiles from India.

I highlight how the introduction of Indian printed cottons into British markets created a frenzy of demand for these exotic goods. This led to immediate imitations by British textile manufacturers, keen to gain footholds in the domestic and world markets where Indian cottons were much desired.

The process of imitation soon revealed that British spinners could not spin the fine cotton yarn required to make by hand the fine cloth needed for fine printing. And British printers could not print cloth in the multitude of colourfast colours that Indian artisans had mastered over centuries.

These two key limitations in British textile manufacturing spurred demand-induced technological innovations to match the quality of Indian handmade printed cottons.

To test this, I chart the quality of English cotton textiles from 1740 to 1820 and compare them with Indian cottons of the same period. Threads per inch is used as the measure of quality, and digital microscopy is deployed to establish yarn composition: whether the textiles are all-cotton or mixed linen-cotton.

My findings show that the earliest British ‘cotton’ textiles were mixed linen-cottons and not all-cottons. Technological evolution in the British cotton industry was a pursuit first of coarse yet all-cotton cloth, and then of fine all-cotton cloth such as muslin.

The evidence shows that British cotton cloth quality improved by 60% between 1747 and 1782 during the decades of the famous inventions of James Hargreaves’ spinning jenny, Richard Arkwright’s waterframe and Samuel Crompton’s mule. It further improved by 24% between 1782 and 1816. Overall, cloth quality improved by a staggering 99% between 1747 and 1816.
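Note that the overall figure compounds the two sub-period improvements rather than adding them: (1 + 0.60) × (1 + 0.24) − 1 ≈ 0.98, a near-doubling of quality consistent with the reported 99 per cent once rounding in the sub-period figures is taken into account.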

My research challenges our current understanding of industrialisation as a British and West European phenomenon, commonly explained using rationales such as high wages, availability of local energy sources or access to New World resources. Instead, it reveals that learning from material goods and knowledge brought into Britain and Europe from the East directly and substantially affected the foundations of the modern world as we know it.

The results also pose a more fundamental question: how does technological change take place? Based on my findings, learning from competitor products – especially imitation of novel goods using indigenous processes – may be identified as one crucial pathway for the creation of new ideas that shape technological change.

Baumol, Engel, and Beyond: Accounting for a century of structural transformation in Japan, 1885-1985

by Kyoji Fukao (Hitotsubashi University) and Saumik Paul (Newcastle University and IZA)

The full article from this blog post was published in The Economic History Review, and it is now available on Early View at this link

Bank of Japan, silver convertible yen. Available on Wiki Commons

Over the past two centuries, many industrialized countries have experienced dramatic changes in the sectoral composition of output and employment. The pattern of structural transformation, depicted for most of the developed countries, entails a steady fall in the primary sector, a steady increase in the tertiary sector, and a hump shape in the secondary sector. In the literature, the process of structural transformation is explained through two broad channels: the income effect, driven by the generalization of Engel’s law, and the substitution effect, following the differences in the rate of productivity across sectors, also known as “Baumol’s cost disease effect”.

At the same time, an input-output (I-O) model provides a comprehensive way to study the process of structural transformation. Input-output analysis accounts for intermediate input production by a sector, as many sectors predominantly produce intermediate inputs, and their outputs rarely enter directly into consumer preferences. Moreover, input-output analysis relies on observed data and a national income identity to handle imports and exports. This approach has considerable advantages in the context of Japan’s structural transformation, first from agriculture to manufactured final consumption goods, and then to services, alongside transformations in Japanese exports and imports, which changed radically over time.

We examine the drivers of the long-run structural transformation in Japan over a period of 100 years, from 1885 to 1985. During this period, the value-added share of the primary sector dropped from 60 per cent to less than 1 per cent, whereas that of the tertiary sector rose from 27 to nearly 60 per cent (Figure 1). We apply the Chenery, Shishido, and Watanabe (CSW) framework to examine changes in the composition of sectoral output shares. Chenery, Shishido, and Watanabe used an inter-industry model to explain deviations from proportional growth in output in each sector, decomposing the deviation in sectoral output into two factors: the demand-side effect, a combination of the Engel and Baumol effects discussed above, and the supply-side effect, a change in the technique of production. The current input-output framework is, however, unable to uniquely separate the demand-side effect into forces labelled under the Engel and Baumol effects.
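To fix ideas, here is a generic input-output structural decomposition in the spirit of the CSW framework: gross output is x = (I − A)⁻¹f, and the change in x splits exactly into a final-demand (demand-side) term and a change-of-technique (supply-side) term. The two-sector numbers are purely illustrative, and the paper’s exact formula, which works with deviations from proportional growth, differs.

```python
# A generic structural decomposition in the spirit of the CSW model;
# the paper's exact formula (deviations from proportional growth)
# differs, and these 2-sector numbers are purely illustrative.
import numpy as np

def leontief_inverse(A):
    """B = (I - A)^-1 maps final demand f to gross output x = B f."""
    return np.linalg.inv(np.eye(A.shape[0]) - A)

A0 = np.array([[0.10, 0.20],   # input coefficients, period 0
               [0.30, 0.05]])
A1 = np.array([[0.08, 0.25],   # period 1: changed technique
               [0.28, 0.10]])
f0 = np.array([100.0, 80.0])   # final demand, period 0
f1 = np.array([110.0, 95.0])   # final demand, period 1

B0, B1 = leontief_inverse(A0), leontief_inverse(A1)
dx = B1 @ f1 - B0 @ f0
demand_side = B0 @ (f1 - f0)   # final-demand (Engel + Baumol) term
supply_side = (B1 - B0) @ f1   # change-of-technique term
assert np.allclose(dx, demand_side + supply_side)  # exact split
print(dx, demand_side, supply_side)
```

The identity B1·f1 − B0·f0 = B0·(f1 − f0) + (B1 − B0)·f1 guarantees the two terms sum exactly to the total change, which is what allows the demand-side and supply-side contributions to be reported separately.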

Figure 1. Structural transformation in Japan, 1874-2008. Source: Fukao and Paul (2017). 
Note: Sectoral shares in GDP are calculated using real GDP in constant 1934-36 prices for 1874-1940 and constant 2000 prices for 1955-2008. In the current study, the pre-WWII era is from 1885 to 1935, and the post-WWII era is from 1955 to 1985.

To conduct the decomposition analysis, we use seven I-O tables (one every 10 years) for the prewar era, 1885-1935, and six I-O tables (one every 5 years) for the postwar era, 1955-1985. Each table is aggregated into seven sectors: agriculture, forestry, and fishery; commerce and services; construction; food; mining and manufacturing (excluding food and textiles); textiles; and transport, communication, and utilities.

The results show that the annual growth rate of GDP more than doubled in the post-WWII era compared to the pre-WWII era. Real output growth was highest in the commerce and services sector throughout the period under study, but there was also rapid growth of output in mining and manufacturing, especially in the second half of the twentieth century. Sectoral output growth in mining and manufacturing (textiles, food, and other manufacturing), commerce and services, and transport, communication, and utilities outpaced GDP growth in most periods. Detailed decomposition results show that in most sectors (agriculture, commerce and services, food, textiles, and transport, communication, and utilities), changes in private consumption were the dominant force behind the demand-side explanations. The demand-side effect was strongest in the commerce and services sector.

Overall, demand-side factors (a combination of the Baumol and Engel effects) were the main explanatory factors in the pre-WWII period, whereas supply-side factors were the key driver of structural transformation in the post-WWII period.

To contact the authors:

Kyoji Fukao, k.fukao@r.hit-u.ac.jp

Saumik Paul, paulsaumik@gmail.com, @saumik78267353

Notes

Baumol, William J., ‘Macroeconomics of unbalanced growth: the anatomy of urban crisis’, American Economic Review 57 (1967), 415–426.

Chenery, Hollis B., Shuntaro Shishido and Tsunehiko Watanabe, ‘The pattern of Japanese growth, 1914−1954’, Econometrica 30 (1962), 1, 98−139.

Fukao, Kyoji and Saumik Paul “The Role of Structural Transformation in Regional Convergence in Japan: 1874-2008.” Institute of Economic Research Discussion Paper No. 665. Tokyo: Institute of Economic Research (2017).

Settler capitalism: company colonisation and the rage for speculation (NR Online Session 5)

By Matthew Birchall (Cambridge University)

This research is due to be presented in the fifth New Researcher Online Session: ‘Government & Colonization’.

 

Scan from “Historical Atlas” by William R. Shepherd, New York, Henry Holt and Company, 1923. Available at Wikimedia Commons.

My research explores the little-known story of how company colonisation propelled the settler revolution. Characterised by mass emigration to Britain’s settler colonies during the long nineteenth century, the settler revolution transformed Chicago and Melbourne, London and New York, drawing all into a vast cultural and political network that straddled the globe. But while the settler revolution is now well integrated into recent histories of the British Empire, it remains curiously disconnected from the history of global capitalism.

Prising open what I call the inner lives of colonial corporations, I tell the story of how and why companies remade the settler world. My research takes a fresh look at the colonial history of Australia and New Zealand in an attempt to map a new history of chartered colonial enterprise, one that is as sensitive to rhetoric as it is to ledgers documenting profit and loss. We tend to understand companies in terms of their institutional make-up, that is to say their legal and economic structure, but we sometimes forget that they are also cultural constructions with very human histories.

The story that I narrate takes us from the boardrooms of the City of London back out to the pastures of the colonial frontier: it is a snapshot of settler capitalism from the inside out. From the alleys and byways immortalised in Walter Bagehot’s Lombard Street (1873) to the sheep-runs of New South Wales and the South Canterbury plains, company colonisation has a global history – a history that links the Atlantic and the antipodes, Māori and metropolitan capital, country and the City of London. My study marks a first attempt at bringing this history to light.

In digging deep into the social and cultural history of company colonisation, I focus in particular on the legitimating narratives that underwrote visions of colonial reform. How did these company men make sense of their own ventures? What traditions of thought did they draw on to justify the appropriation of indigenous lands? How did the customs and norms of the City shape the boundaries of what was deemed possible, let alone appropriate in the extra-European world? I aim to show that company colonisation was as much an act of the imagination as it was the product of prudent capital investment.

My research engages with large questions of contemporary relevance: the role of corporations in the making of the modern world; the relationship between empire and global capitalism; and the salience of social and cultural factors in the development of corporate enterprise. I hope to enrich these debates by injecting the discussion with greater historical context.