How Indian cottons steered British industrialisation

By Alka Raman (LSE)

This blog is part of a series of New Researcher blogs.

“Methods of Conveying Cotton in India to the Ports of Shipment,” from the Illustrated London News, 1861. Available at Wikimedia Commons.

Technological advancements within the British cotton industry have been widely acknowledged as the beginning of industrialisation in eighteenth- and nineteenth-century Britain. My research reveals that these advances were driven by a desire to match the quality of handmade cotton textiles from India.

I highlight how the introduction of Indian printed cottons into British markets created a frenzy of demand for these exotic goods. This led to immediate imitations by British textile manufacturers, keen to gain footholds in the domestic and world markets where Indian cottons were much desired.

The process of imitation soon revealed that British spinners could not spin the fine cotton yarn required to hand-make cloth fine enough for fine printing. Nor could British printers print cloth in the multitude of colourfast colours that Indian artisans had mastered over centuries.

These two key limitations in British textile manufacturing spurred demand-induced technological innovations to match the quality of Indian handmade printed cottons.

To test this, I chart the quality of English cotton textiles from 1740 to 1820 and compare them with Indian cottons of the same period. Thread count per inch is used as the measure of quality, and digital microscopy is deployed to establish yarn composition, determining whether the textiles are all-cotton or mixed linen-cotton.

My findings show that the earliest British ‘cotton’ textiles were mixed linen-cottons, not all-cottons. Technological evolution in the British cotton industry was a pursuit first of coarse but all-cotton cloth, and then of fine all-cotton cloth such as muslin.

The evidence shows that British cotton cloth quality improved by 60% between 1747 and 1782 during the decades of the famous inventions of James Hargreaves’ spinning jenny, Richard Arkwright’s waterframe and Samuel Crompton’s mule. It further improved by 24% between 1782 and 1816. Overall, cloth quality improved by a staggering 99% between 1747 and 1816.
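These figures compound rather than add. Taking thread count per inch as the quality index Q, a quick check of the arithmetic:

```latex
\frac{Q_{1816}}{Q_{1747}}
= \frac{Q_{1782}}{Q_{1747}} \times \frac{Q_{1816}}{Q_{1782}}
= 1.60 \times 1.24 \approx 1.99
```

which is an overall improvement of roughly 99 per cent, consistent with the two sub-period figures.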

My research challenges our current understanding of industrialisation as a British and West European phenomenon, commonly explained using rationales such as high wages, availability of local energy sources or access to New World resources. Instead, it reveals that learning from material goods and knowledge brought into Britain and Europe from the East directly and substantially affected the foundations of the modern world as we know it.

The results also pose a more fundamental question: how does technological change take place? Based on my findings, learning from competitor products – especially imitation of novel goods using indigenous processes – may be identified as one crucial pathway for the creation of new ideas that shape technological change.

Workshop – Bullets and Banknotes: War Financing through the 20th Century

By David Foulk (University of Oxford)

This workshop brings together military historians, international relations researchers, and economic historians to explore the financing of conventional and irregular warfare.

Martial funds originated from a variety of legitimate and illegitimate sources. The former include direct provision by government and central banking activity; private donations also need to be considered, as they too have proven a viable means of financing paramilitary activity. Illegitimate sources in the context of war refer to the ability of an occupying force to extract economic and monetary resources and include, for example, ‘patriotic hold-ups’ of financial institutions, and spoliation.

This workshop seeks to provide answers to three central questions. First, who paid for war? Second, how did belligerents finance war – by borrowing, or printing money?  Finally, was there a juncture between resistance financing and the funding of conventional forces?

In the twentieth century, the global nature of conflict drastically altered existing power blocs and fostered ideologically motivated regimes. These changes were aided by improvements in mass communication technology, and a nascent corporatism that replaced the empire-building of the long nineteenth century. 

What remained unchanged, however, was the need for money in the waging of war. Throughout history, success in war has depended on financial support. With it, armies can be paid and fed; research can be encouraged; weapons can be bought, and ordnance shipped. Without it, troops, arms, and supplies become scarcer and more difficult to acquire. Many of these considerations are just as applicable to clandestine forces. Nonetheless, there is an obvious constraint for the latter: their activity takes place in secret. This engenders important operational differences compared to state-sanctioned warfare and generates its own specific problems.

Traditionally, banking operations are predicated on an absence of confidence between parties to a transaction. Banks are institutional participants who act as trusted intermediaries, but what substitute intermediaries exist if the banking system has failed?  This was the quandary faced by members of the internal French resistance during the Second World War. Who could they trust to supply them regularly with funds? Where could they safely store their money, and who would change foreign currency into francs on their behalf?

Members of resistance groups could not acquire funds from the remnants of the French government while Marshal Pétain’s regime retained nominal control over the Non-Occupied Zone, nor could they obtain credit from the banking system. Instead, resistance forces came to depend on external donations, which were either airdropped or handed over by agents working on behalf of the British, American and Free French secret services. The traditional role of the banking sector was supplanted by military agents; the few bankers involved in resistance activities acted as moneychangers rather than as issuers of credit.

Without funding, clandestine operatives were unable to purchase food on the black market or to rent rooms. Wages were indeed paid to resistance members, but there were disparities between the different groups and no official pay-scale existed. Instead, the leaders of the various groups decided on the salary ranges of their subordinates, and these varied over the course of the Second World War.

As liberation approached, a fifty-franc note was produced on the orders of the Supreme Headquarters Allied Expeditionary Force (S.H.A.E.F.) in 1944, in anticipation of its use by the Allied Military Government for Occupied Territories once the invasion of France was under way (Figure 1).

Figure 1. Allied Military Government for Occupied Territories (A.M.G.O.T.) fifty-franc note (1944). Source: author’s collection

Clearly, there are many aspects of resistance financing, and the funding of conventional forces, that remain to be investigated. This workshop intends to facilitate ongoing discussions.

Due to the pandemic, this workshop will take place online on 6th November 2020. The keynote speech will be given via webinar and participants’ contributions will be uploaded before the event.

The workshop is financed by the ‘Initiatives & Conference Fund’ of the Economic History Society, a ‘Conference Organisation’ bursary from the Royal Historical Society, and Oriel College, Oxford.

More information about the Economic History Society’s grant opportunities can be found here.

For more information: david.foulk@oriel.ox.ac.uk                          

@DavidFoulk9

Industrial, regional, and gender divides in British unemployment between the wars

By Meredith M. Paker (Nuffield College, Oxford)

This blog is part of a series of New Researcher blogs.

A view from Victoria Tower, depicting the position of London on both sides of the Thames, 1930. Available at Wikimedia Commons.

‘Sometimes I feel that unemployment is too big a problem for people to deal with … It makes things no better, but worse, to know that your neighbours are as badly off as yourself, because it shows to what an extent the evil of unemployment has grown. And yet no one does anything about it’.

A skilled millwright, Memoirs of the Unemployed, 1934.

At the end of the First World War, an inflationary boom collapsed into a global recession, and the unemployment rate in Britain climbed to over 20 per cent. While the unemployment rate in other countries recovered during the 1920s, in Britain it remained near 10 per cent for the entire decade before the Great Depression. This persistently high unemployment was then intensified by the early 1930s slump, leading to an additional two million British workers becoming unemployed.

What caused this prolonged employment downturn in Britain during the 1920s and early 1930s? Using newly digitized data and econometrics, my project provides new evidence that a structural transformation of the economy away from export-oriented heavy manufacturing industries toward light manufacturing and service industries contributed to the employment downturn.

At a time when few countries collected any reliable national statistics at all, the Ministry of Labour published unemployment statistics for men and women in 100 industries in every month of the interwar period. These statistics derived from Britain’s unemployment benefit program established in 1911, the first such program in the world. While many researchers have used portions of this remarkable data by manually entering them into a computer, I was able to improve on this technique by developing a process using an optical-character-recognition iPhone app. The digitization of all the printed tables in the Ministry of Labour’s Gazette from 1923 through 1936 enables the econometric analysis of four times as many industries as in previous research and permits separate analyses for male and female workers (Figure 1).
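As a rough illustration of the digitization step, the sketch below shows an analogous OCR pipeline in Python. The author used an iPhone app; the library, file name, and table layout here are assumptions for illustration only.

```python
# Minimal sketch of an OCR table-digitization pipeline, analogous to the
# process described above. pytesseract stands in for the actual tool used,
# and the file name and column layout are hypothetical.
import pandas as pd
import pytesseract
from PIL import Image

def digitize_gazette_page(image_path: str) -> pd.DataFrame:
    """OCR one scanned Gazette table page into a tidy DataFrame."""
    text = pytesseract.image_to_string(Image.open(image_path))
    rows = []
    for line in text.splitlines():
        parts = line.split()
        # Keep rows shaped like "<industry name ...> <male %> <female %>".
        if len(parts) >= 3 and all(
            p.replace(".", "", 1).isdigit() for p in parts[-2:]
        ):
            rows.append({
                "industry": " ".join(parts[:-2]),
                "male_unemployment": float(parts[-2]),
                "female_unemployment": float(parts[-1]),
            })
    return pd.DataFrame(rows)

# One month's table; the output would still be checked against the print.
df = digitize_gazette_page("gazette_1923_01.png")
```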

Figure 1: Data digitization. Left-hand side is a sample printed table in the Ministry of Labour Gazette. Right-hand side is the cleaned digitized table in Excel.

This new data and analysis reveal four key findings about interwar unemployment. First, the data show that unemployment was different for men and women. The unemployment rate for men was generally higher than for women, averaging 16.1 per cent and 10.3 per cent, respectively. Unemployment increased faster for women at the onset of the Great Depression but also recovered more quickly (Figure 2). One reason for these distinct experiences is that men and women generally worked in different industries. Many unemployed men had previously worked in coal mining, building, iron and steel founding, and shipbuilding, while many unemployed women came from the cotton-textile industry, retail, hotel and club services, the woollen and worsted industry, and tailoring.

Figure 2: Male and female monthly unemployment rates. Source: Author’s digitization of Ministry of Labour Gazettes.

Second, regional differences in unemployment rates in the interwar period were not due only to the different industries located in each region. There were large regional differences in unemployment above and beyond the effects of the composition of industries in a region.

Third, structural change played an important role in interwar unemployment. A series of regression models indicate that, ceteris paribus, industries that expanded to meet production needs during World War I had higher unemployment rates in the 1920s. Additionally, industries that exported much of their production also faced more unemployment. An important component of the national unemployment problem was thus the adjustments that some industries had to make due to the global trade disturbances following World War I.
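A minimal sketch of this kind of cross-industry regression is given below; the variable names and specification are illustrative assumptions, not the paper's exact model.

```python
# Illustrative cross-industry regression: 1920s unemployment on wartime
# expansion and export orientation. All file and column names are
# hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

industries = pd.read_csv("industry_panel.csv")  # hypothetical digitized data

# unemp_1920s:   average industry unemployment rate over the 1920s
# wwi_expansion: growth of the industry's output during WWI
# export_share:  share of the industry's output that was exported
# region:        broad region of the industry's main location
model = smf.ols(
    "unemp_1920s ~ wwi_expansion + export_share + C(region)",
    data=industries,
).fit()
print(model.summary())
```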

Finally, the Great Depression accelerated this structural change. In almost every sector, more adjustment occurred in the early 1930s than in the 1920s. Workers were drawn from declining industries into growing ones at a particularly fast rate during the Great Depression.

Taken together, these results suggest that there were significant industrial, regional, and gender divides in interwar unemployment that are obscured by national unemployment trends. The employment downturn between the wars was thus intricately linked with the larger structural transformation of the British economy.


Meredith M. Paker

meredith.paker@nuffield.ox.ac.uk

Twitter: @mmpaker

Italy and the Little Divergence in Wages and Prices: Evidence from Stable Employment in Rural Areas, 1500-1850

by Mauro Rota (Sapienza University of Rome) and Jacob Weisdorf (Sapienza University of Rome)

The full article from this post is now published in The Economic History Review and is available on Early View at this link.

The Medieval Plow (Moldboard Plow). Farming in the Middle Ages. Available at Wikimedia Commons

More than half a century ago, Carlo Cipolla argued that early-modern Italy suffered a prolonged economic downturn. Subsequently, Cipolla’s view was challenged by Domenico Sella, who contended that Italy’s downturn was mainly an urban experience, with the countryside witnessing both rising agricultural productivity and growing proto-industry at the time. If Sella’s view is correct, it is no longer certain that rural Italy performed differently from its rural counterparts in North-Western Europe. This has potential implications for how we think about long-run trends in historical workers’ living standards and how these varied across Europe.

The common narrative – that early-modern Europe witnessed a little divergence in living standards – is underpinned by daily wages paid to urban labour. These show that London workers earned considerably more than those in other leading European cities. There are two important reasons, however, why casual urban wages might overstate the living standards of most early-modern workers. First, urban workers made up only a modest fraction of the total workforce. They also received an urban wage premium to cover their urban living expenses – a premium that most workers therefore did not enjoy. Second, many workers were employed on casual terms and had to piece their annual earnings together from daily engagements. This entailed a risk of involuntary underemployment for which workers without stable engagements were compensated. Unless this compensation is accounted for, day wages, combined with the usual but potentially ahistorical assumption of full-year casual employment, will overstate historical workers’ annual earnings and thus their living standards.
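A stylized version of the argument, with an illustrative 312-day working year and a compensation premium π for underemployment risk (both purely illustrative, not estimates from the article):

```latex
w_{\text{casual}} = (1 + \pi)\, w_{\text{stable}}, \qquad
\hat{E}_{\text{annual}} = 312 \times w_{\text{casual}}
= (1 + \pi) \times 312 \times w_{\text{stable}}
```

Assuming a full year of work at the casual day rate therefore overstates true expected annual earnings by the factor (1 + π).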

We present an early-modern wage index for ‘stable’ rural workers in the Grand Duchy of Tuscany, an important region in pre-industrial Italy (Figure 1).  Since these wages avoid the premiums described above, we argue that our wages offer a more suitable estimate of most historical workers’ living standards, and that the little divergence therefore should be considered using such wages instead. We draw a number of important conclusions on the basis of the new data and their comparison with pre-existing urban casual wages for Italy and rural stable wages for England.

Figure 1: Implied daily real wages of unskilled urban and rural workers in Italy, 1500-1850. Source: as per article.

First, we observe that early-modern Italy’s downturn was effectively an urban one, with stable rural workers largely able to maintain their real annual income across the entire early-modern period (Figure 1). Indeed, the urban decline started from a pedestal of unprecedentedly high wages, possibly the highest in early-modern Europe, and certainly at heights suggesting that urban casual workers were paid considerable wage premiums to cover urban penalties alongside the risk of underemployment.

Our ‘apples-to-apples’ wage comparison within the Grand Duchy of Tuscany gives a precise indication of the size of the wage premiums discussed above. Figure 2 suggests that casual workers received a premium for job insecurity, and that urban workers, unlike rural workers, also received an urban wage premium. Further, when we compare the premium-free wages in the Grand Duchy of Tuscany with similar ones for England, we find that annual English earnings went from 10 per cent higher than those in Italy in 1650 to 150 per cent higher by 1800 (Figure 3). If wages reflected labour productivity, then unskilled English workers – but not their Italian equals – grew increasingly more productive in the period preceding the Industrial Revolution.

Figure 2: The implied daily real wages of unskilled casual and stable workers in Tuscany, 1500-1850. Source: As per article.
Figure 3: Real annual income of unskilled workers in Italy and England, 1500-1850. Source: As per article.

We draw three main conclusions from our findings. First, our data support the hypothesis that early-modern Italy’s downturn was mainly an urban experience. Real rural earnings in Tuscany stayed flat between 1500 and 1850. Second, we find that rural England pulled away from Italy (Tuscany) after c. 1650. This divergence happened not because our sample of Italian workers lagged behind their North-Western European counterparts, as earlier studies based on urban casual wages have suggested, but because English workers were paid increasingly more than their Southern European peers. This observation brings us to our final conclusion: to the extent that annual labour productivity in England was reflected in the development of annual earnings, it increasingly outgrew Italian achievements.

To contact the authors:

Mauro Rota, mauro.rota@uniroma1.it

Jacob Weisdorf, jacob.weisdorf@uniroma1.it

Seeing like the Chinese imperial state: how many government employees did the empire need?

By Ziang Liu (LSE)

This blog is part of a series of New Researcher blogs.

The Qianlong Emperor’s Southern Inspection Tour, Scroll Six Entering Suzhou and the Grand Canal. Available at Wikimedia Commons

How many government employees do we need? This has always been a question for both politicians and the public, and we often see debates from both sides about whether the government should hire more employees or cut their number, for a variety of reasons.

This was also a question for the Chinese imperial government centuries ago. Because the Chinese state governed a vast territory with great cultural and socio-economic diversity, the size of government concerned not only the empire’s fiscal challenges but also the effectiveness of its governance. My research finds that while a large-scale reduction in government expenditure may have short-term benefits in improving fiscal conditions, in the long term the lack of investment in administration may harm the state’s ability to govern.

Using the Chinese case, we can ask how many employees the imperial central government counted as ‘sufficient’. How did the Chinese central government make the calculation? After all, a government has to know the numbers before it takes any further action.

Before the late sixteenth century, the Chinese central government did not have a clear account of how much was spent on its local governments. It was only then, when the marketisation of China’s economy enabled the state to calculate the costs of its spending in silver currency, that the imperial central government began to ‘see’ the previously unknown amount of local spending in a unified and legible form.

Consequently, my research finds that over the sixteenth to eighteenth centuries, the Chinese imperial central state significantly improved its fiscal circumstances at the expense of local finance. During roughly a century of fiscal pressure between the late sixteenth and late seventeenth centuries (see Figure A), the central government continuously expanded its income and cut local spending on government employees.

Eventually, at the turn of the eighteenth century, the central treasury’s annual income was roughly four to five times larger than the late sixteenth century level (see Figure B), and the accumulated fiscal surplus was in general one to two times greater than its annual budgetary income (see Figure C).

But what the central government left to the localities, in both manpower and funding, seems to have been too little to govern the empire. My research finds that whether measured by the total number of government employees (see Figure D) or by employees per thousand population (see Figure E), the size of China’s local states shrank quite dramatically from the late sixteenth century.

In the sample regions, we find that in the eighteenth century only one to two government employees served every thousand local residents (Figure E). In the meantime, records also show that salary payments for local government employees remained completely unchanged from the late seventeenth century.

Therefore, my research suggests that when the Chinese central state intervened in local finance, its intention was to constrain rather than to rationalise it. Even in the eighteenth century, when the empire’s fiscal circumstances were unprecedentedly good, the central state did not consider increasing investment in local administration.

Given the constant population growth in China, from 100 million in the early seventeenth century to more than 300 million in the early nineteenth century, it is hard to believe that local governments of this size could govern effectively. This is not to mention that, owing to the reductions in local finance, from the late seventeenth century the Chinese local states kept more personnel for state logistics and information networks than for local public services such as education and local security.

Britain’s inter-war super-rich: the 1928/9 ‘millionaire list’

by Peter Scott (Henley Business School at the University of Reading)

The roaring 1920s. Available at <https://www.lovemoney.com/gallerylist/87193/the-roaring-1920s-richest-people-and-how-they-made-their-money>

Most of our information on wealth distribution and top incomes is derived from data on wealth left at death, recorded in probates and estate duty statistics. This study utilises a unique list of all living millionaires for the 1928/9 tax year, compiled by the Inland Revenue to estimate how much a 40 per cent estate duty on them would raise in government revenue. Millionaires were identified by their incomes (over £50,000, or £3 million in 2018 prices), equivalent to a capitalised sum of over £1 million (£60 million in 2018 prices). Data for living millionaires are particularly valuable, given that even in the 1930s millionaires often had considerable longevity, and data on wealth at death typically reflected fortunes made, or inherited, several decades previously. Some millionaires’ names had been redacted – where their dates of birth or marriage were known – but cross-referencing with various data sources enabled the identification of 319 millionaires, equivalent to 72.8 per cent of the number appearing on the millionaire list.
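The two thresholds imply the capitalisation rate used to translate income into wealth, that is, income was treated as a 5 per cent return on capital:

```latex
\text{capitalised wealth} = \frac{\text{income}}{r}
\quad\Longrightarrow\quad
r = \frac{£50{,}000}{£1{,}000{,}000} = 5\%
```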

The tax year 1928/9 is a very useful benchmark for assessing the impact of the First World War and its aftermath on the composition of the super-rich. Prior to the twentieth century, the highest echelons of wealth were dominated by the great landowners, reflecting a concentration of land-ownership unparalleled in Europe. William Rubinstein found that the wealth of the greatest landowners exceeded that of the richest businessmen until 1914, if not later. However, war-time inflation, higher taxes, and the post-war agricultural depression negatively impacted their fortunes. Meanwhile, some industrialists benefitted enormously from the War.

By 1928, business fortunes had pushed even the wealthiest aristocrats, the Dukes of Bedford and Westminster, into seventh and eighth place on the list of top incomes. Their taxable incomes, £360,000 and £336,000 respectively, were dwarfed by those of the richest businessmen, such as the shipping magnate Sir John Ellerman (Britain’s richest man; the son of an immigrant corn broker who died in 1871, leaving £600) with a 1928 income of £1,553,000, or James Williamson, the first Baron Ashton, who pioneered the mass production of linoleum – second on the list, with £760,000. Indeed, some 90 per cent of named 1928/9 millionaires had fortunes based on (non-landed) business incomes. Moreover, the vast majority – 85.6 per cent of non-landed males on the list – were active businesspeople, rather than rentiers.

“Businesspeople millionaires” were highly clustered in certain sectors (relative to those sectors’ shares of all corporate profits): tobacco (5.40 times over-represented); shipbuilding (4.79); merchant and other banking (3.42); foods (3.20); ship-owning (3.02); other textiles (2.98); distilling (2.67); and brewing (2.59). These eight sectors collectively comprised 42.4 per cent of all 1928/9 millionaires, but only 15.5 per cent of aggregate profits. Meanwhile, important sectors such as chemicals, cotton and woollen textiles, construction, and, particularly, distribution were substantially under-represented.
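The over-representation figures are simple ratios of shares; the aggregate numbers just quoted give, for the eight sectors combined:

```latex
\text{over-representation}
= \frac{\text{share of millionaires}}{\text{share of corporate profits}}
= \frac{42.4\%}{15.5\%} \approx 2.7
```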

The over-represented sectors were characterised by rapid cartelisation and/or integration, which, in most cases, had intensified during the War and its aftermath. Given that Britain had very limited tariffs, cartels and monopolies could only raise prices in sectors with other barriers to imports, principally “strategic assets”: assets that sustain competitive advantage through being valuable, rare, inimitable, and imperfectly substitutable. These included patents (rayon); control of distribution (brewing and tobacco); strong brands (whisky; branded packaged foods); reputational assets (merchant banking); or membership of international cartels that granted territorial monopolies (shipping; rayon). Conversely, there is very little evidence of “technical” barriers, such as L-shaped cost curves, that could have offset the welfare costs of industrial combination and concentration through scale economies. Instead, amalgamation or cartelisation were typically followed by rising real prices.

Another less widespread but important tactic for gaining a personal and corporate competitive edge was the use of sophisticated tax avoidance/evasion techniques to reduce tax liability to a fraction of its headline rate. Tax avoidance was commonplace among Britain’s economic elite by the late 1920s, but a small proportion of business millionaires developed it to a level where most of their tax burden was removed, mainly via transmuting income into non-taxable capital gains and/or creating excessive depreciation tax allowances. Several leading British millionaires, including Ellerman, Lord Nuffield, Montague Burton, and the Vestey brothers (Union Cold Storage), were known to the Inland Revenue as skilled and successful tax avoiders.

These findings imply that the composition of economic elites should not simply be conflated with ‘wealth-creating’ prosperity (except for those elites), especially where their incomes included a substantial element of rent-seeking. Erecting or defending barriers to competition (through cartels, mergers, and strategic assets) may increase the number of very wealthy people, but it is unlikely to have generated a positive influence on national economic growth and living standards, unless accompanied by rationalisation that substantially lowered costs. In this respect, typical inter-war business millionaires had strong commonalities with earlier, landed, British elites, in that they sustained their wealth by creating, and then perpetuating, scarcity in the markets for the goods and services they controlled.

To contact the author:

p.m.scott@henley.ac.uk

Spain’s tourism boom and the social mobility of migrant workers

By José Antonio García Barrero (University of Barcelona)

This blog is part of a series of New Researcher blogs.

Menorca, Balearic Islands, Spain. Available at Wikimedia Commons.

My research, which is based on a new database of the labour force in Spain’s tourism industry, analyses the assimilation of internal migrants in the Balearic Islands during the tourism boom between 1959 and 1973.

I show that tourism created opportunities for upward social mobility for natives and migrants alike, but the extent of upward mobility was uneven across groups. While natives, foreigners and internal urban migrants achieved significant upward mobility, the majority of migrants, who came from rural areas, found it harder to advance. The transferability of their human capital to the services economy and the characteristics of their migration flows determined the extent of migrants’ labour market attainment.

The tourism boom was one of the main arenas of Spain’s modernisation in the twentieth century. Between 1959 and 1973, the country became one of the top tourist economies of the world, triggering a rapid and intense demographic and landscape transformation in the coastal regions of the peninsula and the archipelagos.

The increasing demand for tourism services from West European societies triggered the massive arrival of tourists to the country. In 1959, four million tourists visited Spain; by 1973, the country hosted 31 million visitors. The epicentre of this phenomenon was the Balearic Islands.

In the Balearics, a profound transformation took place. In little more than a decade, the capacity of the tourism industry skyrocketed from 215 to 1,534 hotels and pensions, and from 11,496 to 216,113 hotel beds. Between 1950 and 1981, the number of Spanish-born residents from outside the Balearics increased from 33,000 to 150,000, attracted by the high labour demand of tourism services. In 1950, they accounted for 9% of the total population; by 1981, that share had reached 34.4%.

In my research, I analyse whether the internal migrants who arrived in the archipelago – mostly seasonal migrants from stagnant rural agrarian areas in southern Spain who ended up becoming permanent residents – were able to take advantage of the rapid and profound transformation of the tourism industry. Rather than focusing on the process of movement from agrarian to services activities, my interest is in the possibilities for upward mobility in the host society.

I use a new database of the workforce, both men and women, in the tourism industry, comprising a total of 10,520 observations with a wide range of personal, professional and business data for each individual up to 1970. These data make it possible to analyse the careers of these workers in the emerging service industry by cohort characteristics, including variables such as gender, place of birth, language skills and firm. Using these variables, I examine the likelihood of belonging to each of four income categories.
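As an illustration of how such categorical income outcomes might be modelled, the sketch below fits an ordered logit; the choice of estimator is an assumption, and all file and column names are hypothetical.

```python
# Illustrative ordered-logit model for the four income categories.
# The estimator and all names here are assumptions, not the paper's
# actual specification.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

workers = pd.read_csv("balearics_tourism_workers.csv")  # hypothetical file

# income_category: ordered 1 (lowest) to 4 (highest)
exog = pd.get_dummies(
    workers[["gender", "place_of_birth", "language_skills", "firm"]],
    drop_first=True,
).astype(float)

model = OrderedModel(workers["income_category"], exog, distr="logit")
result = model.fit(method="bfgs")
print(result.summary())
```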

My results suggest that the tourism explosion opened significant opportunities for upward labour mobility. Achieving high-income jobs was possible for workers involved in hospitality and tourism-related activities. But those who took most advantage of this scenario were male natives, urban migrants from northern Spain (mainly Catalonia) and, especially, workers from other European countries with clear advantages in terms of language skills.

For natives, human and social capital made the difference. For migrants, self-selection and the transferability of skills from cities to the new leisure economies were decisive.

Likewise, despite lagging behind, those from rural areas in southern Spain were able to achieve some degree of upward mobility, progressively, although not completely, reducing the gap with natives. Acquiring human capital through learning-by-doing and forming networks of support and information with migrants from the same areas increased the chances of improvement. Years of experience, knowing where to find job opportunities and holding personal contacts in the firms were important assets.

In that sense, the way migrants arrived in the archipelago mattered. Those more exposed to seasonal flows faced a lower capacity for upward mobility, since they were recruited in their place of origin rather than through migrant networks and returned to their homes at the end of each season.

In comparison, those who relied on migratory networks and remained as residents in the archipelago had a greater chance of getting better jobs and reducing their socio-economic distance from the natives.

Baumol, Engel, and Beyond: Accounting for a century of structural transformation in Japan, 1885-1985

by Kyoji Fukao (Hitotsubashi University) and Saumik Paul (Newcastle University and IZA)

The full article from this blog post was published in The Economic History Review and is now available on Early View at this link.

Bank of Japan, silver convertible yen. Available on Wiki Commons

Over the past two centuries, many industrialized countries have experienced dramatic changes in the sectoral composition of output and employment. The pattern of structural transformation, depicted for most of the developed countries, entails a steady fall in the primary sector, a steady increase in the tertiary sector, and a hump shape in the secondary sector. In the literature, the process of structural transformation is explained through two broad channels: the income effect, driven by the generalization of Engel’s law, and the substitution effect, which follows from differences in the rate of productivity growth across sectors, also known as “Baumol’s cost disease effect”.

At the same time, an input-output (I-O) model provides a comprehensive way to study the process of structural transformation. Input-output analysis accounts for intermediate input production by a sector, as many sectors predominantly produce intermediate inputs, and their outputs rarely enter directly into consumer preferences. Moreover, input-output analysis relies on observed data and a national income identity to handle imports and exports. These are considerable advantages in the Japanese context, where structural transformation ran first from agriculture to manufactured final consumption goods, and then to services, alongside radical changes in Japanese exports and imports over time.

We examine the drivers of the long-run structural transformation in Japan over a period of 100 years, from 1885 to 1985. During this period, the value-added share of the primary sector dropped from 60 per cent to less than 1 per cent, whereas that of the tertiary sector rose from 27 to nearly 60 per cent (Figure 1). We apply the Chenery, Shishido, and Watanabe framework to examine changes in the composition of sectoral output shares. Chenery, Shishido, and Watanabe used an inter-industry model to explain deviations from proportional growth in output in each sector and decomposed the deviation in sectoral output into two factors: the demand-side effect, a combination of the Engel and Baumol effects (discussed above), and the supply-side effect, a change in the technique of production. However, the current input-output framework is unable to uniquely separate the demand-side effect into forces labelled under the Engel and Baumol effects.
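In stylized notation (a sketch of this class of decomposition, not necessarily the article's exact formulation), with A the matrix of input-output coefficients, f final demand, and λ the economy-wide proportional growth factor, sectoral gross output is x = (I − A)⁻¹f, and the deviation from proportional growth splits into a demand-side and a technique term:

```latex
x_1 - \lambda x_0
= \underbrace{(I - A_0)^{-1}\,(f_1 - \lambda f_0)}_{\text{demand-side effect}}
\;+\;
\underbrace{\bigl[(I - A_1)^{-1} - (I - A_0)^{-1}\bigr] f_1}_{\text{technique (supply-side) effect}}
```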

Figure 1. Structural transformation in Japan, 1874-2008. Source: Fukao and Paul (2017).
Note: Sectoral shares in GDP are calculated using real GDP in constant 1934-36 prices for 1874-1940 and constant 2000 prices for 1955-2008. In the current study, the pre-WWII era is from 1885 to 1935, and the post-WWII era is from 1955 to 1985.

To conduct the decomposition analysis, we use seven I-O tables (every 10 years) in the prewar era from 1885 to 1935 and six I-O tables (every 5 years) in the postwar era from 1955 to 1985. Each table distinguishes seven sectors: agriculture, forestry, and fishery; commerce and services; construction; food; mining and manufacturing (excluding food and textiles); textiles; and transport, communication, and utilities.

The results show that the annual growth rate of GDP more than doubled in the post-WWII era compared with the pre-WWII era. Real output growth was highest in the commerce and services sector throughout the period under study, but there was also rapid growth of output in mining and manufacturing, especially in the second half of the twentieth century. Sectoral output growth in mining and manufacturing (textiles, food, and other manufacturing), commerce and services, and transport, communications, and utilities outpaced GDP growth in most periods. Detailed decomposition results show that in most sectors (agriculture, commerce and services, food, textiles, and transport, communication, and utilities), changes in private consumption were the dominant force behind the demand-side explanations. The demand-side effect was strongest in the commerce and services sector.

Overall, demand-side factors (a combination of the Baumol and Engel effects) were the main explanatory factors in the pre-WWII period, whereas supply-side factors were the key driver of structural transformation in the post-WWII period.

To contact the authors:

Kyoji Fukao, k.fukao@r.hit-u.ac.jp

Saumik Paul, paulsaumik@gmail.com, @saumik78267353

Notes

Baumol, William J., “Macroeconomics of unbalanced growth: the anatomy of urban crisis”, American Economic Review, 57 (1967), 415–426.

Chenery, Hollis B., Shuntaro Shishido and Tsunehiko Watanabe, “The pattern of Japanese growth, 1914−1954”, Econometrica, 30 (1962), 1, 98−139.

Fukao, Kyoji and Saumik Paul “The Role of Structural Transformation in Regional Convergence in Japan: 1874-2008.” Institute of Economic Research Discussion Paper No. 665. Tokyo: Institute of Economic Research (2017).

Colonialism, institutional quality and the resource curse

by Jubril Animashaun (University of Manchester)

This blog is part of a series of New Researcher blogs.

Why are so many oil-rich countries characterised by slow economic growth and corruption? Are they cursed by the resource endowment per se, or by the mismanagement of oil wealth? We used to think that it was mostly the latter. These days, however, we know that it is far more complicated than that: institutional reform is challenging because institutions are multifaceted and path-dependent.

A primary objective of European colonialism was to expand the economic base of the home country through the imposition of institutions that favoured rent-seeking in the colony. If inherited, such structures can constitute a significant reason for the resource curse and why post-colonial institutional reform is hard. Following this argument, post-colonial groups that benefitted from the institutional system may be able to reproduce this system after independence.

Our study finds support for this argument in oil-rich countries. This suggests the enduring impact of the sixteenth to nineteenth century European colonial practices as an obstacle to institutional reforms in oil-rich countries today.

We come to this conclusion by investigating changes in economic development over the period 1960-2015 in 69 countries. Our results show that the variation in economic development over this period can be explained to a large extent by institutional quality, oil abundance and their interaction. Our findings are unchanged after controlling for countries that became independent after 1960 (many former Portuguese colonies are in this category).
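A minimal sketch of a growth regression with such an interaction term is below; the data file and variable names are hypothetical, and the paper's actual specification may differ.

```python
# Illustrative growth regression with an institutions-oil interaction.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

countries = pd.read_csv("oil_countries_1960_2015.csv")  # hypothetical data

# growth:        long-run growth in GDP per capita, 1960-2015
# inst_quality:  institutional quality index (defined below)
# oil_abundance: measure of oil wealth
model = smf.ols(
    "growth ~ inst_quality * oil_abundance", data=countries
).fit()
# The term inst_quality:oil_abundance captures the interaction effect.
print(model.summary())
```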

In our study, we define an oil-rich country as having colonial experience if it had European colonial settlement (for example, settler mortality records) and/or if any of the colonial European languages (English, French, Spanish, etc.) persists as an official post-independence language. Persistence of the colonial language helps to distinguish colonies by the depth of colonial economic engagement.

We further capture colonialism with a dummy variable, to reduce the measurement error in estimates based on settler mortality and language. Institutions are measured as the unweighted average of executive constraints, expropriation risk and government effectiveness (the institutional quality index).

Figure 1: Log of settler mortality on institutional quality in oil and gas-rich countries that were former European colonies

To validate our result, it is important in this kind of research to distinguish the impact of colonial legacy from pre-colonial conditions in the colonised states. This is because places with sophisticated technologies could have resisted colonial occupation, and such historical technologies may also have persistent long-term effects. As our sample comprises countries with giant oil discoveries, and because oil discoveries did not drive sixteenth- to nineteenth-century European colonialism, our findings rule out such backdoor effects of colonial and pre-colonial conditions on current performance.

Figure 2: Log illiteracy and experience of colonialism in oil-rich countries with control for log GDP and population

We find a significant gap in illiteracy levels between colonised and non-colonised countries. We also find that countries with colonial heritage have less trust. We suggest that, to reverse the resource curse, higher priority should be placed on investment in human capital and education. These will boost citizens’ ability to demand accountability and good governance from elected officials and improve the quality of discourse and civic engagement on institutional reforms.

Figure 3: Social trust index and the experience of colonialism

Growth Before Birth: The Relationship between Placental Weights and Infant and Maternal Health in early-twentieth century Barcelona

By Gregori Galofré-Vilà (Universitat Pompeu Fabra and Barcelona Graduate School of Economics) and Bernard Harris (University of Strathclyde)

R. Alcaraz, Maternitat, Ayuda al desvalido. Available at Wikicommons.

It is now widely accepted that early-life conditions have a significant effect on lifelong health (see e.g. Wells 2016). Many researchers have sought to examine intrauterine health by studying birth weights, but the evidence of historical changes is mixed. Although some researchers have argued that birth weights have increased over time (e.g. O’Brien et al. 2020), others have found little evidence of any significant change over the course of the last century (Roberts and Wood 2014). These findings have led Schneider (2017: 25) to conclude either that ‘fetal health has remained stagnant’ or that ‘the indicators used to measure fetal health … are not as helpful as research might hope’.

The absence of unequivocal evidence of changes in birth weight has encouraged researchers to pay more attention to other intrauterine health indicators, including the size and shape of the placenta and the ratio of placental weight to birth weight (e.g. Burton et al. 2010). The placenta transfers oxygen and nutrients from the mother to the foetus and provides the means of removing waste products. Although the evidence regarding changes in placental weight is also mixed, the placenta has been described as a ‘mirror’ reflecting the foetus’s intrauterine status (Kaur 2016: 185).

Historical studies of changes in placental weights are still very rare.  However, we have collected data on almost 4000 placentas which were weighed and measured at Barcelona’s Provincial House (La Casa Provincial de Maternitat i Expósits) between 1905 and 1920.  Our new paper (Galofré-Vilà and Harris, in press) examines the impact of short-term fluctuations in economic conditions on placental weights immediately before and during the First World War, together with the relationship between placental weights and other maternal and neonatal health indicators and long-term changes in placental weight over the course of the century.

Our first aim was to compare changes in birth weight with changes in placental weight.  As we can see from Figure 1, there was little change in average birth weights, but placental weights fluctuated more markedly.  In our paper, we show how these fluctuations may have been related to changes in real wage rates over the same period.

Figure 1. The development of birthweight and placental weight, 1905-1920. Source: as per article. Note: The dark blue line shows the monthly data and the red lines show the yearly averages with their associated 95 percent confidence intervals.

These findings support claims that the placenta is able to ‘adapt’ to changing economic circumstances, but our evidence also shows that such ‘adaptations’ may not be able to counteract the impact of maternal undernutrition entirely.  As Figure 2 demonstrates, although most neonatal markers show a reverse J-shaped curve (a higher risk of perinatal mortality with premature or small-for-gestational-age births), the relationship between placental weight and early-life mortality is U-shaped.

We also control for maternal characteristics using a Cox proportional hazards model.  Even if increases in placental weight can be regarded as a form of ‘adaptive response’, they are not cost-free, as both very low and very high placental weights are associated with increased risks of early-life mortality.  These findings are consistent with David Barker’s conclusion that elevated placental weight ratios lead to adverse outcomes in later life (Barker et al. 2010).
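A minimal sketch of such a Cox model is below; the column names are hypothetical, and the quadratic term is just one way of allowing the U-shape described above.

```python
# Illustrative Cox proportional hazards model for early-life mortality,
# controlling for maternal characteristics. All column names are
# hypothetical stand-ins for the article's variables.
import pandas as pd
from lifelines import CoxPHFitter

births = pd.read_csv("provincial_house_births.csv")  # hypothetical file

# survival_days: days survived (censored at the end of observation)
# died:          1 if the infant died during observation, 0 if censored
# A quadratic in placental weight permits a U-shaped mortality risk.
births["placental_weight_sq"] = births["placental_weight"] ** 2

cph = CoxPHFitter()
cph.fit(
    births[["survival_days", "died", "placental_weight",
            "placental_weight_sq", "maternal_age", "parity"]],
    duration_col="survival_days",
    event_col="died",
)
cph.print_summary()
```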

Figure 2. Early-life Mortality, Birthweight, Birth Length, Placental weight and BW:PW ratio. Source: as per article.

We have also compared the average value of placental weights in the Provincial House with modern Spanish data.  These data suggest that average placental weights have declined over the course of the last century.  However, the data from other countries are more mixed.  Placental weight also seems to have declined in Finland and Switzerland, but this is less obvious in other countries such as the United Kingdom and the United States.

Overall, whilst placental weights may well provide a sensitive guide to the intrauterine environment, we still know relatively little about the ways in which they may, or may not, have changed over time.  However, this picture may change if more historical series come to light.

To contact the authors: 

Gregori Galofré-Vilà, gregori.galofre@upf.edu, Twitter: @gregorigalofre

Bernard Harris, bernard.harris@strath.ac.uk

References: 

Barker, D. J. P., Thornburg, K. L., Osmond, C., Kajantie, E., and Eriksson, J. G. (2010), ‘The Surface Area of the Placenta and Hypertension in the Offspring in Later Life’, International Journal of Developmental Biology, 54, 525-530.

Burton, G., Jauniaux, E. and Charnock-Jones, D.S. (2010), ‘The influence of the intrauterine environment on human placental development’, International Journal of Developmental Biology, 54, 303-11.

Galofré-Vilà, G. and Harris, B. (in press), ‘Growth Before birth: the relationship between placental weights and infant and maternal health in early-twentieth century Barcelona’, Economic History Review.

Kaur, D. (2016), ‘Assessment of placental weight, newborn birth weight in normal pregnant women and anaemic pregnant women: a correlation and comparative study’, International Journal of Health Sciences and Research, 6, 180-7.

O’Brien, O., Higgins, M. and Mooney, E. (2020), ‘Placental weights from normal deliveries in Ireland’, Irish Journal of Medical Science, 189, 581-3.

Roberts, E., and Wood, P. (2014), ‘Birth weight and adult health in historical perspective: Evidence from a New Zealand Cohort, 1907-1922’, Social Science and Medicine, 107, 154-161.

Schneider, E. (2017), ‘Fetal health stagnation: have health conditions in utero improved in the US and Western and Northern Europe over the past 150 years?’, Social Science and Medicine, 179, 18-26.

Wells, J.C.K. (2016), The metabolic ghetto: evolutionary perspectives on nutrition, power relations and chronic disease, Cambridge: Cambridge University Press.