North & South in the 1660s and 1670s: new understanding of the long-run origins of wealth inequality in England

By Andrew Wareham (University of Roehampton)

This blog is part of a series of New Researcher blogs.

Maps of England circa 1670, Darbie 10 of 40. Available at Wikimedia Commons.

New research shows that, before the industrial revolution, a far higher proportion of houses in south-east England had multiple fireplaces than in the Midlands and northern England. When Mrs Gaskell wrote North and South, she reflected on a theme which was nearly two centuries old and which continues to divide England.

Since the 1960s, historians have wanted to use the Restoration hearth tax to provide a national survey of the distribution of population and wealth. Until now, however, technical barriers have made it impossible to move beyond city and county boundaries to make comparisons.

Hearth Tax Digital, arising from a partnership between the Centre for Hearth Tax Research (Roehampton University, UK) and the Centre for Information Modelling (Graz University, Austria), overcomes these technical barriers. This digital resource provides free access to the tax returns, with full transcription of the records and links to archival shelf marks and locations by county and parish. Data on around 188,000 households in London and 15 cities/counties can be searched, search queries can be downloaded into a databasket, and work on GIS mapping is in development.

In the 1660s and 1670s, after London, the West Riding of Yorkshire and Norfolk stand out as densely populated regions. The early stages of industrialization meant that Leeds, Sheffield, Doncaster and Halifax were overtaking the former leading towns of Hull, Malton and Beverley. But the empty landscapes of north and east Norfolk, enjoyed by holiday makers today, were also densely populated then.

The hearth tax, a nation-wide levy on domestic fireplaces, was charged against every hearth in each property and collected twice a year, at Lady Day (March) and Michaelmas (September). In 1689, after 27 years, it was abolished in perpetuity in England and Wales, but it continued to be levied in Ireland until the early nineteenth century and was levied as a one-off tax in Scotland in 1691. Any property with three or more hearths was liable to pay the tax, while many properties with one or two hearths, such as those occupied by the ordinary poor, were exempt. (The destitute and those in receipt of poor relief were not included in the tax registers.) A family living in a home with one hearth had to use it for all their cooking, heating and leisure, but properties with more than three hearths had at least one hearth in the kitchen, one in the parlour and one in an upstairs chamber.

In a substantial majority of parishes in northern England (County Durham, Westmorland, the East and North Ridings of Yorkshire), fewer than 20 per cent of households had three or more hearths, and only in the West Riding was there a significant number of parishes where 30 per cent or more of households reached that level. But in southern England, across Middlesex, Surrey, southern Essex, western Kent and a patchwork of parishes across Norfolk, it was common for at least a third of properties to have three or more hearths.

There are many local contrasts to explore further. South-east Norfolk and north-east Essex were notably more prosperous than north-west Essex, independent of the influence of London, and the patchwork pattern of wealth distribution in Norfolk around its market towns and prosperous villages is repeated in the Midlands. Nonetheless, the general pattern is clear enough: the distribution of population in the late seventeenth century was quite different from patterns found today, but Samuel Pepys and Daniel Defoe would have recognized a world in which south-east England abounded with the signs of prosperity and comfort in contrast to the north.

A century of wind power: why did it take so long to develop to utility scale?

by Mercedes Galíndez, University of Cambridge

This blog is based on research funded by a bursary from the Economic History Society. More information here

Marcellus Jacobs on a 2.5kW machine in the 1940s. Available at <http://www.jacobswind.net/history>

Seventeen years passed between Edison patenting his revolutionary incandescent light bulb in 1880 and Poul la Cour’s first test of a wind turbine for generating electricity. Yet it would be another hundred years before wind power became an established industry in the 2000s. How can we explain the delay in harnessing the cheapest source of electricity generation?

In the early twentieth century wind power emerged to fill the gaps of nascent electricity grids. This technology was first adopted in rural areas. The incentive was purely economic: the need for decentralised access to electricity. In this early stage there were no concerns about the environmental implications of wind power.

The Jacobs Wind Electricity Company delivered 30,000 three-blade wind turbines in the US between 1927 and 1957.[1] The basic mechanics of these units did not differ much from their modern counterparts. Once the standard electrical grid reached rural areas, however, the business case for wind power weakened. It soon became more economic to buy electricity from centralised utilities, which benefited from significant economies of scale.

It was not until the late 1970s that wind power became a potential substitute for electricity generated by fossil fuels or hydropower. Academic literature agrees on two main triggers for this change: the oil crises in the 1970s, and the politicisation of Climate Change. When the price of oil quadrupled in 1973, rising to nearly US$12 per barrel, industrialised countries’ dependency on foreign producers of oil was exposed. The reaction was to find new domestic sources of energy. Considerable effort was devoted to nuclear power, but technologies like wind power were also revived.

In the late 1980s Climate Change became more politicised, and interest in wind energy as a technology that could mitigate environmental damage was renewed. California’s governor, Jerry Brown, was aligned with these ideals and in 1978, in a move ahead of its time, he provided extra tax incentives to renewable energy producers in his state.[2] This soon created a ‘California Wind Rush’ which saw both local and European turbine manufacturers burst onto the market, with US$1 billion invested in the Altamont Pass region between 1981 and 1986.[3]

The California Wind Rush ended suddenly when government support was withdrawn. However, the European Union (EU) took up the challenge of maintaining the industry. In 2001, the EU introduced Directive 2001/77/EC for the promotion of renewable energy sources, which required Member States to set renewable energy targets.[4] Many further directives followed, triggering renewable energy programmes throughout the EU. Following the first directive in 2001, the installed capacity of wind power in the EU increased thirteen-fold, from 13 GW to 169 GW in 2017.

Whilst there is no doubt that the EU regulatory framework played a key role in the development of wind power, other factors were also at play. Nicolas Rochon, a green investment manager, published a memoir in 2020 in which he argued that clean energy development was also enabled by a change in the investment community. As interest rates decreased during the first two decades of the twenty-first century, investment managers revised down their expectations of future returns, which drew more attention to clean energy assets offering lower profitability. Growing competition in the sector reduced the price of electricity obtained from renewable energy.[5]
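To see why falling interest rates favour assets like wind farms, which offer modest but long-lived cash flows, a minimal discounted-cash-flow sketch helps; the cash-flow figures and rates below are purely hypothetical and are not drawn from Rochon's memoir.

```python
# Illustrative sketch (hypothetical numbers): how a falling discount rate raises
# the present value of a long-lived, low-yield asset such as a wind farm.

def npv(cash_flow_per_year: float, years: int, rate: float) -> float:
    """Present value of a constant annual cash flow discounted at `rate`."""
    return sum(cash_flow_per_year / (1 + rate) ** t for t in range(1, years + 1))

annual_cash_flow = 5.0   # hypothetical net revenue per year (say, million USD)
lifetime_years = 25      # a plausible order of magnitude for a wind project

for rate in (0.10, 0.06, 0.02):
    print(f"discount rate {rate:.0%}: NPV = {npv(annual_cash_flow, lifetime_years, rate):.1f}")

# As the discount rate falls, the same stream of modest cash flows becomes
# markedly more valuable, consistent with the argument above.
```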

My research aims to understand the macroeconomic conditions that enabled wind power to develop to national scale: in particular, how wind power developers accessed capital, and how bankers and investors took a leap of faith to invest in the technology. It will draw on oral history interviews with subjects like Nicolas Rochon, who made financial decisions on wind power projects.

To contact the author:

Mercedes Galíndez (mg570@cam.ac.uk)


[1] Righter, Robert. Wind Energy in America: A History. Norman: University of Oklahoma Press, 1996, p. 93.

[2] Madrigal, Alexis. Powering the Dream: The History and Promise of Green Technology. Cambridge, MA: Da Capo Press, 2011, p. 239.

[3] Jones, Geoffrey. Profits and Sustainability: A History of Green Entrepreneurship. Oxford: Oxford University Press, 2017, p. 330.

[4] EU Directive 2001/77/EC.

[5] Rochon, Nicolas. Ma transition énergétique 2005-2020. Paris: Les Papiers Verts, 2020.

How Indian cottons steered British industrialisation

By Alka Raman (LSE)

This blog is part of a series of New Researcher blogs.

“Methods of Conveying Cotton in India to the Ports of Shipment,” from the Illustrated London News, 1861. Available at Wikimedia Commons.

Technological advancements within the British cotton industry have widely been acknowledged as the beginning of industrialisation in eighteenth and nineteenth century Britain. My research reveals that these advances were driven by a desire to match the quality of handmade cotton textiles from India.

I highlight how the introduction of Indian printed cottons into British markets created a frenzy of demand for these exotic goods. This led to immediate imitations by British textile manufacturers, keen to gain footholds in the domestic and world markets where Indian cottons were much desired.

The process of imitation soon revealed that British spinners could not spin cotton yarn fine enough to hand-make the cloth needed for fine printing, and that British printers could not print cloth in the multitude of colourfast colours that Indian artisans had mastered over centuries.

These two key limitations in British textile manufacturing spurred demand-induced technological innovations to match the quality of Indian handmade printed cottons.

In order to test this, I chart the quality of English cotton textiles from 1740 to 1820 and compare them with Indian cottons of the same period. Threads per inch are used as the measure of quality, and digital microscopy is deployed to establish yarn composition, that is, whether the textiles are all-cotton or mixed linen-cotton.

My findings show that the earliest British ‘cotton’ textiles were mixed linen-cottons rather than all-cottons. Technological evolution in the British cotton industry was a pursuit first of coarse but all-cotton cloth, and then of fine all-cotton cloth such as muslin.

The evidence shows that British cotton cloth quality improved by 60% between 1747 and 1782 during the decades of the famous inventions of James Hargreaves’ spinning jenny, Richard Arkwright’s waterframe and Samuel Crompton’s mule. It further improved by 24% between 1782 and 1816. Overall, cloth quality improved by a staggering 99% between 1747 and 1816.
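As a quick sanity check on these figures (my own arithmetic, not part of the original research), the two sub-period improvements compound to roughly the overall figure reported:

```python
# Compounding the reported quality improvements in British cotton cloth:
# 60% between 1747 and 1782, then a further 24% between 1782 and 1816.
improvement_1747_1782 = 0.60
improvement_1782_1816 = 0.24

overall = (1 + improvement_1747_1782) * (1 + improvement_1782_1816) - 1
print(f"Overall improvement 1747-1816: {overall:.1%}")  # ~98.4%, close to the reported 99%
```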

My research challenges our current understanding of industrialisation as a British and West European phenomenon, commonly explained using rationales such as high wages, availability of local energy sources or access to New World resources. Instead, it reveals that learning from material goods and knowledge brought into Britain and Europe from the East directly and substantially affected the foundations of the modern world as we know it.

The results also pose a more fundamental question: how does technological change take place? Based on my findings, learning from competitor products – especially imitation of novel goods using indigenous processes – may be identified as one crucial pathway for the creation of new ideas that shape technological change.

Workshop – Bullets and Banknotes: War Financing through the 20th Century

By David Foulk (University of Oxford)

This workshop brings together military historians, international relations researchers, and economic historians to explore the financing of conventional and irregular warfare.

Martial funds originated from a variety of legitimate and illegitimate sources. The former include direct provision by government and central banking activity. Private donations also need to be considered, as they have proven a viable means of financing paramilitary activity. Illegitimate sources in the context of war refer to an occupying force’s ability to extract economic and monetary resources, and include, for example, ‘patriotic hold-ups’ of financial institutions and spoliation.

This workshop seeks to provide answers to three central questions. First, who paid for war? Second, how did belligerents finance war: by borrowing, or by printing money? Finally, was there a juncture between resistance financing and the funding of conventional forces?

In the twentieth century, the global nature of conflict drastically altered existing power blocs and fostered ideologically motivated regimes. These changes were aided by improvements in mass communication technology, and a nascent corporatism that replaced the empire-building of the long nineteenth century. 

What remained unchanged, however, was the need for money in the waging of war. Throughout history, success in war has depended on financial support. With it, armies can be paid and fed; research can be encouraged; weapons can be bought, and ordnance shipped. Without it, troops, arms, and supplies become scarcer and more difficult to acquire. Many of these considerations are just as applicable to clandestine forces. Nonetheless, there is an obvious constraint for the latter: their activity takes place in secret. This engenders important operational differences compared with state-sanctioned warfare and generates its own specific problems.

Traditionally, banking operations are predicated on an absence of confidence between parties to a transaction. Banks are institutional participants who act as trusted intermediaries, but what substitute intermediaries exist if the banking system has failed?  This was the quandary faced by members of the internal French resistance during the Second World War. Who could they trust to supply them regularly with funds? Where could they safely store their money, and who would change foreign currency into francs on their behalf?

Members of resistance groups could not acquire funds from the remnants of the French government while Marshal Pétain’s regime retained nominal control over the Non-Occupied Zone, nor could they obtain credit from the banking system.  Instead, resistance forces came to depend on external donations which were either airdropped or handed over by agents working on behalf of the British, American and Free French secret services. The traditional role of the banking sector was supplanted by military agents; the few bankers involved in resistance activities acted more as moneychangers, rather than as issuers of credit.

Without funding, clandestine operatives were unable to purchase food from the black market, or to rent rooms. Wages were indeed paid to resistance members, but there were disparities between the different groups and no official pay-scale existed.  Instead, leaders of the various groups decided on the salary range of their subordinates, which varied during the Second World War.

As liberation approached, a fifty-franc note was produced in 1944 on the orders of the Supreme Headquarters Allied Expeditionary Force (S.H.A.E.F.), in anticipation of its use by the Allied Military Government for Occupied Territories once the invasion of France was under way (Figure 1).

Figure 1. Allied Military Government for Occupied Territories (A.M.G.O.T.) fifty-franc note (1944). Source: author’s collection.

Clearly, there are many aspects of resistance financing, and the funding of conventional forces, that remain to be investigated. This workshop intends to facilitate ongoing discussions.

Due to the pandemic, this workshop will take place online on 13th November 2020. The keynote speech will be given via webinar and participants’ contributions will be uploaded before the event. To register for the event, click here

The workshop is financed by the ‘Initiatives & Conference Fund’ from the Economic History Society, a ‘Conference Organisation’ bursary from Royal Historical Society, and Oriel College, Oxford. 

More information about the Economic History Society’s grants opportunities can be found here

For more information: david.foulk@oriel.ox.ac.uk                          

@DavidFoulk9

Industrial, regional, and gender divides in British unemployment between the wars

By Meredith M. Paker (Nuffield College, Oxford)

This blog is part of a series of New Researcher blogs.

A view from Victoria Tower, showing London on both sides of the Thames, 1930. Available at Wikimedia Commons.

‘Sometimes I feel that unemployment is too big a problem for people to deal with … It makes things no better, but worse, to know that your neighbours are as badly off as yourself, because it shows to what an extent the evil of unemployment has grown. And yet no one does anything about it’.

A skilled millwright, Memoirs of the Unemployed, 1934.

At the end of the First World War, an inflationary boom collapsed into a global recession, and the unemployment rate in Britain climbed to over 20 per cent. While the unemployment rate in other countries recovered during the 1920s, in Britain it remained near 10 per cent for the entire decade before the Great Depression. This persistently high unemployment was then intensified by the early 1930s slump, leading to an additional two million British workers becoming unemployed.

What caused this prolonged employment downturn in Britain during the 1920s and early 1930s? Using newly digitized data and econometrics, my project provides new evidence that a structural transformation of the economy away from export-oriented heavy manufacturing industries toward light manufacturing and service industries contributed to the employment downturn.

At a time when few countries collected any reliable national statistics at all, the Ministry of Labour published unemployment statistics for men and women in 100 industries in every month of the interwar period. These statistics derived from Britain’s unemployment benefit program established in 1911—the first such program in the world. While many researchers have used portions of this remarkable source by manually entering the figures into a computer, I was able to improve on this technique by developing a process using an optical-character-recognition iPhone app. The digitization of all the printed tables in the Ministry of Labour’s Gazette from 1923 through 1936 enables the econometric analysis of four times as many industries as in previous research and permits separate analyses for male and female workers (Figure 1).

Figure 1: Data digitization. Left-hand side is a sample printed table in the Ministry of Labour Gazette. Right-hand side is the cleaned digitized table in Excel.

This new data and analysis reveal four key findings about interwar unemployment. First, the data show that unemployment was different for men and women. The unemployment rate for men was generally higher than for women, averaging 16.1 per cent and 10.3 per cent, respectively. Unemployment increased faster for women at the onset of the Great Depression but also recovered more quickly (Figure 2). One reason for these distinct experiences is that men and women generally worked in different industries. Many unemployed men had previously worked in coal mining, building, iron and steel founding, and shipbuilding, while many unemployed women came from the cotton-textile industry, retail, hotel and club services, the woolen and worsted industry, and tailoring.
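For readers curious about the mechanics, gender-specific summaries of this kind take only a few lines once the Gazette tables are digitized. The column names below are hypothetical placeholders and the values are toy figures, not the actual series:

```python
# A minimal sketch (not the author's code) of the summary behind the averages
# quoted above, assuming the digitised Gazette tables have been tidied into a
# long-format table. Column names are hypothetical; the toy rows below keep
# only the fields needed for this particular summary.
import pandas as pd

records = pd.DataFrame({
    "year": [1923, 1923, 1930, 1930],
    "sex": ["male", "female", "male", "female"],
    "unemployment_rate": [13.2, 9.1, 19.5, 14.8],   # per cent
})

# Average unemployment rate by sex across all observations.
print(records.groupby("sex")["unemployment_rate"].mean())
```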

Figure 2: Male and female monthly unemployment rates. Source: Author’s digitization of Ministry of Labour Gazettes.

Second, regional differences in unemployment rates in the interwar period were not due only to the different industries located in each region. There were large regional differences in unemployment above and beyond the effects of the composition of industries in a region.

Third, structural change played an important role in interwar unemployment. A series of regression models indicate that, ceteris paribus, industries that expanded to meet production needs during World War I had higher unemployment rates in the 1920s. Additionally, industries that exported much of their production also faced more unemployment. An important component of the national unemployment problem was thus the adjustments that some industries had to make due to the global trade disturbances following World War I.
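The paper's exact specification is not reproduced here, but a minimal sketch of a cross-industry regression of this kind might look as follows; the variable names and the simulated data are my own illustrative assumptions, not the author's:

```python
# Hedged sketch of a cross-industry regression linking 1920s unemployment to
# wartime expansion and export orientation. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100  # roughly the number of industries in the Gazette tables

industries = pd.DataFrame({
    "wwi_expansion": rng.integers(0, 2, n),   # 1 if the industry expanded during WWI
    "export_share": rng.uniform(0, 0.8, n),   # share of output exported
})
# Simulated 1920s unemployment rate with positive effects of both regressors.
industries["unemployment_1920s"] = (
    8 + 4 * industries["wwi_expansion"] + 10 * industries["export_share"]
    + rng.normal(0, 2, n)
)

model = smf.ols("unemployment_1920s ~ wwi_expansion + export_share", data=industries).fit()
print(model.summary().tables[1])
```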

Finally, the Great Depression accelerated this structural change. In almost every sector, more adjustment occurred in the early 1930s than in the 1920s. Workers were drawn from declining industries into growing ones at a particularly fast rate during the Great Depression.

Taken together, these results suggest that there were significant industrial, regional, and gender divides in interwar unemployment that are obscured by national unemployment trends. The employment downturn between the wars was thus intricately linked with the larger structural transformation of the British economy.


Meredith M. Paker

meredith.paker@nuffield.ox.ac.uk

Twitter: @mmpaker

Italy and the Little Divergence in Wages and Prices: Evidence from Stable Employment in Rural Areas, 1500-1850

by Mauro Rota (Sapienza University of Rome) and Jacob Weisdorf (Sapienza University of Rome)

The full article from this post is now published in The Economic History Review and is available on Early View at this link

The Medieval Plow (Moldboard Plow). Farming in the Middle Ages. Available at Wikimedia Commons

More than half a century ago, Carlo Cipolla argued that early-modern Italy suffered a prolonged economic downturn. Subsequently, Cipolla’s view was challenged by Domenico Sella, who contended that Italy’s downturn was mainly an urban experience, with the countryside witnessing both rising agricultural productivity and growing proto-industry at the time. If Sella’s view is correct, it is no longer certain that rural Italy performed differently from its rural counterparts in North-Western Europe. This has implications for how we think about long-run trends in historical workers’ living standards and how these varied across Europe.

The common narrative – that early-modern Europe witnessed a little divergence in living standards – is underpinned by daily wages paid to urban labour. These show that London workers earned considerably more than those in other leading European cities. There are two important reasons, however, why casual urban wages might overstate the living standards of most early-modern workers. First, urban workers made up only a modest fraction of the total workforce. They also received an urban wage premium to cover their urban living expenses – a premium that most workers therefore did not enjoy. Second, many workers were employed on casual terms and had to piece their annual earnings together from daily engagements. This entailed a risk of involuntary underemployment for which workers without stable engagements were compensated. Unless this compensation is accounted for, day wages, on the usual but potentially ahistorical assumption of full-year casual employment, will overstate historical workers’ annual earnings and thus their living standards.
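A back-of-the-envelope illustration of this point, using entirely hypothetical wage and employment figures rather than the authors' data, shows how the full-year assumption inflates casual workers' implied annual earnings:

```python
# Hypothetical illustration of why day wages under a full-year-employment
# assumption overstate annual earnings for casual workers.

casual_day_wage = 1.00   # index: casual urban day wage, including risk and urban premia
stable_day_wage = 0.75   # index: stable rural day wage, no premia

full_year_days = 260               # the usual, possibly ahistorical, assumption
casual_days_actually_worked = 180  # allowing for involuntary underemployment

naive_casual_annual = casual_day_wage * full_year_days                  # 260
adjusted_casual_annual = casual_day_wage * casual_days_actually_worked  # 180
stable_annual = stable_day_wage * full_year_days                        # 195

print(naive_casual_annual, adjusted_casual_annual, stable_annual)
# The naive casual figure exceeds both the adjusted casual figure and the
# stable worker's annual earnings, illustrating the overstatement.
```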

We present an early-modern wage index for ‘stable’ rural workers in the Grand Duchy of Tuscany, an important region in pre-industrial Italy (Figure 1). Since these wages avoid the premiums described above, we argue that they offer a more suitable estimate of most historical workers’ living standards, and that the little divergence should therefore be reconsidered using such wages. We draw a number of important conclusions on the basis of the new data and their comparison with pre-existing urban casual wages for Italy and rural stable wages for England.

Figure 1: Implied daily real wages of unskilled urban and rural workers in Italy, 1500-1850. Source: as per article.

First, we observe that early-modern Italy’s downturn was effectively an urban one, with stable rural workers largely able to maintain their real annual income across the entire early-modern period (Figure 1). Indeed, the urban decline came from a pedestal of unprecedentedly high wages, possibly the highest in early-modern Europe, and certainly at heights suggesting that urban casual workers were paid considerable wage premiums to cover urban penalties alongside the risk of underemployment.

Our ‘apples-to-apples’ wage comparison within the Grand Duchy of Tuscany gives a precise indication of the size of the wage premiums discussed above. Figure 2 suggests that casual workers received a premium for job insecurity, and that urban workers, unlike rural workers, also received a wage premium. Further, when we compare the premium-free wages in the Grand Duchy of Tuscany with similar ones for England, we find that annual English earnings went from 10 per cent higher than those in Italy in 1650 to 150 per cent higher by 1800 (Figure 3). If wages reflected labour productivity, then unskilled English workers – but not their Italian equals – grew increasingly more productive in the period preceding the Industrial Revolution.

Figure 2: The implied daily real wages of unskilled casual and stable workers in Tuscany, 1500-1850. Source: As per article.
Figure 3: Real annual income of unskilled workers in Italy and England, 1500-1850. Source: As per article.

We make three main conclusions based on our findings. First, our data support the hypothesis that early-modern Italy’s downturn was mainly an urban experience: real rural earnings in Tuscany stayed flat between 1500 and 1850. Second, we find that rural England pulled away from Italy (Tuscany) after c. 1650. This divergence happened not because our sample of Italian workers lagged behind their North-Western European counterparts, as earlier studies based on urban casual wages have suggested, but because English workers were paid increasingly more than their Southern European peers. This observation brings us to our final conclusion: to the extent that annual labour productivity in England was reflected in the development of annual earnings, it increasingly outgrew that of Italy.

To contact the authors:

Mauro Rota, mauro.rota@uniroma1.it

Jacob Weisdorf, jacob.weisdorf@uniroma1.it

Seeing like the Chinese imperial state: how many government employees did the empire need?

By Ziang Liu (LSE)

This blog is part of a series of New Researcher blogs.

The Qianlong Emperor’s Southern Inspection Tour, Scroll Six Entering Suzhou and the Grand Canal. Available at Wikimedia Commons

How many government employees do we need? This has always been a question for both politicians and the public, and we often see debates about whether, and why, the government should take on more employees or shed them.

This was also a question for the Chinese imperial government centuries ago. Because the Chinese state governed a vast territory with great cultural and socio-economic diversity, the size of government mattered not only for the empire’s fiscal position but also for the effectiveness of its governance. My research finds that while a large-scale reduction in government expenditure may have the short-term benefit of improving fiscal conditions, in the long term a lack of investment in administration may harm the state’s ability to govern.

Using the Chinese case, we are interested in what the imperial central government counted as a ‘sufficient’ number of employees, and in how it made that calculation. After all, a government has to know the numbers before it can take any further action.

Before the late sixteenth century, the Chinese central government did not have a clear account of how much was spent by its local governments. It was only then, when the growing marketisation of China’s economy enabled the state to calculate the costs of its spending in silver currency, that the imperial central government began to ‘see’ the previously unknown amount of local spending in a unified and legible form.

Consequently, my research finds that between the sixteenth and eighteenth centuries the Chinese imperial central state significantly improved its fiscal circumstances at the expense of local finance. During roughly a century of fiscal pressure between the late sixteenth and late seventeenth centuries (see Figure A), the central government continuously expanded its income and cut local spending on government employees.

Eventually, at the turn of the eighteenth century, the central treasury’s annual income was roughly four to five times larger than the late sixteenth century level (see Figure B), and the accumulated fiscal surplus was in general one to two times greater than its annual budgetary income (see Figure C).

But what the central government left to the localities, in both manpower and funding, seems to have been too little to govern the empire. My research finds that, whether measured by the total number of government employees (see Figure D) or by employees per thousand population (see Figure E), China’s local states shrank quite dramatically from the late sixteenth century.

In the sample regions, we find that in the eighteenth century only one or two government employees served every thousand local inhabitants (Figure E). In the meantime, records also show that salary payments for local government employees remained completely unchanged from the late seventeenth century.

My research therefore suggests that when the Chinese central state intervened in local finance, its stronger intention was to constrain rather than to rationalise it. Even in the eighteenth century, when the empire’s fiscal circumstances were unprecedentedly good, the central state did not consider increasing investment in local administration.

Given China’s sustained population growth, from 100 million in the early seventeenth century to more than 300 million in the early nineteenth century, it is hard to believe that local governments of this size could govern effectively. Moreover, because of the reductions in local finance, from the late seventeenth century China’s local states devoted more personnel to state logistics and information networks than to local public services such as education and security.

Britain’s inter-war super-rich: the 1928/9 ‘millionaire list’

by Peter Scott (Henley Business School at the University of Reading)

The roaring 1920s. Available at <https://www.lovemoney.com/gallerylist/87193/the-roaring-1920s-richest-people-and-how-they-made-their-money>

Most of our information on wealth distribution and top incomes is derived from data on wealth left at death, recorded in probates and estate duty statistics. This study utilises a unique list of all living millionaires for the 1928/9 tax year, compiled by the Inland Revenue to estimate how much a 40 per cent estate duty on them would raise in government revenue. Millionaires were identified by their incomes (over £50,000, or £3 million in 2018 prices), equivalent to a capitalised sum of over £1 million (£60 million in 2018 prices). Data for living millionaires are particularly valuable, given that even in the 1930s millionaires often had considerable longevity, and data on wealth at death typically reflected fortunes made, or inherited, several decades previously. Some millionaires’ names had been redacted, but where their dates of birth or marriage were known, cross-referencing with various data sources enabled the identification of 319 millionaires, equivalent to 72.8 per cent of the number appearing on the millionaire list.
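The capitalisation implied by these thresholds is easy to check; the multiplier below is simply inferred from the £50,000 income and £1 million capital figures quoted above, not taken from the Inland Revenue's own working:

```python
# Implied capitalisation in the 1928/9 list: an income of £50,000 treated as
# equivalent to capital of £1 million, i.e. a multiplier of 20 (a 5 per cent yield).
# The multiplier is inferred from the figures quoted in the post.

income_threshold = 50_000
capital_equivalent = 1_000_000
multiplier = capital_equivalent / income_threshold
print(f"years' purchase: {multiplier:.0f}, implied yield: {1 / multiplier:.0%}")
```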

The tax year 1928/9 is a very useful benchmark for assessing the impact of the First World War and its aftermath on the composition of the super-rich. Prior to the twentieth century, the highest echelons of wealth were dominated by the great landowners, reflecting a concentration of land-ownership unparalleled in Europe. William Rubinstein found that the wealth of the greatest landowners exceeded that of the richest businessmen until 1914, if not later. However, war-time inflation, higher taxes, and the post-war agricultural depression eroded their fortunes, while some industrialists benefitted enormously from the war.

By 1928 business fortunes had pushed even the wealthiest aristocrats, the Dukes of Bedford and Westminster, into seventh and eighth place on the list of top incomes. Their taxable incomes, £360,000 and £336,000 respectively, were dwarfed by those of the richest businessmen, such as the shipping magnate Sir John Ellerman (Britain’s richest man, and the son of an immigrant corn broker who died in 1871 leaving £600) with a 1928 income of £1,553,000, or James Williamson, the first Baron Ashton, who pioneered the mass production of linoleum and came second on the list with £760,000. Indeed, some 90 per cent of named 1928/9 millionaires had fortunes based on (non-landed) business incomes. Moreover, the vast majority – 85.6 per cent of non-landed males on the list – were active businesspeople rather than rentiers.

“Businesspeople millionaires” were highly clustered in certain sectors (relative to those sectors’ shares of all corporate profits): tobacco (5.40 times over-represented); shipbuilding (4.79); merchant and other banking (3.42); foods (3.20); ship-owning (3.02); other textiles (2.98); distilling (2.67); and brewing (2.59). These eight sectors collectively comprised 42.4 per cent of all 1928/9 millionaires, but only 15.5 per cent of aggregate profits. Meanwhile important sectors such as chemicals, cotton and woollen textiles, construction, and, particularly, distribution were substantially under-represented.
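The over-representation figures are simple ratios of a sector's share of named millionaires to its share of aggregate corporate profits. A minimal sketch, with illustrative input shares chosen only to reproduce the tobacco figure quoted above:

```python
# Over-representation measure: a sector's share of named millionaires divided
# by its share of aggregate corporate profits. Only the 5.40 output is from the
# post; the input shares below are illustrative stand-ins that reproduce it.

def over_representation(millionaire_share: float, profit_share: float) -> float:
    return millionaire_share / profit_share

# e.g. a sector with 5.4% of millionaires but only 1.0% of profits
print(round(over_representation(0.054, 0.010), 2))  # 5.4, matching the tobacco figure
```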

The over-represented sectors were characterised by rapid cartelisation and/or integration, which in most cases had intensified during the war and its aftermath. Given that Britain had very limited tariffs, cartels and monopolies could only raise prices in sectors with other barriers to imports, principally “strategic assets”: assets that sustain competitive advantage through being valuable, rare, inimitable, and imperfectly substitutable. These included patents (rayon); control of distribution (brewing and tobacco); strong brands (whisky; branded packaged foods); reputational assets (merchant banking); or membership of international cartels that granted territorial monopolies (shipping; rayon). Conversely, there is very little evidence of “technical” barriers such as L-shaped cost curves that could have offset the welfare costs of industrial combination/concentration through scale economies. Instead, amalgamation or cartelisation were typically followed by rising real prices.

Another less widespread but important tactic for gaining a personal and corporate competitive edge was the use of sophisticated tax avoidance/evasion techniques to reduce tax liability to a fraction of its headline rate. Tax avoidance was commonplace among Britain’s economic elite by the late 1920s, but a small proportion of business millionaires developed it to a level where most of their tax burden was removed, mainly by transmuting income into non-taxable capital gains and/or creating excessive depreciation tax allowances. Several leading British millionaires, including Ellerman, Lord Nuffield, Montague Burton, and the Vestey brothers (Union Cold Storage), were known to the Inland Revenue as skilled and successful tax avoiders.

These findings imply that the composition of economic elites should not simply be conflated with ‘wealth-creating’ prosperity (except for those elites), especially where their incomes included a substantial element of rent-seeking. Erecting or defending barriers to competition (through cartels, mergers, and strategic assets) may increase the number of very wealthy people, but it is unlikely to have generated a positive influence on national economic growth and living standards unless accompanied by rationalisation to substantially lower costs. In this respect typical inter-war business millionaires had strong commonalities with earlier, landed, British elites, in that they sustained their wealth by creating, and then perpetuating, scarcity in the markets for the goods and services they controlled.

To contact the author:

p.m.scott@henley.ac.uk

Spain’s tourism boom and the social mobility of migrant workers

By José Antonio García Barrero (University of Barcelona)

This blog is part of a series of New Researcher blogs.

Spain Balearic Islands Mediterranean Menorca. Available at Wikimedia Commons.

My research, which is based on a new database of the labour force in Spain’s tourism industry, analyses the assimilation of internal migrants in the Balearic Islands during the tourism boom between 1959 and 1973.

I show that tourism represented a context for upward social mobility for natives and migrants alike, but the extent of upward mobility was uneven across groups. While natives, foreigners and internal urban migrants achieved significant upward mobility, the majority of migrants found it harder to improve their position. The transferability of human capital to the service economy and the characteristics of their migratory flows determined the extent of migrants’ labour-market attainment.

The tourism boom was one of the main arenas of Spain’s path to modernisation in the twentieth century. Between 1959 and 1973, the country became one of the world’s top tourist economies, driving a rapid and intense demographic and landscape transformation in the coastal regions of the peninsula and the archipelagos.

The increasing demand for tourism services from West European societies triggered the massive arrival of tourists to the country. In 1959, four million tourists visited Spain; by 1973, the country hosted 31 million visitors. The epicentre of this phenomenon was the Balearic Islands.

In the Balearics, a profound transformation took place. In little more than a decade, the capacity of the tourism industry skyrocketed from 215 to 1,534 hotels and pensions, and from 11,496 to 216,113 hotel beds. Between 1950 and 1981, the number of Spanish-born people from outside the Balearics increased from 33,000 to 150,000, attracted by the high labour demand for tourism services. In 1950, they accounted for 9% of the total population; by 1981, that share had reached 34.4%.

In my research, I analyse whether the internal migrants who arrived in the archipelago – mostly seasonal migrants from stagnant rural agrarian areas in southern Spain who ended up becoming permanent residents – were able to take advantage of the rapid and profound transformation of the tourism industry. Rather than focusing on the movement from agrarian to service activities, my interest is in the possibilities for upward mobility in the host society.

I use a new database of the workforce, both men and women, in the tourism industry, comprising a total of 10,520 observations with a wide range of personal, professional and business data for each individual up to 1970. These data make it possible to analyse the careers of these workers in the emerging service industry by cohort characteristics, using variables such as gender, place of birth, language skills and firm, among others. With these variables, I examine the likelihood of belonging to each of four income categories.
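As an illustration of how such a model might be set up (the estimator, the variable names and the simulated data below are my assumptions, not necessarily those used in the research), a multinomial logit over four income categories could look like this:

```python
# Hedged sketch: modelling the likelihood of belonging to one of four income
# categories with a multinomial logit. All column names and data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500  # the real database has 10,520 observations

workers = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "urban_migrant": rng.integers(0, 2, n),
    "speaks_foreign_language": rng.integers(0, 2, n),
})
# Simulated income category 0-3 (higher = better paid), loosely tied to skills.
score = workers["speaks_foreign_language"] + workers["urban_migrant"] + rng.normal(0, 1, n)
workers["income_category"] = pd.qcut(score, 4, labels=False)

X = sm.add_constant(workers[["female", "urban_migrant", "speaks_foreign_language"]])
fit = sm.MNLogit(workers["income_category"], X).fit(disp=0)
print(fit.params)
```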

My results suggest that the tourism explosion opened significant opportunities for upward labour mobility. Achieving high-income jobs was possible for workers involved in hospitality and tourism-related activities. But those who took advantage of this scenario were mainly male natives, urban migrants from northern Spain (mainly Catalonia) and, especially, migrants from other European countries with clear advantages in language skills.

For natives, human and social capital made the difference. For migrants, the importance of self-selection and the transferability of skills from urban cities to the new leisure economies were decisive.

Likewise, despite lagging behind, those from rural areas in southern Spain were able to achieve some degree of upward mobility, progressively though not completely reducing the gap with natives. Acquiring human capital through learning-by-doing and forming networks of support and information among migrants from the same areas increased the chances of improvement. Years of experience, knowing where to find job opportunities and holding personal contacts in the firms were important assets.

In that sense, the way migrants arrived in the archipelago mattered. Those more exposed to seasonal flows had a lower capacity for upward mobility, since they were recruited in their place of origin rather than through migrant networks, or returned to their homes at the end of each season.

In comparison, those who relied on migratory networks and remained as residents in the archipelago had a greater chance of getting better jobs and reducing their socio-economic distance from the natives.

Baumol, Engel, and Beyond: Accounting for a century of structural transformation in Japan, 1885-1985

by Kyoji Fukao (Hitotsubashi University) and Saumik Paul (Newcastle University and IZA)

The full article from this blog post was published in The Economic History Review and is now available on Early View at this link

Bank of Japan, silver convertible yen. Available on Wiki Commons

Over the past two centuries, many industrialized countries have experienced dramatic changes in the sectoral composition of output and employment. The pattern of structural transformation, observed in most developed countries, entails a steady fall in the primary sector, a steady increase in the tertiary sector, and a hump shape in the secondary sector. In the literature, the process of structural transformation is explained through two broad channels: the income effect, driven by the generalization of Engel’s law, and the substitution effect, driven by differences in rates of productivity growth across sectors, also known as the “Baumol cost disease” effect.

At the same time, an input-output (I-O) model provides a comprehensive way to study the process of structural transformation. Input-output analysis accounts for intermediate input production by each sector, since many sectors predominantly produce intermediate inputs whose outputs rarely enter directly into consumer preferences. Moreover, input-output analysis relies on observed data and a national income identity to handle imports and exports. These are considerable advantages in the context of Japan’s structural transformation, first from agriculture to manufactured final consumption goods and then to services, alongside radical changes over time in Japanese exports and imports.

We examine the drivers of long-run structural transformation in Japan over a period of 100 years, from 1885 to 1985. During this period, the value-added share of the primary sector dropped from 60 per cent to less than 1 per cent, whereas that of the tertiary sector rose from 27 to nearly 60 per cent (Figure 1). We apply the Chenery, Shishido, and Watanabe framework to examine changes in the composition of sectoral output shares. Chenery, Shishido, and Watanabe used an inter-industry model to explain deviations from proportional growth in output in each sector and decomposed the deviation in sectoral output into two factors: the demand-side effect, a combination of the Engel and Baumol effects (discussed above), and the supply-side effect, a change in the technique of production. However, the current input-output framework is unable to separate the demand-side effect uniquely into its Engel and Baumol components.
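To make the logic concrete, here is a stylised two-sector sketch of this kind of decomposition in the Leontief framework, where gross output satisfies x = (I - A)^(-1) f; the matrices and final-demand vectors are illustrative, not the Japanese benchmark tables:

```python
# Stylised sketch of a demand-side vs supply-side decomposition in a Leontief
# model: the change in gross output x between two benchmark years is split into
# a part due to changing final demand f (technology held fixed) and a residual
# part due to the changing technical-coefficient matrix A.
import numpy as np

def output(A: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Gross output implied by the Leontief model x = (I - A)^(-1) f."""
    return np.linalg.solve(np.eye(len(f)) - A, f)

A0 = np.array([[0.20, 0.10],
               [0.15, 0.25]])      # technical coefficients, year 0 (illustrative)
A1 = np.array([[0.15, 0.12],
               [0.10, 0.30]])      # technical coefficients, year 1 (illustrative)
f0 = np.array([100.0, 80.0])       # final demand, year 0
f1 = np.array([90.0, 140.0])       # final demand, year 1

x0, x1 = output(A0, f0), output(A1, f1)
demand_effect = output(A0, f1) - x0   # change in final demand, technology fixed
supply_effect = x1 - output(A0, f1)   # residual change in technique
print(x1 - x0, demand_effect + supply_effect)  # the two parts sum to the total
```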

Figure 1. Structural transformation in Japan, 1874-2008. Source: Fukao and Paul (2017). 
Note: Sectoral shares in GDP are calculated using real GDP in constant 1934-36 prices for 1874-1940 and constant 2000 prices for 1955-2008. In the current study, the pre-WWII era is from 1885 to 1935, and the post-WWII era is from 1955 to 1985.

To conduct the decomposition analysis, we use seven I-O tables (one every 10 years) for the prewar era from 1885 to 1935 and six I-O tables (one every 5 years) for the postwar era from 1955 to 1985. The seven sectors are: agriculture, forestry, and fishery; commerce and services; construction; food; mining and manufacturing (excluding food and textiles); textiles; and transport, communication, and utilities.

The results show that the annual growth rate of GDP more than doubled in the post-WWII era compared with the pre-WWII era. Real output growth was highest in the commerce and services sector throughout the period under study, but there was also rapid growth of output in mining and manufacturing, especially in the second half of the twentieth century. Sectoral output growth in mining and manufacturing (textiles, food, and other manufacturing), commerce and services, and transport, communications, and utilities outpaced GDP growth in most periods. Detailed decomposition results show that in most sectors (agriculture, commerce and services, food, textiles, and transport, communication, and utilities), changes in private consumption were the dominant force behind the demand-side explanations. The demand-side effect was strongest in the commerce and services sector.

Overall, demand-side factors (a combination of the Baumol and Engel effects) were the main explanatory factors in the pre-WWII period, whereas supply-side factors were the key driver of structural transformation in the post-WWII period.

To contact the authors:

Kyoji Fukao, k.fukao@r.hit-u.ac.jp

Saumik Paul, paulsaumik@gmail.com, @saumik78267353

Notes

Baumol, William J. ‘Macroeconomics of unbalanced growth: the anatomy of urban crisis’, American Economic Review 57 (1967), 415-426.

Chenery, Hollis B., Shuntaro Shishido and Tsunehiko Watanabe. ‘The pattern of Japanese growth, 1914-1954’, Econometrica 30, no. 1 (1962), 98-139.

Fukao, Kyoji and Saumik Paul. ‘The Role of Structural Transformation in Regional Convergence in Japan: 1874-2008’, Institute of Economic Research Discussion Paper No. 665. Tokyo: Institute of Economic Research, 2017.