Demand slumps and wages: History says prepare to bargain

by Judy Z. Stephenson (Bartlett Faculty of the Built Environment, UCL)

This blog is part of the EHS series The Long View on Epidemics, Disease and Public Health: Research from Economic History.

Big shifts and stops in supply, demand, and output hark back to pre-industrial days, and they carry lessons for today’s employment contracts and wage bargains.

Canteen at the National Projectile Factory, a munitions factory in Lancaster, c. 1917.
Image courtesy of Lancaster City Museum. Available at <http://www.documentingdissent.org.uk/munitions-factories-in-lancaster-and-morecambe/>

Covid-19 has brought the world to a slump of unprecedented proportions. Beyond the immediate crises in healthcare and treatment, the biggest impact is on employment. Employers, shareholders and policymakers are struggling to come to terms with the implications of being ‘closed for business’ for an unspecified length of time, and laying off workers seems the most common response, even though unprecedented government support packages for firms and workers have heralded the ‘return of the state’ and their fiscal implications have provoked wartime comparisons.

There is one very clear difference between war and the current pandemic: mobilisation. Historians tend to look on times of war as times of full employment and high demand (1). A concomitant slump in demand and a huge surplus of demobilised labour were associated with the depression in real wages and labour markets in the peacetime years after 1815. That slump accompanied increasing investment in large-scale factory production, particularly in the textile industry. The decades afterwards are some of the best documented in labour history (2), and they are characterised by frequent stoppages, down-scaling and restarts in production. They should be of interest now because they are the story of how modern capitalist producers learned to set and bargain for wages to ensure they had the skills they needed, when they needed them, to produce efficiently. Much of what employers and workers learned over the nineteenth century is directly pertinent to the problems that currently face employers, workers and the state.

Before the early nineteenth century in England – or elsewhere, for that matter – most people were simply not paid a regular weekly wage, or in fact paid for their time at all (3). Very few people had a ‘job’. Shipwrights, building workers and some common labourers (in all perhaps 15% of workers in early modern economies) were paid ‘by the day’, but the hours or output that a ‘day’ involved were varied and indeterminate. The vast majority of pre-industrial workers were paid not for their time but for what they produced.

These workers earned piece rates, much as today’s delivery riders earn ‘per drop’, Uber drivers earn ‘per ride’, and garment workers are paid per unit made. When the supply of materials failed, or demand for output stalled, workers were not paid, irrespective of whether they could work or not. Blockades, severe weather, famine, plague, financial crises and unreliable supplies all stopped work, and with it the payment of wages. Stoppages were natural and expected. Historical records indicate that in many years commercial activity and work slowed to a trickle in January and February. Households subsisted on savings or credit before they could start earning again, or parishes and the poor law provided bare subsistence in the interim. Notable characteristics of pre-industrial wages – by piecework and otherwise – were wage posting and nominal rate rigidity, or a lack of wage bargaining. Rates for some work didn’t change for almost a century, and the risk of no work seems to have been accounted for on both sides (4).

Piecework, or payment for output, is a system of wage formation of considerable longevity, and its purpose has always been to protect employers from labour costs in uncertain conditions. It seems attractive because it transfers the risks associated with output volatility from the employer to the worker. Such practices are the basis of today’s ‘gig’ economy. Some workers – those in their prime who are skilled and strong – tend to do well out of the system, and enjoy being able to increase their earnings with effort. This is the flexibility of the gig economy that some relish today. But it is less effective for those who need to be trained or managed, for older workers, or for anyone who has to limit their hours.
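That risk transfer can be made concrete with a toy calculation. The sketch below is a minimal Python simulation; the piece rate, output, time wage and stoppage probability are all invented for illustration and are not drawn from any historical wage series. It simply compares a worker’s expected weekly pay, and how variable that pay is, under a piece rate and under a fixed time wage when there is some chance that a stoppage halts work for the week.

```python
import random

# Illustrative assumptions only: these figures are invented for the sketch,
# not taken from any historical wage data.
PIECE_RATE = 2.0        # pay per unit produced
UNITS_PER_WEEK = 10     # output in a normal working week
TIME_WAGE = 18.0        # fixed weekly wage under a time-based contract
STOPPAGE_PROB = 0.2     # chance that a given week's work is stopped entirely

def weekly_pay_piecework(stopped: bool) -> float:
    # Under piecework, a stoppage means no output and therefore no pay:
    # the worker bears the whole cost of the interruption.
    return 0.0 if stopped else PIECE_RATE * UNITS_PER_WEEK

def weekly_pay_timework(stopped: bool) -> float:
    # Under a time wage the employer pays regardless, so the employer
    # bears the cost of the stoppage instead of the worker.
    return TIME_WAGE

def simulate(pay_fn, weeks: int = 50_000) -> tuple[float, float]:
    # Draw many weeks, each stopped with probability STOPPAGE_PROB,
    # and report the mean and variance of weekly pay.
    pays = [pay_fn(random.random() < STOPPAGE_PROB) for _ in range(weeks)]
    mean = sum(pays) / weeks
    variance = sum((p - mean) ** 2 for p in pays) / weeks
    return mean, variance

if __name__ == "__main__":
    random.seed(0)
    for name, fn in [("piecework", weekly_pay_piecework),
                     ("time wage", weekly_pay_timework)]:
        mean, var = simulate(fn)
        print(f"{name:10s} mean weekly pay {mean:5.2f}, variance {var:6.2f}")
```

With these assumed numbers the two contracts pay roughly the same on average, but all of the week-to-week variability sits with the pieceworker, which is the point of the mechanism described above.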

However, piecework or gig wage systems carry risks for the employer. In the long run, we know that piece bargains break down, or become unworkably complex, as both workers and employers behave opportunistically (5). Where firms need skilled workers to produce quickly, or want to invest in firm- or industry-specific human capital to increase competitiveness through technology, they can suddenly find themselves outpriced by competitors, or left with a labour force with a strong leisure preference or, indeed, a labour shortage. Such conditions characterised early industrialisation. In the British textile industry this opportunism created and exacerbated stoppages throughout the nineteenth century. After each stoppage both employers and workers sought to change rates, but new bargains were difficult to agree. Employers tried to cut costs. Labour struck. Bargaining for wages impeded efficient production.

Eventually, piecework bargains formed implicit, more stable contracts, and ‘invisible handshakes’ paved the way to the relative stability of hourly wages and the hierarchy of skills in factories (though the mechanism by which this happened is contested) (6). The form of the wage slowly changed to payment by the hour or unit of time. Employers worked out that ‘fair’ regular wages (or efficiency wages) and a regular workforce served them better in the long run than trying to save labour costs through stoppages. Unionisation improved working conditions and the security of contracts. The Trade Boards Act of 1909 regulated the wages of industries still operating minimal piece rates, and ushered in the era of collective wage bargaining as the norm, which ended only with the labour market policies of Thatcherism and subsequent governments.

So far in the twenty-first century, although there has been a huge shift to self-employment, gig wage formation and non-traditional jobs (7), we have not experienced the bitter bargaining that characterised the shift from piecework to time work two hundred years ago, or the unrest of the 1970s and early 1980s. Some of this is probably down to the decline in output volatility that has accompanied increased globalisation since the ‘Great Moderation’, and to the extraordinarily low levels of unemployment in most economies in the last decade (8). Covid-19 brings output volatility back in a big, unpredictable way, and the history of wage bargaining indicates that when factors of production are subject to shocks, bargaining is costly. Employers who want to rehire workers who have been unpaid for months may find that established wage bargains no longer hold. Shelf stackers who have risked their lives on zero-hours contracts may think that their pay rate per hour should reflect this risk. Well-paid professionals incentivised by performance-related pay are discovering the precarity of ‘eat what you kill’, and may find that their basic pay does not reflect the preparatory work they need to do in conditions that will not let them perform. Employers facing the same volatility might try to change rates, and many have already moved to cut wages.

Today’s state guarantee of many workers’ incomes, unthinkable in the nineteenth-century laissez-faire state, is welcome and necessary. That today’s gig economy workers have made huge strides towards attaining full employment rights would also appear miraculous to most pre-industrial workers. Yet contracts and wage formation matter. With increasing numbers of workers without job security, and essential services suffering demand and supply shocks, many workers and employers are likely to confront significant shifts in employment. History suggests that bargaining over those shifts is not as easy a process as the last thirty years have led us to believe.

 

To contact the author: 

j.stephenson@ucl.ac.uk

@judyzara

 

References:

(1). Allen, R. (2009), ‘Engels’ pause: Technical change, capital accumulation, and inequality in the British industrial revolution’, Explorations in Economic History, 46(4), 418-435; Broadberry, S. et al. (2015), British Economic Growth, 1270-1870, CUP.

(2). Huberman, M. (1996), Escape from the Market, CUP, chapter 2.

(3). Hatcher, J. and Stephenson, J.Z. (eds.) (2019), Seven Centuries of Unreal Wages, Palgrave Macmillan.

(4). Stephenson, J.Z. and Wallis, P. (forthcoming), ‘Imperfect competition’, LSE Working Paper.

(5). Brown, W. (1973), Piecework Bargaining, Heinemann.

(6). See debates between Huberman, Rose, Taylor and Winstanley in Social History, 1987-89.

(7). Katz, L. and Krueger, A. (2016), ‘The Rise and Nature of Alternative Work Arrangements in the United States, 1995-2015’, NBER Working Paper Series.

(8). Fang, W. and Miller, S. (2014), ‘Output Growth and its Volatility: The Gold Standard through the Great Moderation’, Southern Economic Journal, 80(3), 728-751.

 

From Vox – “New eBook: The economics of the Great War. A centennial perspective”

by Stephen Broadberry (Oxford University) and Mark Harrison (University of Warwick)

There has always been disagreement over the origins of the Great War, with many authors offering different views of the key factors. One dimension concerns whether the actions of agents should be characterised as rational or irrational. Avner Offer continues to take the popular view that the key decision-makers were irrational in the common meaning of “stupid”, arguing that “(t)he decisions for war were irresponsible, incompetent, and worse”. Roger Ransom, by contrast, uses behavioural economics to introduce bounded rationality for the key decision-makers. In his view, over-confidence caused leaders to gamble on war in 1914. At this stage they expected a large but short war, and when a quick result was not achieved they faced the decision of whether to continue fighting or to seek a negotiated settlement. Here, Ransom views the decisions of leaders to continue fighting as driven by a concern to avoid being seen to lose the war, consistent with the predictions of prospect theory, in which people are more concerned about avoiding losses than making gains. Mark Harrison sees the key decision-makers as acting rationally in the sense of standard neoclassical economic thinking, choosing war as the best available option in the circumstances they faced. For Germany and the other Central Powers in 1914, the decision for war reflected a rational pessimism: locked in a power struggle with the Triple Entente, they had to strike then because their prospects of victory would only get worse.

Full post at https://voxeu.org/article/new-ebook-economics-great-war-centennial-perspective

Winning the capital, winning the war: retail investors in the First World War

by Norma Cohen (Queen Mary University of London)

 

‘Put it into National War Bonds’ poster, National War Savings Committee. McMaster University Libraries, Identifier: 00001792. Available at Wikimedia Commons.

The First World War brought about an upheaval in British investment, forcing savers to repatriate billions of pounds held abroad and attracting new investors among those living far from London, this research finds. The study also points to declining inequality between Britain’s wealthiest classes and the middle class, and rising purchasing power among the lower middle classes.

The research is based on samples from ledgers of investors in successive War Loans. These are lodged in archives at the Bank of England and have been closed for a century. The research covers roughly 6,000 samples from three separate sets of ledgers of investors between 1914 and 1932.

While the First World War is recalled as a period of national sacrifice and suffering, the reality is that war boosted Britain’s output. Sampling from the ledgers points to the extent to which war unleashed the industrial and engineering innovations of British industry, creating and spreading wealth.

Britain needed capital to ensure it could outlast its enemies. As the world’s capital exporter by 1914, the nation imposed increasingly tight measures on investors to ensure capital was used exclusively for war.

While London was home to just over half the capital raised in the first War Loan in 1914, that had fallen to just under 10% of capital raised in the years after. In contrast, the North East, North West and Scotland – home to the mining, engineering and shipbuilding industries – provided 60% of the capital by 1932, up from a quarter of the total raised by the first War Loan.

The concentration of investor occupations also points to profound social changes fostered by war. Men describing themselves as ‘gentleman’ or ‘esquire’ – titles accorded those wealthy enough to live on investment returns – accounted for 55% of retail investors for the first issue of War Loan. By the post-war years, these were 37% of male investors.

In contrast, skilled labourers – blacksmiths, coal miners and railway signalmen among others – were 9.0% of male retail investors in the post-war years, up from 4.9% in the first sample.

Suppliers of war-related goods may not have been the main beneficiaries of newly-created wealth. The sample includes large investments by those supplying consumer goods sought by households made better off by higher wages, steady work and falling unemployment during the war.

During and after the war, these sectors were accused of ‘profiteering’, sparking national indignation. Nearly a quarter of investors in 5% War Loan listing their occupations as ‘manufacturer’ were producing boots and leather goods, a sector singled out during the war for excess profits. Manufacturers in the final sample produced mineral water, worsteds, jam and bread.

My findings show that War Loan was widely held by households likely to have had relatively modest wealth; while the largest concentration of capital remained in the hands of relatively few, larger numbers had a small stake in the fate of the War Loans.

In the post-war years, over half of male retail investors held £500 or less. This may help to explain why efforts to pay for war by taxing wealth as well as income – a debate that echoes today – proved so politically challenging. The rentier class on whom additional taxation would have been levied may have been more of a political construct by 1932 than an actual presence.

 

EHS 2018 special: How the Second World War promoted racial integration in the American South

by Andreas Ferrara (University of Warwick)

African American and White Employees Working Together during WWII. Available at <https://www.pinterest.com.au/pin/396950154628232921/>

European politicians face the challenge of integrating the 1.26 million refugees who arrived in 2015. Integration into the labour market is often discussed as key to social integration but empirical evidence for this claim is sparse.

My research contributes to the debate with a historical example from the American South where the Second World War increased the share of black workers in semi-skilled jobs such as factory work, jobs previously dominated by white workers.

I combine census and military records to show that the share of black workers in semi-skilled occupations in the American South increased as they filled vacancies created by wartime casualties among semi-skilled whites.

A fallen white worker in a semi-skilled occupation was replaced by 1.8 black workers on average. This raised the share of African Americans in semi-skilled jobs by 10% between 1940 and 1950.

Survey data from the South in 1961 reveal that this increased integration in the workplace led to improved social relations between black and white communities outside the workplace.

Individuals living in counties where war casualties brought more black workers into semi-skilled jobs between 1940 and 1950 were 10 percentage points more likely to have an interracial friendship, 6 percentage points more likely to live in a mixed-race neighbourhood, and 11 percentage points more likely to favour integration over segregation in general, as well as at school and at church. These positive effects are reported by both black and white respondents.

Additional analysis using county-level church membership data from 1916 to 1971 shows similar results. Counties where wartime casualties resulted in a more racially integrated labour force saw a 6 percentage point rise in the membership shares of churches that already held mixed-race services before the war.

The church-related results are especially striking. In several of his speeches Dr Martin Luther King stated that 11am on Sunday is the most segregated hour in American life. And yet my analysis shows that workplace exposure between two groups can overcome even strongly embedded social divides such as churchgoing, which is particularly important in the South, the so-called Bible Belt.

This historical case study of the American South in the mid-twentieth century, where race relations were often tense, demonstrates that excluding refugees from the workforce may be ruling out a promising channel for integration.

Currently, almost all European countries forbid refugees from participating in the labour market. Arguments put forward to justify this include fear of competition for jobs, concern about downward pressure on wages and a perceived need to deter economic migration.

While the mid-twentieth-century American South is not Europe, the policy implication is to experiment more extensively with social integration through workplace integration measures. This concerns not only the refugee case but any country with socially and economically segregated minority groups.

From VOX – Short poppies: the height of WWI servicemen

From Timothy Hatton, Professor of Economics, Australian National University and University of Essex. Originally published on 9 May 2014

The heights of today’s populations cannot explain which factors matter for long-run trends in health and height. This column highlights the correlates of height in the past using a sample of British army soldiers from World War I. While the socioeconomic status of the household mattered, the local disease environment mattered even more. Better education and modest medical advances led to an improvement in average health, despite the war and depression.

Distribution of heights in a sample of army recruits. From Bailey et al. (2014)

The last century has seen unprecedented increases in the heights of adults (Bleakley et al., 2013). Among young men in western Europe, that increase amounts to about four inches. On average, sons have been taller than their fathers for the last five generations. These gains in height are linked to improvements in health and longevity.

Increases in human stature have been associated with a wide range of improvements in living conditions, including better nutrition, a lower disease burden, and some modest improvement in medicine. But looking at the heights of today’s populations provides limited evidence on the socioeconomic determinants that can account for long-run trends in health and height. For that, we need to understand the correlates of height in the past. Instead of asking why people are so tall now, we should be asking why they were so short a century ago.

In a recent study, Roy Bailey, Kris Inwood and I (Bailey et al. 2014) took a sample of soldiers joining the British army around the time of World War I. These are randomly selected from a vast archive of two million service records that have been made available by the National Archives, mainly for the benefit of genealogists searching for their ancestors.

For this study, we draw a sample of servicemen who were born in the 1890s and who would therefore have been in their late teens or early twenties when they enlisted. About two thirds of this cohort enlisted in the armed services, and so the sample suffers much less from selection bias than would be likely during peacetime, when only a small fraction joined the forces. But we do not include officers, who were taller than those they commanded. And at the other end of the distribution, we also miss some of the least fit, who were likely to be shorter than average.

FULL TEXT HERE

From The Royal Economic Society – Myths of the Great War

From issue no. 165, April 2014, pp. 17-19.

 

Understandably, 2014 has seen (and will yet see) many reflections on the ‘Great War’ of 1914-18. In a lecture given to the Economic History Society Annual Conference on 28th March, Mark Harrison identified a number of widely held myths about that tragic event. This is a shortened version of that lecture, which is available at: http://warwick.ac.uk/cage/research/wpfeed/188-2014_harrison.pdf.

Perceptions of the Great War continue to resonate in today’s world of international politics and policy. Most obviously, does China’s rise show a parallel with Germany’s a century ago? Will China’s rise, unlike Germany’s, remain peaceful? The Financial Times journalist Gideon Rachman wrote last year:

The analogy [of China today] with Germany before the first world war is striking … It is, at least, encouraging that the Chinese leadership has made an intense study of the rise of great powers over the ages – and is determined to avoid the mistakes of both Germany and Japan.

The idea that China’s leaders wish to avoid Germany’s mistakes is encouraging, certainly. But what are the ‘mistakes’, exactly, that they will now seek to avoid? The world can hardly be reassured if we ourselves, social scientists and historians, remain uncertain what mistakes were made and even whether they were mistakes in the first place.

In this lecture I shall review four popular narratives relating to the Great War. They concern why the war started, how it was won, how it was lost, and in what sense it led to the next war.

Full article here: www.res.org.uk/view/art6Apr14Features.html

 

The Great War and Evolution of Central Banking in India

by Tehreem Husain, The Express Tribune


Since the global financial crisis, there has been increased interest in exploring the financial history of advanced economies and emerging markets to identify episodes of boom, crisis and regulatory response from which parallels can be drawn today. In this blog, Tehreem Husain discusses an episode from early twentieth-century Indian financial history which narrates the tale of a crisis and of the evolution of a regulatory institution – the central bank – in its wake.

The importance of India amongst the pool of emerging market economies can be gauged from the fact that it contributed 6.8 per cent of global GDP on a PPP basis in 2014. Sustaining this growth track requires robust financial regulatory frameworks, which can only come with a thorough understanding of its history and of the events which led to the evolution of its crucial building block – the central bank. Research into early twentieth-century Indian financial history suggests that the onset of the Great War, and the financial crisis that ensued in India, gave impetus to the creation of a central banking institution in the country.

The Great War, one of the most expensive wars in history, caused untold loss of human life and damage to economic and social resources. Britain, at the forefront of the war, came under immense stress to meet its financing needs. Stephen Broadberry and other eminent economic historians have estimated that the cost of the Great War to Britain exceeded one-third of the total national income of the war years. As the war continued in Europe, its stress spilled beyond the boundaries of mainland Britain, and British colonies also became entangled in its human and financial costs. For instance, not only did India contribute approximately 1.5 million men recruited during the war, but Indian taxpayers also made a significant contribution of £146 million to Britain to finance the war.

Wartime imposes huge costs on the entire economy, but more so on banks, because of the key role that they play in financing the war. The National Bureau of Economic Research published a special volume on the effect of war on banking in 1943. One of its chapters, ‘Banking System and War Finance’, highlighted the crucial importance of commercial banks for Treasury borrowing: during World War II banks constituted the largest purchasers of government obligations, in addition to being the single most important outlet for the sale of government obligations to the public. Similar to the experience of other countries, the Indian treasury borrowed heavily from the banking system during the Great War. Debt archives from 1918 show that Rs 503.3 million were raised in the form of loans, Treasury Bills and Post Office Cash Certificates. At the same time the government continued to issue fresh currency notes, which left the banking sector flush with extraordinary liquidity (evidenced by a high cash-to-deposit ratio).

Studying the Indian economy of that period through macro-financial indicators, the paper explores the relationship between British involvement in the Great War and the evolution of central banking in India. The evidence suggests that, with the exigencies of war finance and the government resorting to the banking system to finance its expenditures, the banking system came under huge strain. A stressed macroeconomic and financial environment during the war years further weakened the fragile and fragmented Indian banking system, leading to a contagion-like financial crisis that accelerated bank failures in the war years and beyond. The crisis went unabated owing to the lack of a formal regulatory structure.

The financial crisis that followed from this near absence of regulatory oversight gave impetus to the creation of a central banking authority. Although the idea of a ‘banking establishment for India’ dates back to 1836, it was as a consequence of this episode that a process of restructuring and reform ensued. It led to the introduction of a quasi-central banking institution, the Imperial Bank of India, in 1921, and finally to the creation of a fully fledged central bank, the Reserve Bank of India, in 1935. The general argument of economists Stijn Claessens and M. Ayhan Kose (2013) – that deficiencies in regulatory oversight[1] lead to currency and maturity mismatches and resultant financial crises – applies to this episode as well.

Interestingly, this episode was not unique to India. In the absence of regulatory institutions, the management and resolution of financial crises become increasingly complex. The historian Harold James has written that the global financial panic of 1907 demonstrated to America the need to mobilise financial power itself in the form of a central bank analogous to the Bank of England. The Federal Reserve was created in 1913.

To conclude, one can argue that the absence of a formal central banking institution in India resulted in many stressed scenarios for the Indian financial system and in missed opportunities for the imperial government. It meant that there was no liquidity support available to failing commercial banks, no control or coordination of credit creation (i.e. no reserve requirements), and no mechanism to support price discovery for securities traded in the primary and secondary markets. A similar argument was made by Keynes in his book ‘Indian Currency and Finance’, which supported the idea of an Indian central bank. Had there been a central bank in India, it would have performed three essential functions: (a) assisting the government in the flotation of bonds and other government securities to the commercial banks; (b) lending directly to the treasury in the form of ways-and-means advances or through the purchase of government securities; and (c) providing reserves to the commercial banks to help them buy government obligations, and offering them guidance and support to carry on as much of their traditional task of financing trade and industry as was compatible with a maximum war effort.

This article was based on the working paper ‘Great War and Evolution of Central Banking in India’.

[1] Claessens, S. and Kose, M.A. (2013), ‘Financial Crises: Explanations, Types, and Implications’, IMF Working Paper WP/13/28.