British perceptions of German post-war industrial relations

By Colin Chamberlain (University of Cambridge)

Some 10,000 steel workers demonstrate in Stuttgart, 11th January 1962. Picture alliance/AP Images, available at <https://www.gewerkschaftsgeschichte.de/1953-schwerpunkt-tarifpolitik.html>

‘Almost idyllic’ – this was the view of one British commentator on the state of post-war industrial relations in West Germany. No one could say the same about British industrial relations. Here, industrial conflict grew inexorably from year to year, forcing governments to expend ever more effort on preserving industrial peace.

Deeply frustrated, successive governments alternated between appeasing trade unionists and threatening them with new legal sanctions in an effort to improve their behaviour, while avoiding the fundamental issue of the unions’ institutional structure. Had the British studied the German ‘model’ of industrial relations more closely, they would have better understood the reforms that needed to be made.

Britain’s poor state of industrial relations was a major, if not the major, factor holding back the country’s economic growth, which was regularly less than half the rate achieved in Germany, not to speak of the chronic inflation and balance of payments problems that only made matters worse. So why did the British not take a deeper look at the successful model of German industrial relations and learn its lessons?

Ironically, the British were in control of Germany at the time the trade union movement was re-establishing itself after the war. The Trades Union Congress and the British labour movement offered much goodwill and help to the Germans in their task.

But German trade unionists had very different ideas from the British trade unions on how to go about organising their industrial relations, ideas that the British were to ignore consistently over the post-war period. These included:

    • In Britain, there were hundreds of trade unions, but in Germany, there were only 16 re-established after the war, each representing one or more industries, thereby avoiding the demarcation disputes so common in Britain.
    • Terms and conditions were negotiated on this industry basis by strong, well-funded trade unions, which welcomed the fact that their two- or three-year collective agreements were legally enforceable in Germany’s system of industrial courts.
    • Trade unions were not involved in workplace grievances and disputes. These were left to employees and managers, who met together in Germany’s highly successful works councils to resolve such issues informally and to consult on working practices and company reorganisations. As a result, German companies did not seek to lay off staff, as British companies did on any fall in demand, but rather to retrain and reallocate them.

British trade unions pleaded that their very untidy institutional structure, with hundreds of competing trade unions, was what their members actually wanted and should therefore be free from government interference. The trade unions jealously guarded their privileges and especially rejected any idea of industry-based unions, legally enforceable collective agreements and works councils.

A heavyweight Royal Commission was appointed, but after three years’ deliberation, it came up with little more than the status quo. It was reluctant to study any ideas emanating from Germany.

While the success of industrial relations in Germany was widely recognised in Britain, there was little understanding of why this was so, or indeed much interest in finding out. The British were deeply conservative about the ‘institutional shape’ of industrial relations and feared putting forward any radical German ideas. Britain was therefore at a big disadvantage when it came to creating modern trade unions operating in a modern state.

So, what was the economic price of the failure to sort out the institutional structure of the British trade unions?

From VoxEU – Wellbeing inequality in retrospect

Rising trends in GDP per capita are often interpreted as reflecting rising levels of general wellbeing. But GDP per capita is at best a crude proxy for wellbeing, neglecting important qualitative dimensions.

via Wellbeing inequality in retrospect — VoxEU.org: Recent Articles

To elaborate further on the topic, Prof. Leandro Prados de la Escosura has made available several databases on inequality, accessible here, as well as a book on long-term Spanish economic growth, available open access here.

 

Winning the capital, winning the war: retail investors in the First World War

by Norma Cohen (Queen Mary University of London)

 

Poster: ‘Put it into National War Bonds’. National War Savings Committee. McMaster University Libraries, Identifier: 00001792. Available at Wikimedia Commons.

The First World War brought about an upheaval in British investment, forcing savers to repatriate billions of pounds held abroad and attracting new investors among those living far from London, this research finds. The study also points to declining inequality between Britain’s wealthiest classes and the middle class, and rising purchasing power among the lower middle classes.

The research is based on samples from ledgers of investors in successive War Loans. These are lodged in archives at the Bank of England and had been closed for a century. The research covers roughly 6,000 samples from three separate sets of ledgers of investors between 1914 and 1932.

While the First World War is recalled as a period of national sacrifice and suffering, the reality is that war boosted Britain’s output. Sampling from the ledgers points to the extent to which war unleashed the industrial and engineering innovations of British industry, creating and spreading wealth.

Britain needed capital to ensure it could outlast its enemies. As the world’s leading capital exporter by 1914, the nation imposed increasingly tight measures on investors to ensure capital was used exclusively for war.

While London was home to just over half the capital raised in the first War Loan in 1914, that share had fallen to just under 10% of capital raised in the post-war years. In contrast, the North East, North West and Scotland – home to the mining, engineering and shipbuilding industries – provided 60% of the capital by 1932, up from a quarter of the total raised by the first War Loan.

The concentration of investor occupations also points to profound social changes fostered by war. Men describing themselves as ‘gentleman’ or ‘esquire’ – titles accorded to those wealthy enough to live on investment returns – accounted for 55% of retail investors in the first issue of War Loan. By the post-war years, they accounted for 37% of male investors.

In contrast, skilled labourers – blacksmiths, coal miners and railway signalmen among others – were 9.0% of male retail investors by the post-war years, up from 4.9% in the first sample.

Suppliers of war-related goods may not have been the main beneficiaries of newly-created wealth. The sample includes large investments by those supplying consumer goods sought by households made better off by higher wages, steady work and falling unemployment during the war.

During and after the war, these sectors were accused of ‘profiteering’, sparking national indignation. Nearly a quarter of investors in 5% War Loan listing their occupations as ‘manufacturer’ were producing boots and leather goods, a sector singled out during the war for excess profits. Manufacturers in the final sample produced mineral water, worsteds, jam and bread.

My findings show that War Loan was widely held by households likely to have had relatively modest wealth; while the largest concentration of capital remained in the hands of relatively few, larger numbers had a small stake in the fate of the War Loans.

In the post-war years, over half of male retail investors held £500 or less. This may help to explain why efforts to pay for war by taxing wealth as well as income – a debate that echoes today – proved so politically challenging. The rentier class on whom additional taxation would have been levied may have been more of a political construct by 1932 than an actual presence.

 

THE IMPACT OF MALARIA ON EARLY AFRICAN DEVELOPMENT: Evidence from the sickle cell trait

Poster: ‘Keep out malaria mosquitoes: repair your torn screens’. U.S. Public Health Service, 1941–45.

While malaria historically claimed millions of African lives, it did not hold back the continent’s economic development. That is one of the findings of new research by Emilio Depetris-Chauvin (Pontificia Universidad Católica de Chile) and David Weil (Brown University), published in the Economic Journal.

Their study uses data on the prevalence of the gene that causes sickle cell disease to estimate death rates from malaria for the period before the Second World War. They find that in parts of Africa with high malaria transmission, one in ten children died from malaria or sickle cell disease before reaching adulthood – a death rate more than twice the current burden of malaria in these regions.

 

According to the World Health Organization, the malaria mortality rate declined by 29% between 2010 and 2015. This was a major public health accomplishment, although with 429,000 annual deaths, the disease remains a terrible scourge.

Countries where malaria is endemic are also, on average, very poor. This correlation has led economists to speculate about whether malaria is a driver of poverty. But addressing that issue is difficult because of a lack of data. Poverty in the tropics has long historical roots, and while there are good data on malaria prevalence in the period since the Second World War, there is no World Malaria Report for 1900, 1800 or 1700.

Biologists only came to understand the nature of malaria in the late nineteenth century. Even today, trained medical personnel have trouble distinguishing between malaria and other diseases without the use of microscopy or diagnostic tests. Accounts from travellers and other historical records provide some evidence of the impact of malaria going back millennia, but these are hardly sufficient to draw firm conclusions (Akyeampong 2006; Mabogunje and Richards 1985).

This study addresses the lack of information on malaria’s impact historically by using genetic data. In the worst afflicted areas, malaria left an imprint on the human genome that can be read today.

Specifically, the researchers look at the prevalence of the gene that causes sickle cell disease. Carrying one copy of this gene provided individuals with a significant level of protection against malaria, but people who carried two copies of the gene died before reaching reproductive age.

Thus, the degree of selective pressure exerted by malaria determined the equilibrium prevalence of the gene in the population. By measuring the prevalence of the gene in modern populations, it is possible to back out estimates of the severity of malaria historically.

In areas of high malaria transmission, 20% of the population carries the sickle cell trait. The researchers estimate that this implies that historically 10-11% of children died from malaria or sickle cell disease before reaching adulthood. Such a death rate is more than twice the current burden of malaria in these regions.
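To see how a modern gene prevalence can be translated into a historical death rate, here is a minimal sketch of the arithmetic, assuming a textbook heterozygote-advantage (balanced polymorphism) model in which carriers are fully protected from malaria; the authors' actual model is richer, so the numbers are illustrative only.

```python
# Back out historical malaria mortality from the modern sickle cell trait
# prevalence, using a standard heterozygote-advantage model.
# Illustrative sketch only; the paper's specification may differ.

carrier_share = 0.20  # share carrying the trait (AS), quoted in the text

# Solve 2q(1-q) = carrier_share for the allele frequency q (root < 0.5).
q = (1 - (1 - 2 * carrier_share) ** 0.5) / 2

# At a balanced polymorphism, the equilibrium frequency satisfies
# q = s / (s + t), where s is excess child mortality of non-carriers (AA)
# from malaria and t = 1 is mortality of SS homozygotes (sickle cell
# disease was fatal before reproductive age).
t = 1.0
s = q * t / (1 - q)

# Share of children dying from malaria (AA) or sickle cell disease (SS),
# assuming heterozygotes (AS) suffer neither.
death_share = (1 - q) ** 2 * s + q ** 2 * t
print(f"q = {q:.3f}, s = {s:.3f}, child death share = {death_share:.3f}")
# -> roughly 0.11, in line with the 10-11% figure quoted above
```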

Comparing the most affected areas with those least affected, malaria may have been responsible for a ten percentage point difference in the probability of surviving to adulthood. In areas of high malaria transmission, the researchers estimate that life expectancy at birth was reduced by approximately five years.

Having established the magnitude of malaria’s mortality burden, the researchers then turn to its economic effects. Surprisingly, they find little reason to believe that malaria held back development. A simple life cycle model suggests that the disease was not very important, primarily because the vast majority of deaths that it caused were among the very young, in whom society had invested few resources.

This model-based finding is corroborated by the findings of a statistical examination. Within Africa, areas with higher malaria burden, as evidenced by the prevalence of the sickle cell trait, do not show lower levels of economic development or population density in the colonial era data examined in this study.

 

To contact the authors:  David Weil, david_weil@brown.edu

EFFECTS OF COAL-BASED AIR POLLUTION ON MORTALITY RATES: New evidence from nineteenth century Britain

Samuel Griffiths (1873) The Black Country in the 1870s. In Griffiths’ Guide to the iron trade of Great Britain.

Industrialised cities in mid-nineteenth century Britain probably suffered from similar levels of air pollution as urban centres in China and India do today. What’s more, the damage to health caused by the burning of coal was very high, reducing life expectancy by more than 5% in the most polluted cities like Manchester, Sheffield and Birmingham. It was also responsible for a significant proportion of the higher mortality rates in British cities compared with rural parts of the country.

 These are among the findings of new research by Brian Beach (College of William & Mary) and Walker Hanlon (NYU Stern School of Business), which is published in the Economic Journal. Their study shows the potential value of history for providing insights into the long-run consequences of air pollution.

From Beijing to Delhi and Mexico City to Jakarta, cities across the world struggle with high levels of air pollution. To what extent does severe air pollution affect health and broader economic development for these cities? While future academics will almost surely debate this question, assessing the long-run consequences of air pollution for modern cities will not be possible for decades.

But severe air pollution is not a new phenomenon; Britain’s industrial cities of the nineteenth century, for example, also faced very high levels of air pollution. Because of this, researchers argue that history has the potential to provide valuable insights into the long-run consequences of air pollution.

One challenge in studying historical air pollution is that direct pollution measures are largely unavailable before the mid-twentieth century. This study shows how historical pollution levels in England and Wales can be inferred by combining data on the industrial composition of employment in local areas in 1851 with information on the amount of coal used per worker in each industry.

This makes it possible to estimate the amount of coal used in each of the 581 districts covering all of England and Wales. Because coal was by far the most important pollutant in Britain in the nineteenth century (as well as much of the twentieth century), this provides a way of approximating local industrial pollution emission levels.
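As a concrete illustration of this imputation, the sketch below computes district-level coal use as local employment times industry coal intensity; the district names, employment counts and coal-per-worker figures are hypothetical placeholders, not the paper's data.

```python
# Sketch of the imputation described above: district-level industrial coal
# use = sum over industries of (local employment) x (coal per worker).
# All numbers below are invented for illustration.

# 1851 census employment by district and industry (workers)
employment = {
    "District A": {"metals": 5000, "textiles": 2000, "agriculture": 1000},
    "District B": {"metals": 200,  "textiles": 300,  "agriculture": 4000},
}

# Tons of coal burned per worker per year, by industry (illustrative)
coal_per_worker = {"metals": 12.0, "textiles": 4.0, "agriculture": 0.2}

def district_coal_use(emp_by_industry):
    """Estimated annual industrial coal use for one district (tons)."""
    return sum(n * coal_per_worker[ind] for ind, n in emp_by_industry.items())

for district, emp in employment.items():
    print(district, district_coal_use(emp))
```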

The results are consistent with what historical sources suggest: the researchers find high levels of coal use in a broad swath of towns stretching from Lancashire and the West Riding down into Staffordshire, as well as in the areas around Newcastle, Cardiff and Birmingham.

By comparing measures of local coal-based pollution to mortality data, the study shows that air pollution was a major contributor to mortality in Britain in the mid-nineteenth century. In the most polluted locations – places like Manchester, Sheffield and Birmingham – the results show that air pollution resulting from industrial coal use reduced life expectancy by more than 5%.

One potential concern is that locations with more industrial coal use could have had higher mortality rates for other reasons. For example, people living in these industrial areas could have been poorer, infectious disease may have been more common or jobs may have been more dangerous.

The researchers deal with this concern by looking at how coal use in some parts of the country affected mortality in other areas that were, given the predominant wind direction, typically downwind. They show that locations which were just downwind of major coal-using areas had higher mortality rates than otherwise similar locations which were just upwind of these areas.
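The logic of this comparison can be shown with a toy calculation: among otherwise similar neighbours of a coal-using area, mean mortality downwind should exceed mean mortality upwind if pollution, not local confounders, drives the excess. The districts and mortality rates below are invented for illustration, not the study's data.

```python
# Toy version of the downwind/upwind comparison described above.
# Each record is a district neighbouring a major coal-using area;
# 'downwind' flags whether prevailing winds carry pollution towards it.

districts = [
    {"name": "D1", "downwind": True,  "mortality": 24.1},  # deaths per 1,000
    {"name": "D2", "downwind": True,  "mortality": 23.4},
    {"name": "D3", "downwind": False, "mortality": 21.0},
    {"name": "D4", "downwind": False, "mortality": 20.6},
]

def mean_mortality(flag):
    rates = [d["mortality"] for d in districts if d["downwind"] == flag]
    return sum(rates) / len(rates)

# Positive gap suggests pollution raised mortality in downwind places.
print(mean_mortality(True) - mean_mortality(False))
```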

These results help to explain why cities in the nineteenth century were much less healthy than more rural areas – the so-called urban mortality penalty. Most existing work argues that the high mortality rates observed in British cities in the nineteenth century were due to the impact of infectious diseases, bad water and unclean food.

The new results show that in fact about one third of the higher mortality rate in cities in the nineteenth century was due to exposure to high levels of air pollution caused by the burning of coal by industry.

In addition to assessing the effects of coal use on mortality, the researchers use these effects to back out very rough estimates of historical particulate pollution levels. Their estimates indicate that by the mid-nineteenth century, industrialised cities in Britain were probably as polluted as industrial cities in places like China and India are today.

These findings shed new light on the impact of air pollution in nineteenth century Britain and lay the groundwork for further research analysing the long-run effects of air pollution in cities.

 

To contact the authors:  Brian Beach (bbbeach@wm.edu); Walker Hanlon (whanlon@stern.nyu.edu)

Managing the Economy, Managing the People: narratives of economic life in Britain from Beveridge to Brexit

by Jim Tomlinson (University of Glasgow)

 

‘It’s the economy, stupid’, like most clichés, both reveals and conceals important truths. The slogan suggests a hugely important truth about the post-1945 politics of the advanced democracies such as Britain: that economic issues have been crucial to government strategies and political arguments. What the cliché conceals is the need to examine what is understood by ‘the economy’, a term which has no fixed meaning and has been constantly re-worked over the years. Starting from those two points, this book provides a distinctive new account of British economic life since the 1940s, focussing upon how successive governments, in seeking to manage the economy, have sought simultaneously to ‘manage the people’: to try to manage popular understanding of economic issues.

The first half of the book analyses the development of the major narratives from the 1940s onwards. This covers the notion of ‘austerity’ and its particular meaning in the 1940s; the rise of a narrative of ‘economic decline’ from the late 1950s, and the subsequent attempts to ‘modernize’ the economy; the attempts to ‘roll back the state’ from the 1970s; the impact of ideas of ‘globalization’ in the 1990s; and, finally, the way the crisis of 2008/9 onwards was constructed as a problem of ‘debts and deficits’. The second part focuses on four key issues in attempts to ‘manage the people’: productivity, the balance of payments, inflation and unemployment. It shows how in each case governments sought to get the populace to understand these issues in a particular light, and shaped strategies to that end.

One conclusion of the book is that most representations of the key economic problems of the post-war period were grounded in Britain’s status as an industrial economy, and that de-industrialization has undermined these representations. Unemployment, from its origins in the late-Victorian period, was largely about the malfunctioning of industrial (and male) labour markets. De-industrialization, accompanied by the proliferation of precarious work, including much classified as ‘self-employment’, radically challenges our understanding of this problem, however much it remains the case that for the great bulk of the population selling their labour is key to their economic prosperity.

The concern with productivity was likewise grounded in the industrial sector. But outside marketed services, in non-marketed provision such as education, health and care, the problems of conceptualising, let alone measuring, productivity are immense. In a world where personal services of various kinds are becoming ever more important, traditional notions of productivity need a radical re-think.

Less obviously, the notion of a national rate of inflation, such as the Cost of Living Index and later the RPI, was grounded in attempts to measure the real wages of the industrial working class. With the value of housing now a key underpinning of consumption, and the ‘financialization’ of the economy, this traditional notion of inflation, measuring the cost of a basket of consumables against nominal wages, has been undermined. Asset prices, especially housing prices, matter much more to many wage earners, whilst the value of financial assets is also important to increasing numbers of people as the population ages.

Finally, the decline of concern with the balance of payments is linked to the rise in the relative importance of financial flows, making the manufacturing balance or the current account less pertinent. For many years now, Britain’s external payments have relied on the rates of return on overseas assets exceeding those on domestic assets held by foreigners. We are a very long way indeed from 1940s stories of ‘England’s bread hangs by Lancashire’s thread’.

De-industrialization has not only undercut the coherence and relevance of the four standard economic policy problems of the post-war years, but has also destroyed the primary audience that most post-war economic propaganda was aimed at: the industrial working class. While other audiences were not entirely neglected, it was the worker (usually the male worker), who was the prime target of the narratives and whose understandings and behaviour were seen as the key to the projected solutions.

A recurrent anxiety of this propaganda was the receptivity of those workers to its messages. This anxiety helps to explain much of the ‘simplified’ language of this propaganda, as well as its patterns of distribution. More fundamentally, this anxiety rested upon uncertainties about what kind of arguments a working-class audience would find congenial; there was perennial debate about the efficacy of appeals to individual as opposed to the ‘national’ interest. Above all, there was a moral message of distributive justice which infused much of the propaganda, ultimately grounded in the belief that working-class culture had within it ingrained notions of ‘fairness’ that had to be appealed to.

While ethical appeals continued to inform economic propaganda into the twenty-first century, the fragmentation of the old audience accelerated. In addition, given the upward lurch in inequality in the 1980s, and the following period of continuing growth of incomes right at the top of the distribution, appeals to ‘fairness’ have become much more difficult to make credible. Strikingly, concerns about inequality emerged across the political spectrum after the 2007/8 financial crisis, at the same time as the narrative of debts, deficits and austerity had driven post-crisis policies that increased  inequality. Widespread talk of ‘reducing inequality’, whilst having obvious political appeal, especially after Brexit, would seem to be largely rhetorical.

 

Managing the Economy, Managing the People: narratives of economic life in Britain from Beveridge to Brexit is published by Oxford University Press, 2017, ISBN 978-019-878609-2.

To contact the author: Jim.Tomlinson@Glasgow.ac.uk

Land reform and agrarian conflict in 1930s Spain

Jordi Domènech (Universidad Carlos III de Madrid) and Francisco Herreros (Institute of Policies and Public Goods, Spanish National Research Council, CSIC)

Government intervention in land markets is always fraught with potential problems. Intervention generates clearly demarcated groups of winners and losers as land is the main asset owned by households in predominantly agrarian contexts. Consequently, intervention can lead to large, generally welfare-reducing changes in the behaviour of the main groups affected by reform, and to policies being poorly targeted towards potential beneficiaries.

In this paper (available here), we analyse the impact of tenancy reform in the early 1930s on Spanish land markets. Adapting general laws to local and regional variation in land tenure patterns and to heterogeneity in rural contracts was one of the problems of agricultural policies in 1930s Spain. In the case of Catalonia, the interest lies in the adaptation of a centralized tenancy reform, aimed at fixed-rent contracts, to the sharecropping contracts that were predominant in Catalan agriculture. This was typically the case for sharecropping contracts on vineyards, notably the customary rabassa morta sharecropping contract, which had been subject to various legal changes in the late 18th and early 19th centuries. The 1930s are considered the culmination of a long period of conflict between the so-called rabassaires (sharecroppers under rabassa morta contracts) and owners of land.

The division between owners of land and tenants was one of the central cleavages in Catalonia in the 20th century. This was so even in an area that had seen substantial industrialization. In the early 1920s, work started on a Catalan law of rural contracts, aimed especially at sharecroppers. A law passed on 21st March 1934 allowed the re-negotiation of existing rural contracts and prohibited the eviction of tenants who had been under the same contract for less than six years. More importantly, it opened the door to forced sales of land to long-term tenants. Such legislative changes posed a threat to the status quo, and the Spanish Constitutional Court ruled the law unconstitutional.

The comparative literature on the impacts of land reform argues that land reform, in this case tenancy reform, can in fact change agrarian structures. When property rights are threatened, landowners react by selling land or interrupting existing tenancy contracts, mechanizing and hiring labourers. Agrarian structure is therefore endogenous to existing threats to property rights. The extent of insecurity in property rights in 1930s Catalonia can be seen in the wave of litigation over sharecropping contracts. Over 30,000 contracts were revised in the courts in late 1931 and 1932, which provoked satirical cartoons (Figure 1).

Figure 1. Revisions and the share of the harvest. Source: L’Esquella de la Torratxa, 2nd August 1932, p. 11.
Translation: The rabassaire question. Peasant: ‘You sweat by coming here to claim your part of the harvest; you would sweat more if you had to grow it yourself.’

The first wave of petitions to revise contracts led overwhelmingly to petitions being nullified by the courts. This was most pronounced in the Spanish Supreme Court, which ruled against the sharecropper in most of the roughly 30,000 petitions for contract revision. Nonetheless, sharecroppers were protected by the Catalan autonomous government. The political context in which the Catalan government operated became even more charged in October 1934. That month, with signs that the Centre-Right government was moving towards more reactionary positions, the Generalitat participated in a rebellion orchestrated by the Spanish Socialist Party (PSOE) and Left Republicans. It was in this context of suspended civil liberties that landowners gained a freer hand to evict unruly peasants. Under the new rules set by the military governor of Catalonia, sharecroppers who did not surrender their harvest could be evicted straight away.

We use the number of completed and initiated tenant evictions from October 1934 to around mid-1935 as the main dependent variable in the paper. Data were collected from a report produced by the main Catalan tenant union, Unió de Rabassaires (Rabassaires’ Union), published in late 1935 to publicize and denounce evictions and attempted evictions of tenants.

Combining the spatial analysis of eviction cases with individual information on evictors and evicted, we can be reasonably confident about several facts concerning evictions and terminated contracts in 1930s Catalonia. Our data show that rabassa morta legacies were not the main determinant of evictions. About 6 per cent of terminated contracts were open-ended rabassa morta contracts (arbitrarily set at 150 years in the graph). About 12 per cent of evictions were linked to contracts longer than 50 years, which were probably oral contracts (since Spanish legislation had set a maximum contract length of 50 years). Figure 2 gives the contract lengths of terminated and threatened contracts.

Figure 2. Histogram of contract lengths. Source: Own elaboration from Unió de Rabassaires, Els desnonaments rústics.

The spatial distribution of evictions is also consistent with the lack of historical legacies of conflict. Evictions were not more common in historical rabassa morta areas, nor were they typical of areas with a larger share of land planted with vines.

Our study provides a substantial revision of claims by unions and historians about very high levels of conflict in the Catalan countryside during the Second Republic. In many cases, there had been a long process of adaptation and fine-tuning of contractual forms to crops and to soil and climatic conditions, which increased the costs of altering existing institutional arrangements.

To contact the authors:

jdomenec@clio.uc3m.es

francisco.herreros@csic.es

Social Mobility among Christian Africans: Evidence from Anglican Marriage Registers in Uganda (1895-2011)

Felix Meier zu Selhausen (University of Sussex)
Marco H. D. Van Leeuwen (Utrecht University)
Jacob L. Weisdorf (University of Southern Denmark, CAGE, CEPR)

The arrival of Christian missionaries and the receptivity of African societies to formal education prompted a genuine schooling revolution during the colonial era. The bulk of primary education in the British colonies was provided by mission schools (Frankema 2012), and their historical distribution had a long-run effect on African development (e.g. Nunn 2010). To those with access, formal education under colonial rule provided new avenues of political influence and opportunities for social mobility. However, did mission schooling benefit a broad layer of the African population, or did it merely strengthen the power of pre-colonial elites? This paper addresses this question by investigating the social mobility of Christian converts in colonial Uganda.

The existing literature has conveyed two opposing arguments, based mainly on qualitative sources. On the one hand, scholars have stressed that British colonial officials discouraged post-primary education of the general African population, fearing that such education would nurture anti-colonial sentiments. As a result, the benefits of mission schooling are purported to have been restricted to sons of traditional chiefs and newly empowered elites, who aligned themselves with the British administration and took up the lion’s share of urban skilled occupations (Hanson 2003, Reid 2017). Such dynamics perpetuated the power of chiefs into the post-colonial era and contributed to a legacy of ‘decentralized despotism’ (Mamdani 1996). On the other hand, other studies have argued that mission schools became ‘colonial Africa’s chief generator of social mobility and stratification’, acting as a stepping stone to urban middle-class careers for a new generation of Africans (Iliffe 2007, p. 229).

This article explores intergenerational social mobility and colonial elite formation using the occupational titles of African grooms and their fathers who married in the prestigious Anglican Namirembe Cathedral in Kampala or in several rural parishes in Western Uganda between 1895 and 2011. The fact that sampled grooms celebrated an Anglican church marriage meant they were born to parents who, by their choice of religion and compliance with the by-laws of the Anglican Church, had positioned their offspring in a social network that afforded them a wide range of educational and occupational opportunities (Peterson 2016). This unique sample allows us to explore the impact of missionary schooling on the social mobility of converts between generations and uncover implications for colonial elite formation.

Social mobility in Kampala

To measure social mobility, we have grouped each occupation of 14,167 sampled Anglican father-son pairs into a hierarchical scheme of six social classes based on skill levels, using HISCLASS (Van Leeuwen and Maas 2011). As shown in Figure 1, we find that the occupational mobility of sampled grooms expanded dramatically during the colonial era. At the onset of British rule (1890-99), Buganda society was comparatively immobile, with three out of four sons remaining in the social class of their fathers. But by the 1910s, this had reversed, with three out of four sons moving to a different class. Careers in the colonial administration (chiefs, clerks) and the Anglican mission (teachers, priests) functioned as key steps on the ladder to upward mobility.
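As a concrete illustration of this kind of calculation, the sketch below classifies father-son pairs into numbered classes and computes a mobility rate and outflow shares; the class codes and pairs are invented for illustration and do not reproduce the paper's HISCLASS coding.

```python
# Toy mobility calculation on invented (father_class, son_class) pairs,
# with classes coded 1-6 as in a HISCLASS-style scheme.
from collections import Counter

pairs = [(1, 1), (4, 2), (4, 4), (6, 3), (2, 1), (4, 3)]  # (father, son)

# Mobility rate: share of sons in a different class from their father
mobile = sum(1 for father, son in pairs if son != father)
print(f"mobility rate: {mobile / len(pairs):.2f}")

# Outflow rates: for each father's class, the distribution of sons' classes
outflow = {}
for father, son in pairs:
    outflow.setdefault(father, Counter())[son] += 1
for father, destinations in sorted(outflow.items()):
    total = sum(destinations.values())
    shares = {son: round(n / total, 2)
              for son, n in sorted(destinations.items())}
    print(f"father class {father}: {shares}")
```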

Figure 1: Social mobility among Anglican grooms in Kampala, 1895-2011


What was the social background of those reaching the highest occupational classes? Table 1 zooms in on grooms’ social-class destinations relative to their social origins during the colonial era. It shows that African converts, benefiting from the new occupational opportunities opening up during the colonial period, were able to take large steps up the social ladder regardless of their social origin. A remarkable 45% of sons from farming family backgrounds (class IV) moved into white-collar work, which indicates that the colonial labour market was surprisingly conducive to social mobility among Anglican converts.

Table 1: Outflow mobility rates in Kampala, 1895-1962


Colonial elite formation: Decentralized despotism?

Did chiefs and their sons benefit disproportionately from occupational diversification under colonialism? Under indirect British rule, many traditional Baganda chiefs converted to Anglicanism and became colonial officials, employed to extract taxes and profits from cash-cropping farmers. This put them in a supreme position for consolidating their pre-colonial societal power. Despite such advantages, our microdata suggest that the privileged position of pre-colonial elites was not sustained over the colonial period. Figure 2 shows the probabilities of sons of chiefs (class I) versus sons of farmers and lower-class labourers (classes IV-VI) entering an elite position (class I). At the beginning of the colonial era, sons of chiefs were significantly more likely to reach the top of the social ladder. However, a remarkably fluid colonial labour market, based on meritocratic principles, gradually eroded their economic and political advantages. Towards the end of the colonial era, traditional claims to status no longer conferred automatic advantages upon the sons of chiefs, who lost their high social-status monopoly to a new Christian-educated and commercially orientated class of Ugandans from farming backgrounds (Hanson 2003).

Figure 2: Conditional probability of sons of chiefs and farmers in class I, Kampala


To access the abstract: http://onlinelibrary.wiley.com/doi/10.1111/ehr.12616/abstract

To contact the first author:
Twitter: @FelixMzS1

References

Frankema, E. (2012). ‘The origins of formal education in sub-Saharan Africa: was British rule more benign?’ European Review of Economic History 16(4): 335-55.

Hanson, E. (2003). Landed Obligation: The Practice of Power in Buganda. Portsmouth, NH: Heinemann.

Mamdani, M. (1996). Citizen and Subject: Contemporary Africa and the Legacy of Late Colonialism. Princeton: Princeton University Press.

Meier zu Selhausen, F., van Leeuwen, M.H.D. and Weisdorf, J. (2018). ‘Social mobility among Christian Africans: Evidence from Anglican marriage registers in Uganda, 1895-2011’. Economic History Review, forthcoming.

Nunn, N. (2010). ‘Religious Conversion in Colonial Africa’. American Economic Review: Papers and Proceedings 100(2): 147-52.

Peterson, D. (2016). ‘The Politics of Transcendence in Colonial Uganda’. Past and Present 230(1): 197-225.

Reid, R. J. (2017). A History of Modern Uganda. Cambridge: Cambridge University Press.

Van Leeuwen, M.H.D. and Maas, I. (2011). HISCLASS – A Historical International Social Class Scheme. Leuven: Leuven University Press.

EHS 2018 special: How the Second World War promoted racial integration in the American South

by Andreas Ferrara (University of Warwick)

African American and white employees working together during the Second World War. Available at <https://www.pinterest.com.au/pin/396950154628232921/>

European politicians face the challenge of integrating the 1.26 million refugees who arrived in 2015. Integration into the labour market is often discussed as key to social integration but empirical evidence for this claim is sparse.

My research contributes to the debate with a historical example from the American South where the Second World War increased the share of black workers in semi-skilled jobs such as factory work, jobs previously dominated by white workers.

I combine census and military records to show that the share of black workers in semi-skilled occupations in the American South increased as they filled vacancies created by wartime casualties among semi-skilled whites.

A fallen white worker in a semi-skilled occupation was replaced by 1.8 black workers on average. This raised the share of African Americans in semi-skilled jobs by 10% between 1940 and 1950.

Survey data from the South in 1961 reveal that this increased integration in the workplace led to improved social relations between black and white communities outside the workplace.

Individuals living in counties where war casualties brought more black workers into semi-skilled jobs between 1940 and 1950 were 10 percentage points more likely to have an interracial friendship, 6 percentage points more likely to live in a mixed-race neighbourhood, and 11 percentage points more likely to favour integration over segregation in general, as well as at school and at church. These positive effects are reported by both black and white respondents.

Additional analysis using county-level church membership data from 1916 to 1971 shows similar results. Counties where wartime casualties resulted in a more racially integrated labour force saw a 6 percentage point rise in the membership shares of churches that already held mixed-race services before the war.

The church-related results are especially striking. In several of his speeches, Dr Martin Luther King stated that 11am on Sunday is the most segregated hour in American life. And yet my analysis shows that workplace exposure between two groups can overcome even strongly embedded social divides such as churchgoing, which is particularly important in the South, the so-called Bible Belt.

This historical case study of the American South in the mid-twentieth century, where race relations were often tense, demonstrates that excluding refugees from the workforce may be ruling out a promising channel for integration.

Currently, almost all European countries bar refugees from participating in the labour market. Arguments put forward to justify this include fear of competition for jobs, concern about downward pressure on wages and a perceived need to deter economic migration.

While the mid-twentieth century American South is not Europe, the policy implication is to experiment more extensively with social integration through workplace integration measures. This not only concerns the refugee case but any country with socially and economically segregated minority groups.

from VOX – The return of regional inequality: Europe from 1900 to today

by Joan Rosés (LSE) and Nikolaus Wolf (Humboldt University)