THE IMPACT OF MALARIA ON EARLY AFRICAN DEVELOPMENT: Evidence from the sickle cell trait

Poster: “Keep out malaria mosquitoes, repair your torn screens”. U.S. Public Health Service, 1941–45.

While malaria historically claimed millions of African lives, it did not hold back the continent’s economic development. That is one of the findings of new research by Emilio Depetris-Chauvin (Pontificia Universidad Católica de Chile) and David Weil (Brown University), published in the Economic Journal.

Their study uses data on the prevalence of the gene that causes sickle cell disease to estimate death rates from malaria for the period before the Second World War. They find that in parts of Africa with high malaria transmission, one in ten children died from malaria or sickle cell disease before reaching adulthood – a death rate more than twice the current burden of malaria in these regions.

 

According to the World Health Organization, the malaria mortality rate declined by 29% between 2010 and 2015. This was a major public health accomplishment, although with 429,000 annual deaths, the disease remains a terrible scourge.

Countries where malaria is endemic are also, on average, very poor. This correlation has led economists to speculate about whether malaria is a driver of poverty. But addressing that issue is difficult because of a lack of data. Poverty in the tropics has long historical roots, and while there are good data on malaria prevalence in the period since the Second World War, there is no World Malaria Report for 1900, 1800 or 1700.

Biologists only came to understand the nature of malaria in the late nineteenth century. Even today, trained medical personnel have trouble distinguishing between malaria and other diseases without the use of microscopy or diagnostic tests. Accounts from travellers and other historical records provide some evidence of the impact of malaria going back millennia, but these are hardly sufficient to draw firm conclusions (Akyeampong 2006; Mabogunje and Richards 1985).

This study addresses the lack of information on malaria’s impact historically by using genetic data. In the worst afflicted areas, malaria left an imprint on the human genome that can be read today.

Specifically, the researchers look at the prevalence of the gene that causes sickle cell disease. Carrying one copy of this gene provided individuals with a significant level of protection against malaria, but people who carried two copies of the gene died before reaching reproductive age.

Thus, the degree of selective pressure exerted by malaria determined the equilibrium prevalence of the gene in the population. By measuring the prevalence of the gene in modern populations, it is possible to back out estimates of the severity of malaria historically.

In areas of high malaria transmission, 20% of the population carries the sickle cell trait. The researchers estimate that this implies that historically 10-11% of children died from malaria or sickle cell disease before reaching adulthood – a death rate more than twice the current burden of malaria in these regions.
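The population-genetics logic behind this back-calculation can be made concrete with a textbook balancing-selection model. The sketch below is illustrative rather than the authors’ exact procedure, and it assumes that carriers are fully protected from malaria mortality and that sickle cell homozygotes do not survive to reproduce.

```latex
% A = normal allele, S = sickle allele; s = childhood malaria mortality among non-carriers.
% Assumed fitnesses: w_{AA} = 1 - s, \; w_{AS} = 1, \; w_{SS} = 0.
% Heterozygote advantage gives an equilibrium frequency of S:
\[ \hat{q} = \frac{s}{s+1} \quad\Longleftrightarrow\quad s = \frac{q}{1-q}. \]
% A 20% carrier share pins down q via 2pq = 0.20, with p = 1 - q:
\[ q \approx 0.11, \qquad p \approx 0.89, \qquad s \approx 0.13. \]
% Childhood deaths per birth from malaria (AA children) plus sickle cell disease (SS children):
\[ s p^{2} + q^{2} \approx 0.13 \times 0.79 + 0.01 \approx 0.11, \]
% which reproduces the 10-11% mortality burden reported in the study.
```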

Comparing the most affected areas with those least affected, malaria may have been responsible for a ten percentage point difference in the probability of surviving to adulthood. In areas of high malaria transmission, the researchers estimate that life expectancy at birth was reduced by approximately five years.

Having established the magnitude of malaria’s mortality burden, the researchers then turn to its economic effects. Surprisingly, they find little reason to believe that malaria held back development. A simple life cycle model suggests that the disease was not very important, primarily because the vast majority of deaths that it caused were among the very young, in whom society had invested few resources.

This model-based finding is corroborated by statistical evidence. Within Africa, areas with a higher malaria burden, as evidenced by the prevalence of the sickle cell trait, do not show lower levels of economic development or population density in the colonial era data examined in this study.

 

To contact the authors:  David Weil, david_weil@brown.edu

EFFECTS OF COAL-BASED AIR POLLUTION ON MORTALITY RATES: New evidence from nineteenth century Britain

Samuel Griffiths (1873) The Black Country in the 1870s. In Griffiths’ Guide to the iron trade of Great Britain.

Industrialised cities in mid-nineteenth century Britain probably suffered levels of air pollution similar to those in urban centres in China and India today. What’s more, the damage to health caused by the burning of coal was very high, reducing life expectancy by more than 5% in the most polluted cities, such as Manchester, Sheffield and Birmingham. It was also responsible for a significant proportion of the higher mortality rates in British cities compared with rural parts of the country.

 These are among the findings of new research by Brian Beach (College of William & Mary) and Walker Hanlon (NYU Stern School of Business), which is published in the Economic Journal. Their study shows the potential value of history for providing insights into the long-run consequences of air pollution.

From Beijing to Delhi and Mexico City to Jakarta, cities across the world struggle with high levels of air pollution. To what extent does severe air pollution affect health and broader economic development for these cities? While future academics will almost surely debate this question, assessing the long-run consequences of air pollution for modern cities will not be possible for decades.

But severe air pollution is not a new phenomenon; Britain’s industrial cities of the nineteenth century, for example, also faced very high levels of air pollution. Because of this, researchers argue that history has the potential to provide valuable insights into the long-run consequences of air pollution.

One challenge in studying historical air pollution is that direct pollution measures are largely unavailable before the mid-twentieth century. This study shows how historical pollution levels in England and Wales can be inferred by combining data on the industrial composition of employment in local areas in 1851 with information on the amount of coal used per worker in each industry.

This makes it possible to estimate the amount of coal used in each of the 581 districts covering all of England and Wales. Because coal was by far the most important source of air pollution in Britain in the nineteenth century (as well as much of the twentieth century), this provides a way of approximating local industrial pollution emission levels.
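As a rough illustration of this bookkeeping (a sketch, not the authors’ code, with every industry name and number invented for the example), a district’s industrial coal use can be proxied by multiplying its employment in each industry by that industry’s coal use per worker:

```python
# Sketch: proxy a district's industrial coal use from its 1851 employment mix.
# Coal intensities (tons per worker per year) and employment counts are invented.
coal_per_worker = {"iron_and_steel": 40.0, "textiles": 8.0, "agriculture": 0.1}

district_employment = {
    "Manchester": {"iron_and_steel": 5_000, "textiles": 60_000, "agriculture": 2_000},
    "Rutland": {"iron_and_steel": 0, "textiles": 500, "agriculture": 9_000},
}

def estimated_coal_use(employment):
    """Total coal use implied by a district's industrial employment mix."""
    return sum(workers * coal_per_worker.get(industry, 0.0)
               for industry, workers in employment.items())

for district, employment in district_employment.items():
    print(f"{district}: {estimated_coal_use(employment):,.0f} tons per year")
```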

The results are consistent with what historical sources suggest: the researchers find high levels of coal use in a broad swath of towns stretching from Lancashire and the West Riding down into Staffordshire, as well as in the areas around Newcastle, Cardiff and Birmingham.

By comparing measures of local coal-based pollution to mortality data, the study shows that air pollution was a major contributor to mortality in Britain in the mid-nineteenth century. In the most polluted locations – places like Manchester, Sheffield and Birmingham – the results show that air pollution resulting from industrial coal use reduced life expectancy by more than 5%.

One potential concern is that locations with more industrial coal use could have had higher mortality rates for other reasons. For example, people living in these industrial areas could have been poorer, infectious disease may have been more common or jobs may have been more dangerous.

The researchers deal with this concern by looking at how coal use in some parts of the country affected mortality in other areas that were, given the predominant wind direction, typically downwind. They show that locations which were just downwind of major coal-using areas had higher mortality rates than otherwise similar locations which were just upwind of these areas.
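In stylised form, the comparison reduces to a difference in mean mortality between districts just downwind and just upwind of major coal-using areas; the minimal sketch below uses entirely invented districts and rates:

```python
# Sketch of the downwind/upwind comparison; all data below are invented.
import pandas as pd

districts = pd.DataFrame({
    "district": ["A", "B", "C", "D"],
    "downwind": [True, False, True, False],  # of a major coal user, given prevailing winds
    "mortality": [26.0, 22.5, 27.1, 23.0],   # deaths per 1,000 population
})

mean_mortality = districts.groupby("downwind")["mortality"].mean()
print(mean_mortality[True] - mean_mortality[False])  # excess mortality downwind
```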

These results help to explain why cities in the nineteenth century were much less healthy than more rural areas – the so-called urban mortality penalty. Most existing work argues that the high mortality rates observed in British cities in the nineteenth century were due to the impact of infectious diseases, bad water and unclean food.

The new results show that in fact about one third of the higher mortality rate in cities in the nineteenth century was due to exposure to high levels of air pollution from the burning of coal by industry.

In addition to assessing the effects of coal use on mortality, the researchers use these effects to back out very rough estimates of historical particulate pollution levels. Their estimates indicate that by the mid-nineteenth century, industrialised cities in Britain were probably as polluted as industrial cities in places like China and India are today.

These findings shed new light on the impact of air pollution in nineteenth century Britain and lay the groundwork for further research analysing the long-run effects of air pollution in cities.

 

To contact the authors:  Brian Beach (bbbeach@wm.edu); Walker Hanlon (whanlon@stern.nyu.edu)

Managing the Economy, Managing the People: narratives of economic life in Britain from Beveridge to Brexit

by Jim Tomlinson (University of Glasgow)

 

‘It’s the economy, stupid’, like most clichés, both reveals and conceals important truths. The slogan suggests a hugely important truth about the post-1945 politics of the advanced democracies such as Britain: that economic issues have been crucial to government strategies and political arguments. What the cliché conceals is the need to examine what is understood by ‘the economy’, a term which has no fixed meaning and has been constantly re-worked over the years. Starting from those two points, this book provides a distinctive new account of British economic life since the 1940s, focussing upon how successive governments, in seeking to manage the economy, have sought simultaneously to ‘manage the people’: to try to manage popular understanding of economic issues.

The first half of the book analyses the development of the major narratives from the 1940s onwards. This covers the notion of ‘austerity’ and its particular meaning in the 1940s; the rise of a narrative of ‘economic decline’ from the late 1950s, and the subsequent attempts to ‘modernize’ the economy; the attempts to ‘roll back the state’ from the 1970s; the impact of ideas of ‘globalization’ in the 1990s; and, finally, the way the crisis of 2008/9 onwards was constructed as a problem of ‘debts and deficits’. The second part focuses on four key issues in attempts to ‘manage the people’: productivity, the balance of payments, inflation and unemployment. It shows how in each case governments sought to get the populace to understand these issues in a particular light, and shaped strategies to that end.

One conclusion of the book is that most representations of the key economic problems of the post-war period were grounded in Britain’s character as an industrial economy, and that de-industrialization has undermined these representations. Unemployment, from its origins in the late-Victorian period, was largely about the malfunctioning of industrial (and male) labour markets. De-industrialization, accompanied by the proliferation of precarious work, including much classified as ‘self-employment’, radically challenges our understanding of this problem, however much it remains the case that for the great bulk of the population selling their labour is key to their economic prosperity.

The concern with productivity was likewise grounded in the industrial sector. But outside marketed services, in non-marketed provision such as education, health and care, the problems of conceptualising, let alone measuring, productivity are immense. In a world where personal services of various kinds are becoming ever more important, traditional notions of productivity need a radical re-think.

Less obviously, the notion of a national rate of inflation, such as the Cost of Living Index and later the RPI, was grounded in attempts to measure the real wages of the industrial working class. With the value of housing now a key underpinning of consumption, and the ‘financialization’ of the economy, this traditional notion of inflation, measuring the cost of a basket of consumables against nominal wages, has been undermined. Asset prices, especially housing prices, matter much more to many wage earners, whilst the value of financial assets is also important to increasing numbers of people as the population ages.

Finally, the decline of concern with the balance of payments is linked to the rise in the relative importance of financial flows, making the manufacturing balance or the current account less pertinent. For many years now Britain’s external payments have relied on rates of return on overseas assets exceeding those on domestic assets held by foreigners. We are a very long way indeed from 1940s stories of ‘England’s bread hangs by Lancashire’s thread’.

De-industrialization has not only undercut the coherence and relevance of the four standard economic policy problems of the post-war years, but has also destroyed the primary audience that most post-war economic propaganda was aimed at: the industrial working class. While other audiences were not entirely neglected, it was the worker (usually the male worker), who was the prime target of the narratives and whose understandings and behaviour were seen as the key to the projected solutions.

A recurrent anxiety of this propaganda was the receptivity of those workers to its messages. This anxiety helps to explain much of the ‘simplified’ language of this propaganda, as well as its patterns of distribution. More fundamentally, this anxiety rested upon uncertainties about what kind of arguments a working-class audience would find congenial; there was perennial debate about the efficacy of appeals to individual as opposed to the ‘national’ interest. Above all, there was a moral message of distributive justice which infused much of the propaganda, ultimately grounded in the belief that working-class culture had within it ingrained notions of ‘fairness’ that had to be appealed to.

While ethical appeals continued to inform economic propaganda into the twenty-first century, the fragmentation of the old audience accelerated. In addition, given the upward lurch in inequality in the 1980s, and the following period of continuing growth of incomes right at the top of the distribution, appeals to ‘fairness’ have become much more difficult to make credible. Strikingly, concerns about inequality emerged across the political spectrum after the 2007/8 financial crisis, at the same time as the narrative of debts, deficits and austerity had driven post-crisis policies that increased  inequality. Widespread talk of ‘reducing inequality’, whilst having obvious political appeal, especially after Brexit, would seem to be largely rhetorical.

 

Managing the Economy, Managing the People: narratives of economic life in Britain from Beveridge to Brexit is published by Oxford University Press, 2017, ISBN 978-019-878609-2

To contact the author: Jim.Tomlinson@Glasgow.ac.uk

Land reform and agrarian conflict in 1930s Spain

Jordi Domènech (Universidad Carlos III de Madrid) and Francisco Herreros (Institute of Policies and Public Goods, Spanish Higher Scientific Council)

Government intervention in land markets is always fraught with potential problems. Intervention generates clearly demarcated groups of winners and losers as land is the main asset owned by households in predominantly agrarian contexts. Consequently, intervention can lead to large, generally welfare-reducing changes in the behaviour of the main groups affected by reform, and to policies being poorly targeted towards potential beneficiaries.

In this paper (available here), we analyse the impact of tenancy reform on Spanish land markets in the early 1930s. Adapting general laws to local and regional variation in land tenure patterns and to heterogeneity in rural contracts was one of the recurring problems of agricultural policy in 1930s Spain. In Catalonia, the interest of the case lies in the adaptation of a centralized tenancy reform, aimed at fixed-rent contracts, to the sharecropping contracts that were predominant in Catalan agriculture. This was typically the case for sharecropping contracts on vineyards, in particular the customary rabassa morta contract, which had been subject to various legal changes in the late 18th and early 19th centuries. The 1930s are generally considered the culmination of a long period of conflict between the so-called rabassaires (sharecroppers under rabassa morta contracts) and landowners.

The division between landowners and tenants was one of the central cleavages of Catalonia in the 20th century, even in an area that had seen substantial industrialization. In the early 1920s, work started on a Catalan law of rural contracts, aimed especially at sharecroppers. A law passed on 21st March 1934 allowed the re-negotiation of existing rural contracts and prohibited the eviction of tenants who had been under the same contract for fewer than six years. More importantly, it opened the door to forced sales of land to long-term tenants. These legislative changes posed a threat to the status quo, and the Spanish Constitutional Court ruled the law unconstitutional.

The comparative literature on the impacts of land reforms argues that land reform, in this case tenancy reform, can in fact change agrarian structures. When property rights are threatened, landowners react by selling land or interrupting existing tenancy contracts, mechanizing, and hiring labourers. Agrarian structure is therefore endogenous to existing threats to property rights. The extent of insecurity in property rights in 1930s Catalonia can be seen in the wave of litigation over sharecropping contracts: over 30,000 contracts were revised in the courts in late 1931 and 1932, which provoked satirical cartoons (Figure 1).

Figure 1. Revisions and the share of the harvest. Source: L’Esquella de la Torratxa, 2nd August 1932, p. 11.
Translation: The rabassaire question. Peasant: ‘You sweat coming here to claim your part of the harvest; you would sweat more if you had to grow it yourself.’

The first wave of petitions to revise contracts was overwhelmingly unsuccessful: the Spanish Supreme Court ruled against the sharecropper in most of the roughly 30,000 petitions for contract revision. Nonetheless, sharecroppers were protected by the Catalan autonomous government. The political context in which the Catalan government operated became even more charged in October 1934. That month, with signs that the Centre-Right government was moving towards more reactionary positions, the Generalitat participated in a rebellion orchestrated by the Spanish Socialist Party (PSOE) and Left Republicans. In the ensuing context of suspended civil liberties, landowners had a freer hand to evict unruly peasants. Under the new rules set by the new military governor of Catalonia, sharecroppers who did not surrender their harvest could be evicted straight away.

We use the number of completed and initiated tenant evictions from October 1934 to around mid-1935 as the main dependent variable in the paper. Data were collected from a report produced by the main Catalan tenant union, Unió de Rabassaires (Rabassaires’ Union), published in late 1935 to publicize and denounce evictions and attempted evictions of tenants.

Combining the spatial analysis of eviction cases with individual information on evictors and evicted, we can be reasonably confident about several facts around evictions and terminated contracts in 1930s Catalonia. Our data show that rabassa morta legacies were not the main determinant of evictions. About 6 per cent of terminated contracts were open-ended rabassa morta contracts (arbitrarily set at 150 years in the graph). About 12 per cent of evictions were linked to contracts longer than 50 years, which were probably oral contracts (since Spanish legislation had set a maximum of 50 years). Figure 2 gives the contract lengths of terminated and threatened contracts.

Figure 2. Histogram of contract lengths. Source: Own elaboration from Unió de Rabassaires, Els desnonaments rústics.

The spatial distribution of evictions is also consistent with the lack of historical legacies of conflict. Evictions were not more common in historical rabassa morta areas, nor were they typical of areas with a larger share of land planted with vines.

Our study provides a substantial revision of claims by unions and historians about very high levels of conflict in the Catalan countryside during the Second Republic. In many cases, there had been a long process of adaptation and fine-tuning of contractual forms to crops and to soil and climatic conditions, which increased the costs of altering existing institutional arrangements.

To contact the authors:

jdomenec@clio.uc3m.es

francisco.herreros@csic.es

Social Mobility among Christian Africans: Evidence from Anglican Marriage Registers in Uganda (1895-2011)

Felix Meier zu Selhausen (University of Sussex)
Marco H. D. Van Leeuwen (Utrecht University)
Jacob L. Weisdorf (University of Southern Denmark, CAGE, CEPR)

The arrival of Christian missionaries and the receptivity of African societies to formal education prompted a genuine schooling revolution during the colonial era. The bulk of primary education in the British colonies was provided by mission schools (Frankema 2012), and their historical distribution had a long-run effect on African development (e.g. Nunn 2010). To those with access, formal education under colonial rule provided new avenues of political influence and opportunities for social mobility. However, did mission schooling benefit a broad layer of the African population, or did it merely strengthen the power of pre-colonial elites? This paper addresses this question by investigating social mobility of Christian converts in colonial Uganda.

The existing literature has conveyed two opposing arguments, based mainly on qualitative sources. On the one hand, scholars have stressed that British colonial officials discouraged post-primary education of the general African population, fearing that such education would nurture anti-colonial sentiments. As a result, the benefits of mission schooling are purported to have been restricted to sons of traditional chiefs and newly empowered elites, who aligned themselves with the British administration and took up the lion’s share of urban skilled occupations (Hanson 2003, Reid 2017). Such dynamics perpetuated the power of chiefs into the post-colonial era and contributed to a legacy of ‘decentralized despotism’ (Mamdani 1996). On the other hand, other studies have argued that mission schools became ‘colonial Africa’s chief generator of social mobility and stratification’, acting as a stepping stone to urban middle-class careers for a new generation of Africans (Iliffe 2007, p. 229).

This article explores intergenerational social mobility and colonial elite formation using the occupational titles of African grooms and their fathers who married in the prestigious Anglican Namirembe Cathedral in Kampala or in several rural parishes in Western Uganda between 1895 and 2011. The fact that sampled grooms celebrated an Anglican church marriage meant they were born to parents who, by their choice of religion and compliance with the by-laws of the Anglican Church, had positioned their offspring in a social network that afforded them a wide range of educational and occupational opportunities (Peterson 2016). This unique sample allows us to explore the impact of missionary schooling on the social mobility of converts between generations and uncover implications for colonial elite formation.

Social mobility in Kampala

To measure social mobility, we have grouped each occupation of 14,167 sampled Anglican father-son pairs into a hierarchical scheme of six social classes based on skill levels, using HISCLASS (Van Leeuwen and Maas 2011). As shown in Figure 1, we find that the occupational mobility of sampled grooms expanded dramatically during the colonial era. At the onset of British rule (1890-99), Buganda society was comparatively immobile, with three out of four sons remaining in the social class of their fathers. By the 1910s, this had reversed, with three out of four sons moving to a different class. Careers in the colonial administration (chiefs, clerks) and the Anglican mission (teachers, priests) functioned as key steps on the ladder to upward mobility.
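In sketch form, the headline mobility measure is the share of father-son pairs whose class codes differ, computed by marriage decade. The toy records below are invented for illustration and are not the authors’ data:

```python
# Sketch: share of grooms in a different HISCLASS-style class than their father.
import pandas as pd

pairs = pd.DataFrame({
    "decade": [1890, 1890, 1910, 1910, 1950],
    "father_class": [4, 1, 4, 4, 2],  # class codes, 1 (high) to 6 (low)
    "son_class": [4, 1, 2, 1, 2],
})

pairs["mobile"] = pairs["father_class"] != pairs["son_class"]
print(pairs.groupby("decade")["mobile"].mean())  # mobility rate per decade
```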

Figure 1: Social mobility among Anglican grooms in Kampala, 1895-2011


What was the social background of those reaching the highest occupational classes? Table 1 zooms in on grooms’ social-class destination relative to their social origin during the colonial era. It shows that African converts, benefiting from the new occupational opportunities opening up during the colonial period, were able to take large steps up the social ladder regardless of their social origin. A remarkable 45% of sons from farming family backgrounds (class IV) moved into white-collar work, which indicates that the colonial labour market was surprisingly conducive to social mobility among Anglican converts.

Table 1: Outflow mobility rates in Kampala, 1895-1962


Colonial elite formation: Decentralized despotism?

Did chiefs and their sons benefit disproportionately from occupational diversification under colonialism? Under indirect British rule, many traditional Baganda chiefs converted to Anglicanism and became colonial officials, employed to extract taxes and profits from cash-cropping farmers. This put them in a supreme position for consolidating their pre-colonial societal power. Despite such advantages, our microdata suggest that the privileged position of pre-colonial elites was not sustained over the colonial period. Figure 2 shows the probabilities of sons of chiefs (class I) versus farmers and lower-class labourers (classes IV-VI) entering an elite position (class I). At the beginning of the colonial era, sons of chiefs were significantly more likely to reach the top of the social ladder. However, a remarkably fluid colonial labour market, based on meritocratic principles, gradually eroded their economic and political advantages. Towards the end of the colonial era, traditional claims to status no longer conferred automatic advantages upon the sons of chiefs, who lost their high social-status monopoly to a new Christian-educated and commercially orientated class of Ugandans from farming backgrounds (Hanson 2003).

Figure 2: Conditional probability of sons of chiefs and farmers in class I, Kampala


To access the abstract: http://onlinelibrary.wiley.com/doi/10.1111/ehr.12616/abstract

To contact the first author:
Twitter: @FelixMzS1

References

Frankema, E. (2012). ‘The origins of formal education in sub-Saharan Africa: was British rule more benign?’ European Review of Economic History 16(4): 335-55.

Hanson, E. (2003). Landed Obligation: The Practice of Power in Buganda. Portsmouth, NH: Heinemann.

Iliffe, J. (2007). Africans: The History of a Continent, 2nd edn. Cambridge: Cambridge University Press.

Mamdani, M. (1996). Citizen and Subject: Contemporary Africa and the Legacy of Late Colonialism. Princeton: Princeton University Press.

Meier zu Selhausen, F., van Leeuwen, M. H. D. and Weisdorf, J. (2018). ‘Social mobility among Christian Africans: evidence from Anglican marriage registers in Uganda, 1895-2011’. Economic History Review, forthcoming.

Nunn, N. (2010). ‘Religious Conversion in Colonial Africa’. American Economic Review: Papers and Proceedings 100(2): 147-52.

Peterson, D. (2016). ‘The Politics of Transcendence in Colonial Uganda’. Past and Present 230(1): 197-225.

Reid, R. J. (2017). A History of Modern Uganda. Cambridge: Cambridge University Press.

Van Leeuwen, M.H.D. and Maas, I. (2011). HISCLASS – A Historical International Social Class Scheme. Leuven: Leuven University Press.

EHS 2018 special: How the Second World War promoted racial integration in the American South

by Andreas Ferrara (University of Warwick)

African American and White Employees Working Together during WWII. Available at <https://www.pinterest.com.au/pin/396950154628232921/>

European politicians face the challenge of integrating the 1.26 million refugees who arrived in 2015. Integration into the labour market is often discussed as key to social integration but empirical evidence for this claim is sparse.

My research contributes to the debate with a historical example from the American South where the Second World War increased the share of black workers in semi-skilled jobs such as factory work, jobs previously dominated by white workers.

I combine census and military records to show that the share of black workers in semi-skilled occupations in the American South increased as they filled vacancies created by wartime casualties among semi-skilled whites.

A fallen white worker in a semi-skilled occupation was replaced by 1.8 black workers on average. This raised the share of African Americans in semi-skilled jobs by 10% between 1940 and 1950.

Survey data from the South in 1961 reveal that this increased integration in the workplace led to improved social relations between black and white communities outside the workplace.

Individuals living in counties where war casualties brought more black workers into semi-skilled jobs between 1940 and 1950 were 10 percentage points more likely to have an interracial friendship, 6 percentage points more likely to live in a mixed-race neighbourhood, and 11 percentage points more likely to favour integration over segregation in general, as well as at school and at church. These positive effects are reported by both black and white respondents.

Additional analysis using county-level church membership data from 1916 to 1971 shows similar results. Counties where wartime casualties resulted in a more racially integrated labour force saw a 6 percentage point rise in the membership shares of churches that had already held mixed-race services before the war.

The church-related results are especially striking. In several of his speeches, Dr Martin Luther King stated that 11am on Sunday is the most segregated hour in American life. And yet my analysis shows that workplace exposure of two groups can overcome even strongly embedded social divides such as churchgoing, which is particularly important in the South, the so-called Bible Belt.

This historical case study of the American South in the mid-twentieth century, where race relations were often tense, demonstrates that excluding refugees from the workforce may be ruling out a promising channel for integration.

Currently, almost all European countries bar refugees from participating in the labour market. Arguments put forward to justify this include fear of competition for jobs, concern about downward pressure on wages and a perceived need to deter economic migration.

While the mid-twentieth century American South is not Europe, the policy implication is to experiment more extensively with social integration through workplace integration measures. This not only concerns the refugee case but any country with socially and economically segregated minority groups.

from VOX – The return of regional inequality: Europe from 1900 to today

by Joan Rosés (LSE) and Nikolaus Wolf (Humboldt University)

 

Are businessmen from Mars and businesswomen from Venus? An analysis of female business success and failure in Victorian and Edwardian England

by Jennifer Aston (Oxford University)  and Paulo di Martino (University of Birmingham)

The full paper was published in the Economic History Review, accessible here

 

Fashion in Edwardian England

Do women and men trade in different ways? If so, why? And are men more or less successful than women? These are very important questions not only for the academic debate, but also for the policy implications that might emerge, especially in countries such as the UK where, rightly or wrongly, we believe in personal entrepreneurship as one of the main antidotes to unemployment and to the crisis of big business.

In economic history, it has traditionally been argued that women and men traded in similar ways up to the industrial revolution but that, since then, women have been progressively relegated to a “separate sphere”, allowed, at most, some engagement with naturally “female” occupations such as textiles or food provision. Although more recent literature has strongly undermined this view, a lot of ground still has to be covered, especially for the period after the 1850s.

We approach this debate by starting with a simple question about business “success” across gender: did women fail more often than men? Thanks to the reconstruction of original data on personal bankruptcy derived from contemporary official publications by the Board of Trade, this research suggests that this was not the case. In fact, depending on how prudently the data on the number of female entrepreneurs are read, women appear more successful, at least in keeping their businesses alive.

This finding, however, only paved the way for more questions. In particular, had the narrative been true that women dealt only with traditional and safe industries and operated in semi-informal businesses, what we observe through the lens of official statistics would be just a distorted view. This research therefore focussed on other primary sources: the reports of about 100 women whose businesses failed around the turn of the century. The findings support the initial hypothesis: although smaller than their male counterparts (hence, in fact, riskier), female businesses were not hidden away from the public sphere, the official trading places, or the rules of the formal credit market. So the boarding house keeper Eleanor Bosito and the hotelier Esther Brandon were declared bankrupt and subjected to formal proceedings despite having very few creditors, all of whom lived within five miles of the two women’s businesses. Another trader, Agnes, with unsecured debts of about £160, faced bankruptcy as a result of a petition filed by Jane Davis, a widow who lived less than half a mile from Agnes’s home and had lent her the sum of £5. This was the same destiny faced by Elizabeth Goodchild, a businesswoman who, contrary to the other cases, operated on a large scale with suppliers and clients from all around Britain and Europe. This evidence reveals that small-scale trade was not necessarily the rule for women and that, even when it was, it did not coincide with informality or sheltering from the “rules of the game”.

Businesswomen, then, did not come from, nor trade on, a different planet, and certainly did not need the patronising protection of a male-dominated institutional environment. Instead, the legal system forged ad hoc rules for married women, via specific provisions in bankruptcy laws which lifted them from any responsibility. This level of protection, similar only to that available to lunatics and children, proved ineffective – or, in fact, the perfect background for fraud: in 1899 a spinster who was due to be declared bankrupt got married before the actual beginning of the procedure, thus avoiding any legal consequence (and, hopefully, having found love too).

In conclusion, this research indicates that Victorian and Edwardian businesswomen were perfectly able to trade in a fashion similar to that of their male counterparts and, if anything, were more successful. This leads to a basic and probably intuitive policy implication: if we want more women to engage successfully in business, all we have to do is remove the economic, social and cultural barriers that limit their access to opportunities.

WHEN ART BECAME AN ATTRACTIVE INVESTMENT: New evidence on the valuation of artworks in wartime France

by Kim Oosterlinck (Université Libre de Bruxelles)

 

Scene from the Degenerate Art auction, spring 1938, published in a Swiss newspaper; works by Pablo Picasso: Head of a Woman (lot 117), Two Harlequins (lot 115). “Paintings from the degenerate art action will now be offered on the international art market. In so doing we hope at least to make some money from this garbage”, wrote Joseph Goebbels in his diaries. From Wikipedia.

The art market in France during the Nazi occupation provided one of the best available investment opportunities, according to research published in the Economic Journal. Using an original database to recreate an art market price index for the period 1937-1947, his study shows that in a risk-return framework, gold was the only serious alternative to art.

The research indicates that discretion, the inflation-proof character of art, the absence of market intervention and the possibility of reselling works abroad all played a crucial role in the valuation of artworks. Some investors were ready to go to the black market to acquire assets that could easily be resold abroad. But for those who preferred to stay on the side of legality, the art market provided an attractive alternative.

The author notes that the French art market during the occupation has been the subject of numerous publications. But most of these focus on the fate of looted artworks, with limited attention given to the art market itself.

What’s more, previous research on the economics of art usually considers artworks as a poor investment. But the case of occupied France shows that in extreme circumstances, artworks may prove extremely attractive investment vehicles.

During wartime, illegal activities and the risk of being forced to flee the country increased the appeal of ‘discreet assets’ – ones that allow the storage of a large amount of value in small and easily transportable goods.

By comparing the price index for small and large artworks, the new study establishes that investors were looking for smaller artworks, especially just before the German invasion and during the period 1942-1943, when the black market flourished.

Non-pecuniary motives for buying art, such as ‘conspicuous consumption’, are often thought of as playing an important role in art valuation. The new research analyses this point for occupied France by exploiting the distinction made by the Nazis between ‘degenerate’ and ‘non-degenerate’ artworks.

Pricing of ‘degenerate’ works was indeed affected by the impossibility of engaging in their conspicuous consumption. The price difference between these two categories of artworks is clear at the beginning of the occupation, when the Nazi policy towards ‘degenerate’ artworks held in France had not been clearly spelled out.

The difference gradually vanished as it became known that Hitler took a favourable view of French ‘artistic decadence’ and was not planning to have these works destroyed as long as they remained in France.

Discretion does not only concern artworks, the researcher notes. Other discreet assets, such as collectible stamps, also experienced sharp price increases during the Nazi occupation of France. Assets that are easy to transport and hide therefore have characteristics that are valued by some investors during troubled times.

The interest in discreet artworks goes beyond wartime. At any point, tax evaders may be willing to buy art or other discreet assets to hide illicit profits or to diminish their tax burden. As a result, when wealth and wealth inequality increase, so does demand for discreet assets.

Whereas previous research traditionally attributes these price increases to social competition, the new study suggests an alternative explanation: assets that facilitate tax evasion should fetch a higher price in an environment characterised by increasing wealth inequality. The research thus opens the door to a different interpretation of the high demand for artworks in Japan in the 1990s or in China today.

To contact the author: koosterl@ulb.ac.be

British exports and American tariffs, 1870-1913

by Brian D Varian (Swansea University)

S. B. Saul (1965) once referred to late nineteenth-century Britain as the ‘export economy’. During this period, one of Britain’s largest export markets – in some years, the largest market – was the United States. To the United States, Britain exported a range of (mainly manufactured) goods spanning such industries as iron, steel, tinplate and textiles, among numerous others.

A forthcoming article in the Economic History Review argues that the total volume of British exports to the United States was significantly affected by American tariffs over the period 1870-1913. The argument runs contrary to the more general finding of Jacks et al. (2010) that Britain’s trade with a sample of countries, i.e. not just the United States, was uninfluenced by foreign tariffs.

This argument complements previous studies of specific commodities that Britain exported to the United States in the late nineteenth century. Irwin (2000) found that Britain’s tinplate exports to the United States were indeed responsive to changes in the American duty on tinplate. Inwood and Keay (2015) reached a similar conclusion regarding Britain’s pig iron exports to the United States. However, as this research claims, the importance of American tariffs for the volume of British exports was not limited to certain commodities, but rather applied to the bilateral trade flow as a whole.

The United States imposed different duties on different commodities. Because the composition of commodities that the United States imported from all countries collectively differed from the composition of commodities that it imported from Britain, the average American tariff is an inaccurate measure of the tariff level encountered specifically by British exports to the United States. For this reason, this research reconstructs an annual series of the bilateral American tariff toward Britain for the period 1870-1913, using the disaggregated data reported in the historical trade statistics of the United States. This reconstructed series is crucial to the argument.
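In sketch form, the bilateral ad valorem equivalent is total duties collected on imports from Britain divided by the total value of those imports, built up commodity by commodity. The figures below are invented for illustration:

```python
# Sketch: bilateral ad valorem equivalent (AVE) tariff from disaggregated data.
# Each entry: commodity -> (import value from Britain in $, duty collected in $).
imports_from_britain = {
    "tinplate": (10_000_000, 4_500_000),
    "woollen_goods": (20_000_000, 9_000_000),
    "pig_iron": (5_000_000, 1_000_000),
}

total_value = sum(value for value, _ in imports_from_britain.values())
total_duty = sum(duty for _, duty in imports_from_britain.values())
bilateral_tariff = total_duty / total_value  # weighted by Britain's actual export mix
print(f"Bilateral AVE tariff toward Britain: {bilateral_tariff:.1%}")
```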

Figure: the average American tariff and the reconstructed bilateral American tariff toward Britain, 1870-1913 (ad valorem equivalent percentages).

The figure above presents the average American tariff and the reconstructed bilateral American tariff toward Britain, both expressed as percentages (ad valorem equivalent percentages, to be precise). In the 1890s, the average American tariff and the bilateral American tariff toward Britain do not follow a similar course. For example, whereas the tariff revisions of the Wilson-Gorman Tariff Act of 1894 had little effect on the average American tariff, these tariff revisions resulted in the bilateral American tariff toward Britain declining from 45% in 1893/4 to 31% in 1894/5.

This econometric analysis of the Anglo-American bilateral trade flow relies upon the empirically correct bilateral American tariff toward Britain. In this respect, the forthcoming article in the Economic History Review departs from other historical studies of trade, which use average tariffs as approximations of bilateral tariffs.

Perhaps the reconstruction of another country’s bilateral tariff toward Britain—Germany’s tariff toward Britain is an obvious choice—would reveal that the effect of foreign tariffs on British exports was more widespread than just the bilateral American case. Nevertheless, the importance of the bilateral American case should not be diminished, as the United States was a large export market of Britain, the ‘export economy’ of the late nineteenth century.

 

Link to the article: http://onlinelibrary.wiley.com/doi/10.1111/ehr.12486/full

To contact the author: b.d.varian@swansea.ac.uk

 

References

Inwood, K. and Keay, I., ‘Transport costs and trade volumes: evidence from the trans-Atlantic iron trade, 1870-1913’, Journal of Economic History, 75 (2015), pp. 95-124.

Irwin, D. A., ‘Did late-nineteenth-century US tariffs promote infant industries? Evidence from the tinplate industry’, Journal of Economic History, 60 (2000), pp. 335-60.

Jacks, D., Meissner, C. M., and Novy, D., ‘Trade costs in the first wave of globalization’, Explorations in Economic History, 47 (2010), pp. 127-41.

Saul, S. B., ‘The export economy, 1870-1914’, Bulletin of Economic Research, 17 (1965), pp. 5-18.