by Elizabeth A. Faulkner (University of Hull) and Cathal Rogers (Staffordshire University)
This paper was presented at the EHS Annual Conference 2019 in Belfast.
The trafficking of children receives extensive media coverage today, with endless tales of exploited and enslaved children. But such reports are nothing new.
For example, in 1923 the League of Nations Advisory Committee on the Traffic in Women and Children heard that ‘The White slave traffic assumed large proportions; young girls – and even young boys – swelled the personnel of the over-numerous houses of ill-fame’. The purpose of our study is to identify whether fears of the sexual enslavement of children during the era were legitimate or the product of a ‘moral panic’.
Human trafficking is a relatively new area of international law, but it has surfaced repeatedly over the past century as a matter of grave moral concern at the international level. In 1921, the League of Nations passed the International Convention for the Suppression of the Traffic in Women and Children.
This Convention marked a notable departure from the overtly racialised focus of previous attempts to address human trafficking, namely the 1904 and 1910 White Slave Traffic Conventions.
Our study investigates the trafficking and exploitation of children between 1922 and 1929 through an examination of the archives of the League of Nations, Geneva. The inquiry sought to uncover recorded cases of child trafficking through focusing on the Summary of Annual Reports submitted to the Traffic in Women and Children Committee.
In terms of references to ‘trafficking’, of the 324 responses (1922-1929) considered by this inquiry, only 11 contained references to trafficking. As a percentage, that is just 3.4% of responses.
Our research seeks to understand the exploitation of children during the 1920s, beyond ‘trafficking for immoral purposes’, by identifying the types of exploitation that children experienced globally, whether for commercial or economic gain, sexual gratification or adoption.
The aim of the research is to challenge and enrich our understanding of morals, race and the exploitation of children in the nineteenth and early twentieth centuries, through deconstructing fears of the sexual enslavement of children.
The inquiry seeks to redress the racial bias of previous examinations of the human trafficking of the era and to expand our knowledge of trafficked and/or exploited children in the legacy of the ‘White Slavery Conventions’.
De Reding De Bibberegg, Delegate of the International Red Cross Committee and the International ‘Save the Children’ Fund in Greece. League of Nations, Advisory Committee on the Traffic in Women and Children, Minutes of the Second Session, Geneva, March 22nd – 27th 1923, at 65
 The ‘White Slavery Conventions’ namely the International Agreement for the Suppression of White Slave Traffic 1904, the International Convention for the Suppression of the White Slave Traffic 1910, the International Convention for the Suppression of Traffic in Women and Children 1921 and the International Convention for the Suppression of the Traffic in Women of Full Age 1933
by Kerstin Enflo (Lund University), Anna Missiaia (Lund University) and Joan Rosés (LSE)
This research will be presented during the EHS Annual Conference in Belfast, April 5th – 7th 2019. Conference registration can be found on the EHS website.
Rapid urbanisation is a phenomenon often associated with the image of African or Asian mega-cities, but migration from rural to urban areas is also a European phenomenon (see the growth experienced by large capitals such as London and Paris, but also smaller ones such as Stockholm and Copenhagen). And according to United Nations forecasts, the urbanisation trend will continue, with an estimated 2.5 billion people added to the world’s urban population by 2050.
The first question that comes to mind is whether urbanisation triggers economic growth, and therefore should be favoured by policy-makers, as suggested by eminent scholars such as Ed Glaeser (The Triumph of the City: How Our Best Invention Makes Us Richer, Smarter, Greener, Healthier, and Happier, 2011) or Richard Florida (The Rise of the Creative Class, 2002).
Although this relationship is overall positive, the paradigm has been challenged with respect to African mega-cities: their urbanisation rate takes off in periods of growth but it does not immediately decrease in periods of recession. As cities continue to grow in size, but fail to grow in GDP per capita, their inhabitants experience falling income levels, ultimately leading to falling living standards (Fay and Opal, 2000).
If we look at Europe, urbanisation without growth does not appear to be an issue when countries are the units of analysis. But national figures could be driven by the success stories of the large capitals, concealing the less clear fortunes of middle-sized declining cities. Net of the big successful capitals, many cities that thrived during the post-war period are now struggling, with clear economic, social and political consequences.
Our work, to be presented at the Economic History Society’s 2019 annual conference, contributes to the debate by looking at this relationship for the first time at the regional, rather than national level, using urbanisation rates and GDP per capita in EU regions in the twentieth century. The regional dimension makes it possible to disentangle the effects of urbanisation from the effects of being the capital’s region.
Our main findings are that the relationship between urbanisation and growth is positive and significant until the middle of the twentieth century, while it is not significant in recent years. We therefore observe a progressive decoupling of regional urbanisation and economic growth. The effect on growth of the presence of the capital in the region is very large: between 60% and 70% of that of urbanisation until the mid-twentieth century.
When looking at macro areas, both Southern Europe and Northern Europe show no statistically significant relationship between urbanisation and economic growth, suggesting that regions containing urban areas without the status of capital do not necessarily grow more than regions without such urban areas.
This is consistent with the idea of a ‘winner-take-all urbanism’ presented by Florida (The New Urban Crisis, 2017) in which there is a growing divide between the winner cities (London, New York, Paris, San Francisco) and the rest.
In the winner cities, the middle class, the service class and the working class are priced out by highly paid creative workers. In the rest of the cities, where creative workers are not based, the middle class declines without being replaced by the new rich.
Our results are relevant for policy-makers as they challenge the view that urbanisation per se is a strong channel for economic growth regardless of the period and geographical area considered.
by Joerg Baten (University of Tübingen) and Alexandra de Pleijt (University of Oxford)
What are the crucial ingredients for success or failure of economies in long-term perspective? Is female autonomy one of the critical factors?
A number of development economists have found that gender inequality was associated with slower development (Sen, 1990; Klasen and Lamanna, 2009; Gruen and Klasen, 2008). This resulted in development policies targeted specifically at women. In 2005, for example, the United Nations Secretary General Kofi Annan stated that gender equality is a prerequisite for eliminating poverty, reducing infant mortality and reaching universal education (United Nations, 2005).
In recent years, however, a number of doubts have been raised by development economists. Duflo (2012) suggests that there is no automatic effect of gender equality on poverty reduction, citing a number of studies. The causal direction from poverty to gender inequality might be at least as strong as the opposite direction, according to this view.
For an assessment of the direction of causality in long-term perspective, consistent data had not been available until now. Due to this lack of evidence, the link between female autonomy and human capital formation in early modern Europe has not yet been formally tested in a dynamic model (for Eastern Europe, see Baten et al, 2017; and see de Pleijt et al, 2016, for a cross-section).
De Moor and van Zanden (2010) have put forward the hypothesis that female autonomy had a strong influence on European history, basing their argument on a historical description of labour markets and the legacy of medieval institutions. They argue that female marriage ages, among other components of demographic behaviour, might have been a crucial factor for early development in northwestern European countries (for a critique, especially on endogeneity issues, see Dennison and Ogilvie 2014 and 2016; reply: Carmichael et al, 2016).
In a similar vein, Diebolt and Perrin (2013) argue, theoretically, that gender inequality retarded modern economic growth in many countries.
In a new study, to be presented at the Economic History Society’s 2019 annual conference, we directly assess the growth effects of female autonomy in a dynamic historical context.
Given the obviously crucial role of endogeneity issues in this debate, we carefully consider the causal nature of the relationship. More specifically, we exploit relatively exogenous variation of (migration adjusted) lactose tolerance and pasture suitability as instrumental variables for female autonomy.
The idea is that a high lactose tolerance increased the demand for dairy farming, whereas similarly, a high share of land suitable for pasture farming allowed more supply. In dairy farming, women traditionally had a strong role; this allowed them to participate substantially in income generation during the late medieval and early modern period (Voigtländer and Voth, 2013).
In contrast, female participation was limited in grain farming, as it requires substantial upper-body strength (Alesina et al, 2013). Hence, the genetic factor of lactose tolerance and pasture suitability influences long-term differences in gender-specific agricultural specialisation.
In instrumental variable regressions, we show that the relationship between female autonomy (age at marriage) and human capital (numeracy) is likely to be causal. More specifically, we use two different datasets: the first is a panel dataset of European countries from 1500 to 1850, which covers a long time horizon.
Second, we study 268 regions in Europe, stretching from the Ural Mountains in the east to Spain in the southwest and the UK in the northwest. Our results are robust to the inclusion of a large number of control variables and different specifications of the model.
In sum, our empirical results suggest that economies with more female autonomy became (or remained) superstars in economic development. The female part of the population needed to contribute to overall human capital formation and prosperity, otherwise the competition with other economies was lost.
Institutions that excluded women from developing human capital – such as being married early, and hence, often dropping out of independent, skill-demanding economic activities – prevented many economies from being successful in human history.
Alesina, A, P Giuliano and N Nunn (2013) ‘On the Origins of Gender Roles: Women and the Plough’, Quarterly Journal of Economics 128(2): 469-530.
Baten, J, and AM de Pleijt (2018) ‘Girl Power Generates Superstars in Long-term Development: Female Autonomy and Human Capital Formation in Early Modern Europe’, CEPR Working Paper.
Baten, J, M Szoltysek and M Camestrini (2017) ‘“Girl Power” in Eastern Europe? The Human Capital Development of Central-Eastern and Eastern Europe in the Seventeenth to Nineteenth Century and its Determinants’, European Review of Economic History 21(1): 29-63.
Carmichael, SG, AM de Pleijt, JL van Zanden and T de Moor (2016) ‘The European Marriage Pattern and its Measurement’, Journal of Economic History 76(1): 196-204.
Carmichael, SG, S Dilli and A Rijpma (2014) ‘Gender Inequality since 1820’, in How Was Life? Global Well-being since 1820 edited by JL van Zanden, J Baten, M Mira d’Hercole, A Rijpma, C Smith and M Timmer, OECD.
De Moor, T, and JL van Zanden (2010) ‘Girl Power: The European Marriage Pattern and Labour Markets in the North Sea Region in the Late Medieval and Early Modern Period’, Economic History Review 63(1): 1-33.
De Pleijt, AM, JL van Zanden and SG Carmichael (2016) ‘Gender Relations and Economic Development: Hypotheses about the Reversal of Fortune in EurAsia’, Centre for Global Economic History (CGEH) Working Paper Series No. 79
Dennison, T, and S Ogilvie (2014) ‘Does the European Marriage Pattern Explain Economic Growth?’, Journal of Economic History 74(3): 651-93.
Dennison, T, and S Ogilvie (2016) ‘Institutions, Demography and Economic Growth’, Journal of Economic History 76(1): 205-17.
Diebolt, C, and F Perrin (2013) ‘From Stagnation to Sustained Growth: The Role of Female Empowerment’, American Economic Review: Papers and Proceedings 103: 545-49.
Duflo, E (2012) ‘Women Empowerment and Economic Development’, Journal of Economic Literature 50(4): 1051-79.
Gruen, C, and S Klasen (2008) ‘Growth, Inequality, and Welfare: Comparisons across Space and Time’, Oxford Economic Papers 60: 212-36.
Hanushek, EA, and L Woessmann (2012) ‘Do Better Schools Lead to More Growth? Cognitive Skills, Economic Outcomes, and Causation’, Journal of Economic Growth 17(4): 267-321.
Kelly, M, J Mokyr and C Ó Gráda (2013) ‘Precocious Albion: A New Interpretation of the British Industrial Revolution’, UCD Centre for Economic Research Working Paper Series No. 13/11.
Klasen, S, and F Lamanna (2009) ‘The Impact of Gender Inequality in Education and Employment on Economic Growth: New Evidence for a Panel of Countries’, Feminist Economics 15(3): 91-132.
Robinson, JA (2009) ‘Botswana as a Role Model for Country Success’, UNU WIDER Research Paper No. 2009/40.
Sen, A (1990) ‘More than 100 million women are missing’, New York Review of Books, 20 December: 61-66.
United Nations (2005) Progress towards the Millennium Development Goals, 1990-2005, Secretary-General’s Millennium Development Goals Report.
Voigtländer, N, and H-J Voth (2013) ‘How the West ‘Invented’ Fertility Restriction’, American Economic Review 103(6): 2227-64.
Britain’s unusually high house price to income ratio plays an important role in reducing living standards and increasing “housing poverty”. This article shows that Britain’s housing shortage partly stems from deliberate long-term government policies aimed at restricting both public and private sector house-building. From the 1950s to the early 1980s, successive governments reduced housing starts as part of ‘stop-go’ macroeconomic policy, with major cumulative impacts.
This policy had its roots in the Second World War, when an influential coalition of Bank of England and Treasury officials pressed for a post-war policy of savage deflation, to restore sterling’s credibility and re-establish London as a major financial centre. John Maynard Keynes warned that prioritising international ‘obligations’ over the war-time commitment to build a fairer society would be repeating the 1920s gold standard error – though his direct influence ended with his untimely death. Deflationary policy proved politically impracticable in the short term, as evidenced by Labour’s 1945 landslide election victory, though its supporters bided their time and were able to implement much of their agenda in the changed political climate of the 1950s.
The Conservatives’ 1951 election victory was based on a pledge to build 300,000 new homes per year. This was achieved in 1953 and building peaked at 340,000 completions in 1954. However, officials took advantage of the 1955-57 credit squeeze to press for severe cuts in housing investment. Municipal house-building was cut, while private house-building was depressed largely through restricting the growth of building society funds (by pressurising the building societies’ cartel to keep interest rates at such low levels that they were starved of mortgage funds). While the severity of policy varied over time, these restrictions were maintained almost continually until the early 1980s.
These restrictions were never formally announced and were hidden from Cabinet for much of this period. Meanwhile, given the political importance of housing, the Conservative government simultaneously proposed ever-larger housing targets (culminating in a 1964 election pledge to build 400,000 per annum). This created a perverse situation, whereby the government was spending substantial sums on highly publicised policies to increase demand for private housing (such as the 1959 House Purchase and Housing Act and the 1963 abolition of Schedule A income tax), while covertly reducing housing supply through restricting mortgage funding, limiting building firms’ access to credit, and reducing municipal housing investment. The following Labour government found itself drawn into a similarly restrictive housing policy, as part of its ill-fated commitment to avoid sterling devaluation (arguably based on misleading Treasury advice), while housing restrictions were also used as an instrument of macroeconomic stabilisation in the 1970s.
A 1974 Bank of England analysis found that this policy had created both an exaggerated housing cycle and a structural deficit (with house-building being held below market-clearing levels at all points in the cycle). This had in turn reduced the capacity of the housing market to respond to rising demand, by reducing builders’ land banks, building materials capacity, and building labour, which raised house-prices while lowering productivity and technical progress. There is also evidence of “learning effects” by house-builders, who avoided expanding their activities during cyclical upturns, as they correctly perceived that tighter government restrictions might be imposed before their houses were ready to sell. These pressures fuelled house price inflation, both directly, and because housing became increasingly regarded as a hedge against inflation.
Figure 1: Capital formation in dwellings, as percentage of total capital formation, and housing completions per thousand families, private houses and all houses, 1924-38 and 1954-79
British house-building during this era compared unfavourably to inter-war levels, as shown in Figure 1. Moreover, private house-building was even more depressed than total housing – as the Treasury found it easier to covertly restrict private housing than to reduce municipal building starts, where policy was more open to Cabinet and public scrutiny. British gross domestic fixed capital investment in housing was also very low relative to other European nations. Our time-series econometric analysis for 1955-1979 corroborates the ‘success’ of the restrictions and also shows the predicted asymmetric impact in ‘stop’ and ‘go’ phases of policy. This is an important finding, since stop-go policy is often examined simply in terms of the volatility of the variable under examination, on the unrealistic assumption that industry would fail to realise that demand upturns might be rapidly terminated by the re-imposition of controls.
Housing restriction policy has persisting consequences. Additions to the housing stock were depressed for several decades, while the inflationary-hedge benefits for house-purchase became a self-fulfilling prophecy. Meanwhile restrictive planning policy (which was substantially intensified in the 1950s, as a further measure of housing restriction) has proved difficult to reverse. Average house-prices to income ratios have thus continued the upward trend established in this era, currently excluding a substantial and growing proportion of the population from owner-occupation.
Since the Victorian period, it has been commonly assumed that inventors were rarely remunerated for their inventions. To contemporaries they were ‘the miserable victim of [their] own powerful genius’, ‘Martyrs of Science’ who worked ‘alone, unfriended, solitary’, while ‘the recorded instances of the[ir] martyrdom would be a task of enormous magnitude’. Prominent examples of important inventors from the industrial revolution period who had the misfortune to die in penury (the steam engineer Richard Trevithick, for example) have meant that this view has passed into the modern literature almost without scrutiny.
This assumption, though, is significant, as it directly informs how we might explain probably ‘the’ big problem in economic history: what were the origins of the industrial revolution, and concomitantly, of modern economic growth. In particular, if inventors did usually fail to obtain financial rewards, this precludes potential explanations of the industrial revolution that invoke incentives to explain the actions of those who invented and commercialised the new technology industrialisation required. It also precludes the applicability of endogenous growth theory to the industrial revolution (theory which has earnt two of its progenitors 2018 Nobel prizes) as it assumes that profit incentives determine the amount of inventive activity that occurs.
In an attempt to determine the wealth of inventors, I have collected probate data for over 700 inventors born in Britain between 1660 and 1830, from a list first compiled by Ralf Meisenzahl and Joel Mokyr. This probate data indicates that inventors were in fact extremely wealthy. For instance, in one exercise, I compared the probated wealth of 422 inventors who died between 1800 and 1870, with that of the overall adult male population.
Table 1. Probated wealth of inventors, 1800-1870
[Table omitted: distribution of probated wealth, by category from ‘<£200 or no will’ upwards, for inventors and for the adult male population in 1839-1841 and in 1858.]
Notes: For details on how the distribution of male probated wealth was estimated for 1839-41 and 1858, please refer to the appendix in the original article published in the Economic History Review.
The table above shows that approximately 5 to 6 percent of adult males who died in 1839-41 and 1858 (years for which these figures can be collated) left behind wealth probated in excess of £1,000. The equivalent figure for inventors was over 60 percent. The disparity only increases as we move up through the wealth categories. Whereas only 0.16 percent of adult males left behind wealth probated in excess of £50,000 in 1858 (one in 650), for inventors it was 14.2 percent (one in 7).
It does not, however, automatically follow that the wealth of inventors was actually derived from their inventions. These were presumably talented individuals and their income may have been accrued over the course of a ‘normal’ business career and/or inherited. Unfortunately, this is a prohibitively difficult subject to approach directly: accounts rarely survive for these inventors and in any case, it is doubtful whether income from an invention could be neatly distinguished from ‘normal’ business income. As an indirect approach, I have also collected probate information for the brothers of inventors. Brothers are an especially apposite group for comparison: they would have enjoyed a very similar inheritance to their brothers (although inheriting financial capital appears to have mattered less than inheriting social capital) and they tended to enter similar occupations to their (inventive) brothers. Indeed, 24 of the inventors in the entire dataset were related as brothers – the talents and opportunities required to become an inventor were clearly not evenly distributed among the adult male population.
For 143 of the 422 inventors discussed in Table 1, it was possible to confirm the existence of at least one brother who reached the age of 25 and died in Britain between 1800 and 1870 (253 brothers in total). In the table below, the top row divides these 143 inventors into the same wealth categories as those used in Table 1, with the number in parentheses denoting how many of the 143 inventors fall in each category. The columns beneath then show the distribution of the wealth of their brothers. So, there are 25 inventors in this exercise whose estate was worth less than £200. Of their 45 brothers, 31 also left behind less than £200. Three had probated wealth between £200 and £1,000, nine between £1,000 and £10,000 and two between £10,000 and £50,000. None left behind more than £50,000.
Table 2. Brothers’ probates, 1800-1870
[Table omitted: cross-tabulation of the 143 inventors’ wealth categories (number of inventors in each category in parentheses) against the distribution of their 253 brothers’ probated wealth.]
Notes: as Table 1
Overall, if inventors were wealthier than their brothers, then the latter should be concentrated at the top and to the right of the table, and away from the bottom left corner. Clearly, they are – overwhelmingly so when one considers how important simple happenstance can be in influencing an individual’s financial success over the course of their career.
Previous work has relied on impressionistic evidence to suggest that inventors in this period rarely obtained financial rewards commensurate with their technical achievements. Probate information, though, shows that inventors were extremely wealthy relative to the adult male population. Inventors were also significantly wealthier than another group who would have received a similar inheritance (in terms of both financial and social capital) and entered similar occupations: their brothers. Their additional wealth was derived from inventive activities: invention paid.
Is the euro area sustainable in its current membership form? My research provides new lessons from past examples of monetary integration, looking at the monetary unification of Italy and Germany in the second half of the nineteenth century.
Currency areas’ optimal membership has recently been at the forefront of the policy debate, as the original choice of letting peripheral countries join the euro was widely blamed for the common currency existential crisis. Academic work on ‘optimum currency areas’ (OCA) traditionally warned against the risk of adopting a ‘one size fits all’ monetary policy for regions with differing business cycles.
Krugman (1993) even argued that monetary unification in itself might increase its own costs over time, as regions are encouraged to specialise and thus become more different to one another. But those concerns were dismissed by Frankel and Rose’s (1998) influential ‘OCA endogeneity’ theory: once regions with ex-ante diverging paths join a common currency, they will see their business cycle synchronise progressively ex-post.
My findings question the consensus view in favour of ‘OCA endogeneity’ and raise the issue of the adverse effects of monetary integration on regional inequality. I argue that the Italian monetary unification played a role in the emergence of the regional divide between Italy’s Northern and Southern regions by the turn of the twentieth century.
I find that pre-unification Italian regions experienced largely asymmetric shocks, pointing to high economic costs stemming from the 1862 Italian monetary unification. While money markets in Northern Italy were synchronised with the core of the European monetary system, Southern Italian regions tended to move together with the European periphery.
The Italian unification is an exception in this respect, as I show that other major monetary arrangements in this period, particularly the German monetary union but also the Latin Monetary Convention and the Gold Standard, occurred among regions experiencing high shock synchronisation.
Contrary to what ‘OCA endogeneity’ would imply, shock asymmetry among Italian regions actually increased following monetary unification. I estimate that pairs of Italian provinces that came to be integrated following unification became, over four decades, up to 15% more dissimilar to one another in their economic structure compared to pairs of provinces that already belonged to the same monetary union. This means that, in line with Krugman’s pessimistic take on currency areas, economic integration in itself increased the likelihood of asymmetric shocks.
In this respect, the global grain crisis of the 1880s, disproportionately affecting the agricultural South while Italy pursued a restrictive monetary policy, might have laid the foundations for the Italian ‘Southern Question’. As pointed out by Krugman, asymmetric shocks in a currency area with low transaction costs can lead to a permanent loss in regional income, as prices are unable to adjust fast enough to prevent factors of production from permanently leaving the affected region.
The policy implications of this research are twofold.
First, the results caution against the prevalent view that cyclical symmetry within a currency area is bound to improve by itself over time. In particular, the role of specialisation and factor mobility in driving cyclical divergence needs to be reassessed. As the euro area moves towards more integration, additional specialisation of its regions could further magnify – by increasing the likelihood of asymmetric shocks – the challenges posed by the ‘one size fits all’ policy of the European Central Bank on the periphery.
Second, the Italian experience of monetary unification underlines how the sustainability of currency areas is chiefly related to political will rather than economic costs. Despite the fact that the Italian monetary union was sub-optimal from the start and to a large extent remained so, it has managed to survive unscathed for the last century and a half. While the OCA framework is a good predictor of currency areas’ membership and economic performance, their sustainability is likely to be a matter of political integration.
The history of transatlantic slavery is one of the most active and fruitful fields of international historical research, and an important lesson of the latest work on maritime countries like Britain and France is that the profits of slavery, and indeed of abolition, ‘trickled down’ to very wide sections of the population and to places well away from the principal slave-trading ports. Recently, historians have started to look beyond the familiar Atlantic axis and to apply the same paradigm to the European hinterlands of the triangular trade. That is, they have sought its traces and impacts in territories that were not directly involved (or were relatively minor participants) in the traffic in Africans: the German-speaking countries, Scandinavia, Italy and Central Europe. And they are finding that the slave trade, the plantation economies that it fed, the consequences of its abolition, and not least the questions of moral and political principle that it threw up, were very much a part of the texture of society right across Europe.
In material terms, it is clear that the manufacture of trade goods – the wares with which Europeans paid African traders for the enslaved men, women and children whom they then shipped to the Americas – was an important element of many regional economies. Firearms, iron bars and ironware travelled from Denmark and the Baltic to Western Europe’s slaving ports. Glass beads were exported from Bohemia (the Czech lands), and the higher quality Venetian products attracted Liverpool merchants to set up branch offices in Italy to secure their supply. The Swiss family firm Burckhardt/Bourcard began by supplying cotton cloth for the slave trade and importing slave-produced luxury goods and moved into equipping its own slaving ships. Textile plants in the Wupper Valley in Western Germany and the hand looms of Eastern Prussia provided linens of varying quality for use on the slave plantations, though because they were shipped through English and Dutch ports their German origins have often been obscured. And the trading networks established in the context of the slave economy supported German exporting projects even after the trade was abolished, as German firms continued to trade into territories – Brazil and the Caribbean – where slavery persisted until the late 19th century.
Germans in particular were keen observers of the Atlantic slave economy, and they had their own perspective on international debates about the trade and its abolition. At the beginning of the trade, the rulers of Brandenburg Prussia had some hopes of buying into it, establishing a slave fort on the Gold Coast between 1682 and 1720. One of the key documents of this episode is the diary of a ship’s barber, Johann Peter Oettinger, who sailed on slaving expeditions. He chose to make no comment about the brutalities that he witnessed and recorded. Characteristically, though, when the diaries were published for German readers 200 years later, they were given a moralising spin; by the 1880s, Germany was at the forefront of the Scramble for Africa, justifying colonisation in the name of suppressing the internal slave trade. Before that, and once the German states were no longer involved in the slave trade, German-speaking scientists and administrators placed themselves in the service of those states that were: Ernst Schimmelmann, whose family had one foot in Hamburg and one in Copenhagen, was a plantation owner and manager of the Swedish state slaving company, but also responsible for the abolition of the Danish slave trade in 1792. And initiatives for the post-abolition exploitation of tropical territories relied on the work of German scientists in service to the Danish state like the botanist Julius von Rohr.
Scholarly attention to the German case is also bringing the Atlantic plantation economies into dialogue with the practices of unfree labour that existed in Central Europe at the same time. Analysis of the conditions of linen production on eastern Prussia’s aristocratic estates indicates that their low production costs helped to keep down the costs of production on slave plantations. And when Germans confronted the moral and legal challenges to slavery that were crystallising into a political movement in Britain and France by the 1790s, they could not escape the implications of abolitionist arguments for the future of their own ‘peculiar institutions’ of serfdom and personal service. This was true of Theresa Huber, the author and journalist who stands for two generations of Germans who engaged in transnational abolitionist networks, and who was equally sharp in her critique of serfdom. And it was true of Prussian administrators who, when challenged by enslaved Africans on German soil to enforce the notion that ‘there are no slaves in Prussia’, could not help asking themselves what that might mean for the process towards reform of feudal institutions.
These issues have only begun to receive greater attention – more studies are needed to gain a clearer understanding of the various links through which continental Europe was connected to the Transatlantic slave business and its abolition.
While malaria historically claimed millions of African lives, it did not hold back the continent’s economic development. That is one of the findings of new research by Emilio Depetris-Chauvin (Pontificia Universidad Católica de Chile) and David Weil (Brown University), published in the Economic Journal.
Their study uses data on the prevalence of the gene that causes sickle cell disease to estimate death rates from malaria for the period before the Second World War. They find that in parts of Africa with high malaria transmission, one in ten children died from malaria or sickle cell disease before reaching adulthood – a death rate more than twice the current burden of malaria in these regions.
According to the World Health Organization, the malaria mortality rate declined by 29% between 2010 and 2015. This was a major public health accomplishment, although with 429,000 annual deaths, the disease remains a terrible scourge.
Countries where malaria is endemic are also, on average, very poor. This correlation has led economists to speculate about whether malaria is a driver of poverty. But addressing that issue is difficult because of a lack of data. Poverty in the tropics has long historical roots, and while there are good data on malaria prevalence in the period since the Second World War, there is no World Malaria Report for 1900, 1800 or 1700.
Biologists only came to understand the nature of malaria in the late nineteenth century. Even today, trained medical personnel have trouble distinguishing between malaria and other diseases without the use of microscopy or diagnostic tests. Accounts from travellers and other historical records provide some evidence of the impact of malaria going back millennia, but these are hardly sufficient to draw firm conclusions (Akyeampong 2006; Mabogunje and Richards 1985).
This study addresses the lack of information on malaria’s impact historically by using genetic data. In the worst afflicted areas, malaria left an imprint on the human genome that can be read today.
Specifically, the researchers look at the prevalence of the gene that causes sickle cell disease. Carrying one copy of this gene provided individuals with a significant level of protection against malaria, but people who carried two copies of the gene died before reaching reproductive age.
Thus, the degree of selective pressure exerted by malaria determined the equilibrium prevalence of the gene in the population. By measuring the prevalence of the gene in modern populations, it is possible to back out estimates of the severity of malaria historically.
In areas of high malaria transmission, 20% of the population carries the sickle cell trait. The researchers estimate that this implies that, historically, 10-11% of children died from malaria or sickle cell disease before reaching adulthood, a death rate more than twice the current burden of malaria in these regions.
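The back-of-the-envelope logic behind this inference can be sketched in a few lines. The following is a simplified illustration, assuming Hardy-Weinberg proportions and treating sickle cell disease as fully lethal in childhood; the function name and the exact fitness assumptions are ours, not the authors’ estimation procedure:

```python
# Back out historical malaria mortality from the modern prevalence of the
# sickle cell trait, using a textbook balanced-polymorphism model.
# Simplified sketch: fitness of SS homozygotes is 0 (sickle cell disease),
# AS carriers are protected, AA non-carriers face malaria mortality s.

def malaria_burden_from_trait_share(carrier_share):
    """Given the share of the population carrying one copy of the sickle
    cell gene (heterozygotes, 2pq under Hardy-Weinberg), return the implied
    childhood death rate from malaria plus sickle cell disease at the
    selection equilibrium."""
    # Solve 2*q*(1-q) = carrier_share for the S-allele frequency q (q < 0.5).
    q = (1 - (1 - 2 * carrier_share) ** 0.5) / 2
    p = 1 - q
    # At the balanced-polymorphism equilibrium, q = s / (1 + s),
    # so the malaria death rate among non-carriers is s = q / (1 - q).
    s = q / (1 - q)
    malaria_deaths = p * p * s   # AA children dying of malaria
    sickle_deaths = q * q        # SS children dying of sickle cell disease
    return malaria_deaths + sickle_deaths

# With 20% of the population carrying the trait, as in high-transmission areas:
print(round(malaria_burden_from_trait_share(0.20), 3))  # → 0.113
```

Under these stylised assumptions, a 20% carrier share implies that roughly 11% of children died of malaria or sickle cell disease, in line with the 10-11% figure reported above.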
Comparing the most affected areas with those least affected, malaria may have been responsible for a ten percentage point difference in the probability of surviving to adulthood. In areas of high malaria transmission, the researchers estimate that life expectancy at birth was reduced by approximately five years.
Having established the magnitude of malaria’s mortality burden, the researchers then turn to its economic effects. Surprisingly, they find little reason to believe that malaria held back development. A simple life cycle model suggests that the disease was not very important, primarily because the vast majority of deaths that it caused were among the very young, in whom society had invested few resources.
This model-based finding is corroborated by the findings of a statistical examination. Within Africa, areas with higher malaria burden, as evidenced by the prevalence of the sickle cell trait, do not show lower levels of economic development or population density in the colonial era data examined in this study.
To contact the authors: David Weil, firstname.lastname@example.org
by Felix Meier zu Selhausen (University of Sussex), Marco H. D. Van Leeuwen (Utrecht University) and Jacob L. Weisdorf (University of Southern Denmark, CAGE, CEPR)
The arrival of Christian missionaries and the receptivity of African societies to formal education prompted a genuine schooling revolution during the colonial era. The bulk of primary education in the British colonies was provided by mission schools (Frankema 2012), and their historical distribution had a long-run effect on African development (e.g. Nunn 2010). To those with access, formal education under colonial rule provided new venues of political influence and opportunities for social mobility. However, did mission schooling benefit a broad layer of the African population, or did it merely strengthen the power of pre-colonial elites? This paper addresses this question by investigating social mobility of Christian converts in colonial Uganda.
The existing literature has conveyed two opposing arguments, based mainly on qualitative sources. On the one hand, scholars have stressed that British colonial officials discouraged post-primary education of the general African population, fearing that such education would nurture anti-colonial sentiments. As a result, the benefits of mission schooling are purported to have been restricted to sons of traditional chiefs and newly empowered elites, who aligned themselves with the British administration and took up the lion’s share of urban skilled occupations (Hanson 2003, Reid 2017). Such dynamics perpetuated the power of chiefs into the post-colonial era and contributed to a legacy of ‘decentralized despotism’ (Mamdani 1996). On the other hand, other studies have argued that mission schools became ‘colonial Africa’s chief generator of social mobility and stratification’, acting as a stepping stone to urban middle-class careers for a new generation of Africans (Iliffe 2007, p. 229).
This article explores intergenerational social mobility and colonial elite formation using the occupational titles of African grooms and their fathers who married in the prestigious Anglican Namirembe Cathedral in Kampala or in several rural parishes in Western Uganda between 1895 and 2011. The fact that sampled grooms celebrated an Anglican church marriage meant they were born to parents who, by their choice of religion and compliance with the by-laws of the Anglican Church, had positioned their offspring in a social network that afforded them a wide range of educational and occupational opportunities (Peterson 2016). This unique sample allows us to explore the impact of missionary schooling on the social mobility of converts between generations and uncover implications for colonial elite formation.
Social mobility in Kampala
To measure social mobility, we have grouped each occupation of 14,167 sampled Anglican father-son pairs into a hierarchical scheme of 6 social classes based on skill levels using HISCLASS (Van Leeuwen and Maas 2011). As shown in Figure 1, we find that the occupational mobility of sampled grooms expanded dramatically during the colonial era. At the onset of British rule (1890-99), Buganda society was comparatively immobile, with three out of four sons remaining in the social class of their fathers. But by the 1910s, this had reversed, with three out of four sons moving to a different class. Careers in the colonial administration (chiefs, clerks) and the Anglican mission (teachers, priests) functioned as key steps on the ladder to upward mobility.
Figure 1: Social mobility among Anglican grooms in Kampala, 1895-2011
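In code, the class-based mobility measure described above amounts to little more than counting father-son pairs whose classes differ. A minimal sketch on a toy sample (the pairs and six-class coding below are illustrative assumptions, not the marriage-register data):

```python
# Sketch of the HISCLASS-style mobility measures: fathers and sons are each
# coded into one of 6 social classes (1 = highest, e.g. chiefs; 6 = lowest),
# and mobility is the share of sons outside their father's class.
from collections import Counter

# Toy sample of (father's class, son's class) pairs, for illustration only.
pairs = [(4, 2), (4, 4), (1, 1), (4, 3), (6, 4), (4, 2), (1, 2), (5, 5)]

def mobility_rate(pairs):
    """Share of sons who ended up in a different class than their father."""
    moved = sum(1 for father, son in pairs if son != father)
    return moved / len(pairs)

def outflow_rates(pairs, origin):
    """Distribution of sons' destination classes for a given origin class,
    the quantity tabulated in an outflow-mobility table like Table 1."""
    sons = [son for father, son in pairs if father == origin]
    counts = Counter(sons)
    return {cls: n / len(sons) for cls, n in sorted(counts.items())}

print(mobility_rate(pairs))            # → 0.625
print(outflow_rates(pairs, origin=4))  # → {2: 0.5, 3: 0.25, 4: 0.25}
```

The same outflow calculation, applied to the real father-son pairs, underlies the rates reported in Table 1 below.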
What was the social background of those reaching the highest occupational classes? Table 1 zooms in on grooms’ social-class destination relative to their social origin during the colonial era. It shows that African converts, benefiting from new occupational opportunities opening up during the colonial period, were able to take large steps up the social ladder regardless of their social origin. A remarkable 45% of sons from farming family backgrounds (class IV) moved into white-collar work, which indicates that the colonial labour market was surprisingly conducive to social mobility among Anglican converts.
Table 1: Outflow mobility rates in Kampala, 1895-1962
Did chiefs and their sons benefit disproportionately from occupational diversification under colonialism? Under indirect British rule, many traditional Baganda chiefs converted to Anglicanism and became colonial officials, employed to extract taxes and profits from cash-cropping farmers. This put them in a supreme position for consolidating their pre-colonial societal power. Despite such advantages, our microdata suggest that the privileged position of pre-colonial elites was not sustained over the colonial period. Figure 2 shows the probabilities that sons of chiefs (class I) and sons of farmers and lower-class labourers (classes IV-VI) entered an elite position (class I). At the beginning of the colonial era, sons of chiefs were significantly more likely to reach the top of the social ladder. However, a remarkably fluid colonial labour market, based on meritocratic principles, gradually eroded their economic and political advantages. Towards the end of the colonial era, traditional claims to status no longer conferred automatic advantages upon the sons of chiefs, who lost their high social-status monopoly to a new Christian-educated and commercially orientated class of Ugandans from farming backgrounds (Hanson 2003).
Figure 2: Conditional probability of sons of chiefs and farmers in class I, Kampala
Frankema, E. (2012). ‘The origins of formal education in sub-Saharan Africa: was British rule more benign?’ European Review of Economic History 16(4): 335-55.
Hanson, E. (2003). Landed Obligation: The Practice of Power in Buganda. Portsmouth, NH: Heinemann.
Mamdani, M. (1996). Citizen and Subject: Contemporary Africa and the Legacy of Late Colonialism. Princeton: Princeton University Press.
Meier zu Selhausen, F., van Leeuwen, Marco H.D. and Weisdorf, J. (2018). ‘Social mobility among Christian Africans: Evidence from Anglican marriage registers in Uganda, 1895-2011’. Economic History Review, forthcoming.
Nunn, N. (2010). ‘Religious Conversion in Colonial Africa’. American Economic Review: Papers and Proceedings 100(2): 147-52.
Peterson, D. (2016). ‘The Politics of Transcendence in Colonial Uganda’. Past and Present 230(1): 197-225.
Reid, R. J. (2017). A History of Modern Uganda. Cambridge: Cambridge University Press.
Van Leeuwen, M.H.D. and Maas, I. (2011). HISCLASS – A Historical International Social Class Scheme. Leuven: Leuven University Press.
by Joan Rosés (LSE) and Nikolaus Wolf (Humboldt University)
A recent literature has explored growing personal wealth inequality in countries around the world. This column explores the widening wealth gap between regions and across states in Europe. Using data going back to 1900, it shows that regional convergence ended around 1980 and the gap has been growing since then, with capital regions and declining industrial regions at the two extremes. This rise in regional inequality, combined with rising personal inequality, has played a significant role in the recent populist backlash.