Stealing for the market: the illegitimacy of enslavement in the early modern Atlantic world

by Judith Spicksley (Wilberforce Institute, University of Hull)

This research will be presented during the EHS Annual Conference in Belfast, April 5th – 7th 2019. Conference registration can be found on the EHS website.

The Slave Trade (Slaves on the West Coast of Africa), by Auguste-François Biard, 1840. Available at Wikimedia Commons.

Some forms of enslavement were understood to be illegitimate long before anti-slavery activists called for the abolition of the slave trade in the eighteenth century. Slavery is now prohibited in international law, but at some point in the past it was a legal institution in the vast majority of societies.

A range of legal methods were used to enslave people, of which the most common were birth, capture in war, judicial punishment, debt, and poverty. But there was another method of enslavement that historians include in their lists: the kidnapping and theft of persons for sale on the market.

These practices were never considered acceptable forms of enslavement. The earliest surviving law codes, from Old Babylonia in the second millennium BCE to Israel in the first, prescribe punishments for the theft of a person, a crime that attracted the death penalty.

But demand for slaves created opportunities for traders to sell those they had stolen as if they were slaves proper, and increase their wealth in the process. These cases of illegal enslavement ran alongside bona fide sales throughout the period in which slavery was legitimate.

Examples include the activities of Cilician-based pirates in the eastern Mediterranean in the late Roman Republic and early Empire, and the violent sourcing of labour in Africa for the American plantations in the early modern Atlantic world. But it was the raiding bands that scoured the Slav lands of Eastern Europe for captives in the high medieval period that encouraged an understanding of the meaning of slavery as illegal in the west.

The term ‘slave’ appeared in English, and in the languages of Western Europe more generally, from the late medieval period via the ethnonym Slav. This was the name given to members of the Slavic peoples living in Eastern Europe, whose communities were frequently raided for persons who could be sold as slaves.

But the term ‘slavery’ did not enter the English language until the mid-sixteenth century. At that point, it was applied as a metaphor for the tyranny of Catholicism, as the development of Protestantism created a major religious schism.

The term ‘slavery’ was also applied to the activities of the earliest English slave traders. During his first voyage in 1562, John Hawkins is reputed to have violently captured around 400 Africans in Guinea, whom he later sold in the West Indies. He repeated these activities over the next five years with the support of Queen Elizabeth.

Hawkins was following in the footsteps of other Europeans, most notably Lançarote de Freitas, the Portuguese explorer, who is recognised as having set the transatlantic slave trade in motion. De Freitas returned from North Africa to Lagos in 1444 with a cargo of 235 Berber captives seized in a series of raids, who were subsequently sold into slavery.

From the mid-seventeenth century, with the challenge to the divine right of kings, ‘slavery’ became a metaphor for, and a weapon of, political tyranny in England. It also became a reality for travellers.

The seventeenth century saw an increased level of activity by the so-called Barbary pirates, operating out of North Africa, who seized European sailors and travellers and held them as ‘slaves’ for ransom. Englishmen and women were captured and enslaved in the Americas too, as the Atlantic economy underwent expansion.

As a result, the meaning of ‘slavery’ as a system of illegal subjection, linked to tyranny, violence and theft, had become deeply embedded in English thought before the abolitionists were established as an organised force from the late eighteenth century.

Family standards of living in England over the very long run

by Sara Horrell (University of Cambridge), Jane Humphries (University of Oxford), and Jacob Weisdorf (University of Southern Denmark and Centre for Economic Policy Research)

This research will be presented during the EHS Annual Conference in Belfast, April 5th – 7th 2019. Conference registration can be found on the EHS website.

 

Over London – by Rail, from ‘London: A Pilgrimage’ (1872), by Gustave Doré. Available at Wikimedia Commons.

The secular evolution of human wellbeing, measured by unskilled workers’ real wages, has long been the subject of scholarly debate. Attention has focused on whether modern economic growth is a relatively recent phenomenon in human history, prompted by the Industrial Revolution, or whether workers in England experienced economic progress well before the Industrial Revolution, even if on a more modest scale. The answer can help inform policy-makers in today’s developing countries about alternative routes to economic growth.

Thanks to recent archival work, we now have information on payments made to working-class men, women and children across 600 years of English history – from before the Black Death through to the classic years of the Industrial Revolution. In a study to be presented at the Economic History Society’s 2019 annual conference, we bring all of these payments together to provide a first-ever account of the earning possibilities of working-class families in historical England.

By estimating how much a typical lower-class family consumed in terms of basic goods, such as calories, clothes, heating and housing, we are able to calculate how much work was needed from the husband, as well as from his wife and children, in order to achieve this. And because historical families were rather large (four to five children were not uncommon), we pay particular attention to the ‘family squeeze’ – that is, the stages of the family lifecycle when the ratio of dependants to earners peaked.
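The mechanics of this budget accounting can be made concrete with a minimal sketch in Python. All the wages, basket costs and adult-equivalence weights below are invented placeholders for illustration, not figures from the study; the point is only how a consumption basket is set against the workdays available to each family member.

```python
# A minimal sketch of the family-budget accounting described above.
# Every number here is a hypothetical placeholder, not a figure from
# the study: wages are in pence per day, the basket in pence per year.

COST_PER_ADULT_EQUIVALENT = 600                 # annual basket cost, hypothetical
ADULT_EQUIV = {"husband": 1.0, "wife": 0.9, "child": 0.5}
DAY_WAGE = {"husband": 4.0, "wife": 2.0, "child": 1.0}
WORK_YEAR = 260                                 # assumed maximum workdays per person

def family_budget(n_children: int, working_children: int) -> dict:
    """Workdays the wife and working children must add, after the
    husband works a full year, to cover the family's basket."""
    basket = COST_PER_ADULT_EQUIVALENT * (
        ADULT_EQUIV["husband"] + ADULT_EQUIV["wife"]
        + ADULT_EQUIV["child"] * n_children
    )
    shortfall = max(0.0, basket - WORK_YEAR * DAY_WAGE["husband"])
    wage_rest = DAY_WAGE["wife"] + working_children * DAY_WAGE["child"]
    days_rest = shortfall / wage_rest
    feasible = days_rest <= WORK_YEAR * (1 + working_children)
    return {"basket": basket, "extra_days": round(days_rest), "feasible": feasible}

# During the 'family squeeze' (five children, only one old enough to work),
# even full-time work by the wife and child cannot close the gap ...
print(family_budget(n_children=5, working_children=1))
# ... whereas a household past the squeeze covers the basket comfortably.
print(family_budget(n_children=1, working_children=1))
```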

Although the post-Black Death period has been regarded as a ‘golden age of labour’, and even on generous assumptions about the availability of work, the husband’s earnings in the fourteenth century were not enough to satisfy a typical family’s basic consumption needs during the family squeeze. Women’s and children’s work was regularly needed to make ends meet and, even then, this was not enough to avoid insolvency during a couple’s old age.

But as we move forward through the medieval and early modern periods, progressively less work from women and children was required to ensure a stable standard of living, and old-age poverty became less severe. In this sense, we conclude that the quality of life of an average lower-class family gradually improved in the centuries leading up to the Industrial Revolution.

We also argue that, in the run-up to the Industrial Revolution, the surplus left in the family budget after necessities had been bought enabled families to allocate a growing fraction of their income to market goods rather than homemade products.

This served as a stimulus to the Industrial Revolution because it motivated producers to innovate and profit from satisfying this increased demand. A widening market thus appears important, in combination (or competition) with the hypothesis that industrialisation sprang from entrepreneurial efforts to save labour.

Slavery and Anglo-American capitalism revisited

by Gavin Wright (Stanford University)

This research will be presented in the Tawney Lecture during the EHS Annual Conference in Belfast, April 5th – 7th 2019. Conference registration can be found on the EHS website.

 

Slaves cutting sugar cane, from ‘Ten Views in the Island of Antigua’ (1823), by William Clark. Available at Wikimedia Commons.

For decades, scholars have debated the role of slavery in the rise of industrial capitalism, from the British Industrial Revolution of the eighteenth century to the acceleration of the American economy in the nineteenth century.

Most recent studies find an important element of truth in the thesis associated with Eric Williams that links the slave trade and slave-based commerce with early British industrial development. Long-distance markets were crucial supports for technological progress and for the infrastructure of financial markets and the shipping sector.

But the eighteenth-century Atlantic economy was dominated by sugar, and sugar was dominated by slavery. The slave trade was central to the process, because it would have been all but impossible to attract a free labour force to the brutal and deadly conditions that prevailed in sugar cultivation. As the mercantilist Sir James Steuart asked in 1767: ‘Could the sugar islands be cultivated to any advantage by hired labour?’

Adherents of an insurgency known as the New History of Capitalism have extended this line of analysis to nineteenth century America, maintaining that: ‘During the eighty years between the American Revolution and the Civil War, slavery was indispensable to the economic development of the United States.’ A crucial linkage in this perspective is between slave-grown cotton and the cotton textile industries of both Britain and the United States, as asserted by Marx: ‘Without slavery you have no cotton; without cotton you have no modern industry.’

My research, to be presented in this year’s Tawney Lecture to the Economic History Society’s annual conference, argues to the contrary, that such analyses overlook the second part of the Williams thesis, which held that industrial capitalism abandoned slavery because it was no longer needed for continued economic expansion. We need not ascribe cynical or self-interested motives to the abolitionists to assert that these forces were able to succeed because the political-economic consensus that supported slavery in the eighteenth century no longer prevailed in the nineteenth.

Between the American Revolution in 1776 and the end of the Napoleonic Wars in 1815, the demands of industrial capitalism changed in fundamental ways: expansion of new export markets in non-slave areas; streamlined channels for migration of free labour; the shift of the primary raw material from sugar to cotton. Unlike sugar, cotton was not confined to unhealthy locations, did not require large fixed capital investment, and would have spread rapidly through the American South, with or without slavery.

These historic shifts were recognised in the United States as in Britain, as indicated by the post-Revolutionary abolitions in the northern states and territories. To be sure, southern slavery was highly profitable to the owners, and the slave economy experienced considerable growth in the antebellum period. But the southern regional economy seemed increasingly out of step with the US mainstream, its centrality for national prosperity diminishing over time.

Indeed, my study asserts that on balance the persistence of slavery actually reduced the growth of cotton supply compared with a free-labour alternative. The truth of this proposition is most clearly demonstrated by the expansion of production after the Civil War and emancipation, and the return of world cotton prices to their pre-war levels.

The comfortable, the rich and the super-rich: what really happened to top British incomes during the first half of the twentieth century?

by Peter Scott and James T Walker (Henley Business School, University of Reading)

This research will be presented during the EHS Annual Conference in Belfast, April 5th – 7th 2019. Conference registration can be found on the EHS website.

Across road junction at Clapham Common, London, England. Available at Wikimedia Commons.

Long-run analysis of British income inequality has been severely hampered by historical income distribution data that are poor relative to those available for other western countries. Until 1937, there were no official peacetime income distribution estimates for Britain, despite considerable contemporary interest in how much of the national income was taken by the rich.

In research to be presented at the Economic History Society’s 2019 annual conference, we address this question, focusing on changes in the incomes of the top 0.001-5% of the income distribution. This group is important for two reasons. First, because top incomes accounted for a substantial slice of total personal incomes, with the top 1% and top 5% taking around 30% and 45% of total income in 1911, according to our estimates.

Second, income redistribution in western countries is typically dominated by changes in the shares of the top 5% and, especially, within the top percentile. Thus examining higher incomes is crucial to explaining the apparent paradox between a relatively stagnant income distribution among the bulk of the British population and the generally assumed trend towards a more equal pre-tax income distribution.
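As a point of reference for how such shares are computed, here is a minimal sketch in Python on synthetic data. The Pareto draw is an arbitrary stand-in for a heavy-tailed income distribution, not the 1911 Inland Revenue survey.

```python
# A minimal sketch of computing top income shares from a list of
# personal incomes (synthetic numbers, not the 1911 survey data).
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.pareto(a=2.0, size=100_000) * 100.0  # synthetic, heavy-tailed

def top_share(incomes: np.ndarray, top_fraction: float) -> float:
    """Share of total income received by the richest `top_fraction`."""
    cutoff = np.quantile(incomes, 1.0 - top_fraction)
    return incomes[incomes >= cutoff].sum() / incomes.sum()

# The groups discussed in the text: the top 0.1%, 1% and 5%.
for frac in (0.001, 0.01, 0.05):
    print(f"top {frac:.1%}: {top_share(incomes, frac):.1%}")
```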

Using a newly rediscovered Inland Revenue survey of personal incomes for taxpayers in 1911, we show that Britain had particularly high income inequality compared with other western countries. Top British income shares fell considerably over the following decades, though British incomes remained more unequal than those in the United States or France in 1949.

Inequality reduction was driven principally by a collapse in unearned incomes, reflecting economic shocks and government policy responses. Over the period from 1911 to 1949, there was a general downward trend in rent, dividend and interest income, with particularly sharp falls during the two world wars and the recessions of 1920-21 and 1929-32.

War-time inflation eroded the real income received from fixed interest securities; new London Stock Exchange issues of overseas securities were restricted by the Treasury (to protect Britain’s foreign exchange position), reducing rentiers’ ability to invest their income overseas; and the agricultural depression lowered real (inflation-adjusted) land rents.

These trends reflected a progressive collapse of the globalised world economy from 1914 to 1950, which both reduced the incomes of the rich and redistributed income to the bottom 95% of the income spectrum.

For example, rent control (introduced in 1915 and continuing throughout the period of our study) depressed the incomes of landlords, but it substantially reduced the real cost of a major household expenditure burden in a country where around 90% of households were private tenants. Rent control also led to extensive house sales by landlords, mainly to sitting tenants, at prices reflecting the low, controlled rents.

Meanwhile, the scarcity of low-risk, high-yielding assets during the interwar years led to substantial deposits in building societies by high-income individuals, funding the house-building boom of the 1930s. Restrictions on overseas new issues also led the City of London to become increasingly involved in British industrial finance, expanding industrial growth and employment.

Conversely, the policy liberalisations of the 1980s that heralded the start of the new globalisation (and the resumption of growing income inequality in western nations) have made it far easier for the rich to offshore their assets, or themselves, either in search of better investment opportunities or of jurisdictions more suited to protecting their wealth. This has produced a strong international trend towards rising income inequality, with Britain returning to its position as one of the most unequal western nations.

Demographic shocks and women’s labour market participation: evidence from the 1918 influenza pandemic in India

by James Fenske, Bishnupriya Gupta and Song Yuan (University of Warwick)

This research will be presented during the EHS Annual Conference in Belfast, April 5th – 7th 2019. Conference registration can be found on the EHS website.

Women washing clothes at the Sabarmati river, Ahmedabad (late 19th or early 20th century). Available at Wikimedia Commons.

Women’s labour force participation is an important driver of economic development and gender equality. Historically, India has had low female participation in economic activities outside the home. Researchers have cited early marriage, social conservatism and limited comparative advantage of women in certain types of agriculture among the potential explanations.

Despite economic growth, female labour force participation has fallen in recent years, generating several important studies. Our research, to be presented at the Economic History Society’s 2019 annual conference, considers the response of female labour force participation to one major demographic shock – the 1918 influenza pandemic.

Past studies have looked at the wartime mortality of men as a demographic shock affecting the total supply of labour, and have found that it affects women’s labour market participation. Empirical evidence on other types of demographic shock, such as epidemics, is more limited.

Our research focuses on India and aims to understand the impact of the large-scale demographic shock that was the 1918 influenza pandemic, in the context of a society in which female labour market participation had typically been low due to cultural norms.

From 1918 to 1919, a deadly influenza epidemic hit India, causing more than 13 million deaths, equivalent to 5% of the population. In contrast to typical epidemics, which are disproportionately deadly to immunologically weak individuals such as infants and the very old, this epidemic primarily killed young adults between the ages of 20 and 40.

The mortality rate varied greatly across districts, ranging from 1.4% to 17.9% of the population in our sample. We focus on three questions: did the 1918 influenza pandemic increase or decrease female employment? If so, why? Was this effect persistent?

To answer these questions, we combine detailed district-level historical census data on occupations by gender from 1901 to 1931 with data from multiple sources on influenza mortality, marital status by age and gender, and wages.

Using an event-study approach, we find that a 1% increase in the mortality rate raised the female labour force participation rate by 1.2% in 1921, with the change concentrated in the service sector. But the effect was transitory, disappearing by 1931. By contrast, the pandemic did not affect the labour force participation of men at either the district or district-by-sector level.
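To make the design concrete, here is a minimal sketch of this kind of event-study regression in Python with statsmodels. The file and variable names (‘district_panel.csv’, ‘flu_mortality’, ‘female_lfp’) are hypothetical placeholders, and the specification is a generic two-way fixed-effects event study rather than the authors’ exact model.

```python
# A minimal event-study sketch: district-level influenza mortality is
# interacted with census-year dummies, alongside district and year
# fixed effects; 1911, the last pre-pandemic census, is the base year.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("district_panel.csv")  # hypothetical census panel, 1901-1931

# Build mortality-by-year interactions explicitly, omitting 1911.
for y in (1901, 1921, 1931):
    df[f"flu_x_{y}"] = df["flu_mortality"] * (df["year"] == y).astype(float)

# District and year fixed effects absorb level differences; the 1921
# interaction captures the post-pandemic jump in female participation,
# and the 1931 interaction tests whether the effect persisted.
model = smf.ols(
    "female_lfp ~ C(district) + C(year) + flu_x_1901 + flu_x_1921 + flu_x_1931",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["district"]})

print(model.params[["flu_x_1901", "flu_x_1921", "flu_x_1931"]])
```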

How do we explain the labour market effects of the pandemic? Any causal channel must have operated through either the supply of or the demand for female labour. One possible channel is that the death of men increased the share of widows in the population. As household income was generally earned by men, widows were pushed into the labour market in order to mitigate the negative economic shock.

The rise in the proportion of widowers, by contrast, had no impact on male labour force participation, as most men worked before the disaster, whether widowed or not. In addition, the pandemic led to a shortage of labour, potentially raising wages and drawing women out of the home and into the labour force.

Our findings provide evidence that negative demographic shocks alter the working behaviour of women, at least in the short run. In contrast with previous research on events such as the slave trade and the two world wars, which considered sex-biased demographic shocks, we show that shocks that are not sex-biased can also play a role in determining female employment. Our work also helps to illuminate the historical dynamics and determinants of female economic and social status in India.

 

Girl-power generates superstars in long-term development: evidence from fifteenth to nineteenth century Europe

by Joerg Baten (University of Tübingen) and Alexandra de Pleijt (University of Oxford)

This research will be presented during the EHS Annual Conference in Belfast, April 5th – 7th 2019. Conference registration can be found on the EHS website.

What are the crucial ingredients for success or failure of economies in long-term perspective? Is female autonomy one of the critical factors?

Map of Europe, by Abraham Ortelius. Available at Wikimedia Commons.

A number of development economists have found that gender inequality is associated with slower development (Sen, 1990; Klasen and Lamanna, 2009; Gruen and Klasen, 2008). These findings led to development policies targeted specifically at women. In 2005, for example, United Nations Secretary-General Kofi Annan stated that gender equality is a prerequisite for eliminating poverty, reducing infant mortality and reaching universal education (United Nations, 2005).

More recently, however, development economists have raised doubts. Duflo (2012), citing a number of studies, suggests that there is no automatic effect of gender equality on poverty reduction. On this view, the causal direction from poverty to gender inequality might be at least as strong as the opposite direction.

For an assessment of the direction of causality in long-term perspective, consistent data have not been available until now. Due to this lack of evidence, the link between female autonomy and human capital formation in early modern Europe has not yet been formally tested in a dynamic model (for Eastern Europe, see Baten et al, 2017; for a cross-section, see de Pleijt et al, 2016).

De Moor and van Zanden (2010) have put forward the hypothesis that female autonomy had a strong influence on European history, basing their argument on a historical description of labour markets and the legacy of medieval institutions. They argue that female marriage ages, among other components of demographic behaviour, might have been a crucial factor in the early development of northwestern European countries (for a critique, especially on endogeneity issues, see Dennison and Ogilvie, 2014 and 2016; for a reply, see Carmichael et al, 2016).

In a similar vein, Diebolt and Perrin (2013) argue, theoretically, that gender inequality retarded modern economic growth in many countries.

In a new study, to be presented at the Economic History Society’s 2019 annual conference, we directly assess the growth effects of female autonomy in a dynamic historical context.

Given the obviously crucial role of endogeneity in this debate, we carefully consider the causal nature of the relationship. More specifically, we exploit relatively exogenous variation in (migration-adjusted) lactose tolerance and pasture suitability as instrumental variables for female autonomy.

The idea is that high lactose tolerance increased the demand for dairy farming, while a high share of land suitable for pasture allowed greater supply. Women traditionally had a strong role in dairy farming, which allowed them to participate substantially in income generation during the late medieval and early modern period (Voigtländer and Voth, 2013).

In contrast, female participation was limited in grain farming, which requires substantial upper-body strength (Alesina et al, 2013). Hence, lactose tolerance and pasture suitability influenced long-term differences in gender-specific agricultural specialisation.

In instrumental variable regressions, we show that the relationship between female autonomy (measured by age at marriage) and human capital (measured by numeracy) is likely to be causal. More specifically, we use two different datasets: the first is a panel of European countries from 1500 to 1850, which covers a long time horizon.

The second covers 268 regions of Europe, stretching from the Ural Mountains in the east to Spain in the southwest and the UK in the northwest. Our results are robust to the inclusion of a large number of control variables and to different specifications of the model.
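As an illustration of the estimation strategy, here is a minimal two-stage least squares sketch using the Python linearmodels package. The file name, the ‘urbanisation’ control and the other variable names are hypothetical placeholders standing in for the study’s datasets, not its actual code.

```python
# A minimal 2SLS sketch with the linearmodels package: numeracy is
# regressed on female age at marriage, which is instrumented with
# migration-adjusted lactose tolerance and pasture suitability.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("regions.csv")  # hypothetical region-level dataset

# 'urbanisation' stands in for the study's control variables; the
# bracketed term declares the endogenous regressor and its instruments.
iv = IV2SLS.from_formula(
    "numeracy ~ 1 + urbanisation"
    " + [age_at_marriage ~ lactose_tolerance + pasture_suitability]",
    data=df,
).fit(cov_type="robust")

print(iv.summary)      # second-stage estimates
print(iv.first_stage)  # first-stage diagnostics (instrument strength)
```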

In sum, our empirical results suggest that economies with more female autonomy became (or remained) superstars in economic development. The female half of the population needed to contribute to overall human capital formation and prosperity; otherwise, the competition with other economies was lost.

Institutions that excluded women from developing human capital – such as early marriage, which often meant dropping out of independent, skill-demanding economic activities – prevented many economies from being successful in human history.


References

Alesina, A, P Giuliano and N Nunn (2013) ‘On the Origins of Gender Roles: Women and the Plough’, Quarterly Journal of Economics 128(2): 469-530.

Baten, J, and AM de Pleijt (2018) ‘Girl Power Generates Superstars in Long-term Development: Female Autonomy and Human Capital Formation in Early Modern Europe’, CEPR Working Paper.

Baten, J, M Szołtysek and M Campestrini (2017) ‘“Girl Power” in Eastern Europe? The Human Capital Development of Central-Eastern and Eastern Europe in the Seventeenth to Nineteenth Century and its Determinants’, European Review of Economic History 21(1): 29-63.

Carmichael, SG, AM de Pleijt, JL van Zanden and T de Moor (2016) ‘The European Marriage Pattern and its Measurement’, Journal of Economic History 76(1): 196-204.

Carmichael, SG, S Dilli and A Rijpma (2014) ‘Gender Inequality since 1820’, in How Was Life? Global Well-being since 1820 edited by JL van Zanden, J Baten, M Mira d’Hercole, A Rijpma, C Smith and M Timmer, OECD.

De Moor, T, and JL van Zanden (2010) ‘Girl Power: The European Marriage Pattern and Labour Markets in the North Sea Region in the Late Medieval and Early Modern Period’, Economic History Review 63(1): 1-33.

De Pleijt, AM, JL van Zanden and SG Carmichael (2016) ‘Gender Relations and Economic Development: Hypotheses about the Reversal of Fortune in EurAsia’, Centre for Global Economic History (CGEH) Working Paper Series No. 79.

Dennison, T, and S Ogilvie (2014) ‘Does the European Marriage Pattern Explain Economic Growth?’, Journal of Economic History 74(3): 651-93.

Dennison, T, and S Ogilvie (2016) ‘Institutions, Demography and Economic Growth’, Journal of Economic History 76(1): 205-17.

Diebolt, C, and F Perrin (2013) ‘From Stagnation to Sustained Growth: The Role of Female Empowerment’, American Economic Review: Papers and Proceedings 103: 545-49.

Duflo, E (2012) ‘Women Empowerment and Economic Development’, Journal of Economic Literature 50(4): 1051-79.

Gruen, C, and S Klasen (2008) ‘Growth, Inequality, and Welfare: Comparisons across Space and Time’, Oxford Economic Papers 60: 212-36.

Hanushek, EA, and L Woessmann (2012) ‘Do Better Schools Lead to More Growth? Cognitive Skills, Economic Outcomes, and Causation’, Journal of Economic Growth 17(4): 267-321.

Kelly, M, J Mokyr and C Ó Gráda (2013) ‘Precocious Albion: A New Interpretation of the British Industrial Revolution’, UCD Centre for Economic Research Working Paper Series No. 13/11.

Klasen, S, and F Lamanna (2009) ‘The Impact of Gender Inequality in Education and Employment on Economic Growth: New Evidence for a Panel of Countries’, Feminist Economics 15(3): 91-132.

Robinson, JA (2009) ‘Botswana as a Role Model for Country Success’, UNU WIDER Research Paper No. 2009/40.

Sen, A (1990) ‘More than 100 million women are missing’, New York Review of Books, 20 December: 61-66.

United Nations (2005) Progress towards the Millennium Development Goals, 1990-2005, Secretary-General’s Millennium Development Goals Report.

Voigtländer, N, and H-J Voth (2013) ‘How the West ‘Invented’ Fertility Restriction’, American Economic Review 103(6): 2227-64.

The age of mass migration in Latin America

by Blanca Sánchez-Alonso (Universidad San Pablo-CEU, Madrid)

This article is published by The Economic History Review, and it is available on the EHS website.

 

General Carneiro station, on the Minas and Rio railway, Minas Gerais province, Brazil, c.1884. Available at Wikimedia Commons.

Latin America was considered a ‘land of opportunity’ between 1870 and 1930, when 13 million Europeans migrated to the region. However, the experiences of Latin American countries are not fully incorporated into current debates concerning the age of mass migration.

The main objective of my article, ‘The age of mass migration in Latin America’,  is to rethink the role of European migration to the region in the light of new research. It addresses several major questions suggested by the economic literature on migration: whether immigrants were positively selected from their sending countries, how immigrants assimilated into host economies, the role of immigration policies, and the long-run effects of European immigration on Latin America.

Immigrants overwhelmingly originated from the economically backward areas of southern Europe. Traditional interpretations have tended to extrapolate the economic backwardness of Italy, Spain and Portugal (measured in terms of per capita GDP relative to the advanced European countries) to emigration flows. Yet, judging by literacy levels, migrants to Latin America from southern Europe were positively selected. Immigrants from Spain, Italy and Portugal were drawn from the northern regions of those countries, which had higher levels of literacy; very few came from the southern regions. When immigrant literacy is compared with that of potential emigrants from regions of high emigration, positive selection appears quite clear.

One proxy often used to signal positive self-selection is upward mobility within and across generations. Recent empirical research shows that it was the possibility of rapid social upgrading that made Argentina attractive to immigrants. First-generation immigrants experienced faster occupational upgrading than natives, and upward occupational mobility occurred for a large proportion of those who declared unskilled occupations on arrival. Immigrants to Argentina experienced very fast growth in occupational earnings (6 per cent faster than natives) between 1869 and 1895. For the city of Buenos Aires in 1895, new evidence shows that Italian and Spanish males received, on average, 80 per cent of average native-born earnings; in some categories, such as crafts and services, immigrants obtained higher wages than natives. These findings provide an economic rationale for why some Europeans chose Argentina over the US, despite the smaller wage differential between origin and destination.

Immigrants appear to have adjusted successfully to Latin American labour markets, as evidenced by their access to property and widespread ownership of businesses. Almost all European communities experienced strong and rapid upward social mobility in the destination countries. Whether this was because of positive selection at home or because of the relatively low skill levels of the host societies is still an open question.

European immigrants to Latin America had higher levels of literacy than the native population: despite non-selective immigration policies, Latin American countries received immigrants with more human capital than natives. Linking immigrants’ human capital to long-run economic and educational outcomes has been the focus of recent research on Brazil and Argentina. The impact of immigration appears to have been important in areas with higher shares of Europeans, since immigrants demanded and created schools (public or private). New research presents evidence of path dependence linking past immigrants’ human capital with present-day outcomes in the region’s economic development.

Immigration policies in Latin America raised few barriers to European immigration. However, the political economy of Argentina’s immigration policy reveals a more complicated story than the classic picture of landowners constantly supporting an open-door policy.

Brazil developed a long-lasting programme of subsidized immigration. The expected income of immigrants to São Paulo was augmented by prospective savings, a guaranteed job on arrival and subsidized transportation costs. Going to Brazil was perceived in southern Europe as a good investment. Transport subsidies and the peculiarities of the colono contract in the coffee areas seem more important than real wage differentials for understanding how Brazil competed for workers in the international labour market. The Lewis model merits further investigation for two main reasons: first, because of the subsidies, labour supply increased faster than the number of workers needed for the coffee expansion; and second, labour markets in São Paulo were segmented. European immigrants supplied only a fraction (though a substantial one) of the total labour force needed for the coffee plantations. The internal supply of workers became increasingly important and must be included in the total labour supply.

Recent literature shows that researchers are either identifying new quantitative evidence or exploiting existing data in new ways. Consequently, new research is providing answers, and posing new questions, that demonstrate how much Latin America has to add to debates on the economic and social impact of historical immigration.

 

To contact Blanca Sánchez-Alonso: blanca@ceu.es

Shoplifting in Eighteenth-Century England

by Shelley Tickell (University of Hertfordshire)

Shoplifting in Eighteenth-Century England is published by Boydell and Brewer. SAVE 25% when you order direct from the publisher – offer ends on 5th March 2019. See below for details.

 


What would you choose to buy from a store if money were no object? This was a decision eighteenth-century shoplifters made in practice on a daily basis. We might assume they were attracted to the novel range of silk and cotton textiles, foodstuffs, ornaments and silver toys that swelled the consumer market in this period. Demand for these home-manufactured and imported goods was instrumental in a trebling of the number of English shops in the first half of the century, escalating the scale of the crime. However, as my book Shoplifting in Eighteenth-Century England shows, this was not the case: consumer desire was by no means shoplifters’ major imperative.

 

Shoplifting occurred nationwide, but it was disproportionately a problem in the capital. A study of a sample of the many thousands of prosecutions at the Old Bailey reveals that linen drapers, shoemakers, hosiers and haberdashers were the retailers most at risk. Over 70% of goods stolen, particularly by women, were fabrics, clothing and trimmings. Though thefts were highly gendered, men also stole these items far more frequently than the food, jewellery and household goods that were largely their preserve. Yet the items stolen were not predominantly the most fashionable: traditional linens, wool stockings and leather shoes were stolen as often as silk handkerchiefs and cotton prints. One prolific shoplifter who confessed to her crime found it profitable over the course of a year to steal four times as much printed linen as the more stylish cottons, lawns, muslins and silk handkerchiefs she also took.

The shoplifters prosecuted were overwhelmingly from plebeian backgrounds. Professional gangs did exist, but for most the crime was a source of occasional subsistence. Shop thieves came from the most economically vulnerable sections of society, seeking to weather an urban economy of low-paid and insecure work; many were older women or children. As the stolen goods needed to be converted into income, they were very commonly sold, so thieves sought the items that were most negotiable: those in greatest demand and least conspicuous in the working neighbourhoods in which they lived. A parcel of handkerchiefs stolen unopened was found to be ‘too fine’ for the market seller to whom it was offered. While there was undoubtedly an eagerness for popular fashion, the call for neat and appropriate daily dress in working communities was just as insistent. We find that the frequency with which shoplifters stole different types of clothing is consistent with a market demand governed in great part by the customary turnover of clothing items in labouring families: handkerchiefs, shoes and stockings, which were replaced regularly, were stolen frequently; jackets and stays more rarely.

There were also practical reasons why shoplifters avoided the high-fashion goods that elite shops sold. Entering the emporiums in which the rich shopped carried a heightened degree of risk. Testimony confirms shopkeepers’ deep reluctance to suspect any customer who appeared genteel, but in elite areas such as London’s West End retailers had an established clientele, and a new face was likely to draw attention. A few shoplifters did try their luck by making an effort to dress the part, and their polite fashioning and acting skill, witnesses recalled, was often masterly. But an accidental slip into plebeian manners was easily made. Three customers dressed in silk drew the suspicion of a Covent Garden shopwoman because, she explained, ‘they called me my dear in a very sociable way’.

In general, shoplifters restricted themselves to plundering smaller local shops that were convenient to reconnoitre and had fewer staff to mount surveillance. A mapping of incidents in London shows this bias towards poorer and less fashionable districts, particularly to the north and east of the capital. Within these working neighbourhoods, shoplifted goods played an instrumental role in the intricate social and economic relations that underpinned community survival. Local associates earned money selling or pawning goods for the thief, their reputation serving to give the transaction added credibility. Neighbours were informally sold stolen items on favourable terms, often including an element of exchange and credit, which acted to secure their complicity and future loyalty. We also come across shoplifted goods that were pawned to fund the shoplifter’s ongoing business, or even recommodified as stock for their small retail concerns. Need, rather than consumption fever, motivated these shoplifters. Shoplifting was a capital crime throughout the century, but this seems to have been of very little moment when the dictate was economic survival. As one shoplifter bluntly testified of her friend in 1747: ‘The prisoner came to me to go with her to the prosecutor’s shop, she wanted money, and she should go to the gallows’.

 

SAVE 25% when you order direct from the publisher using the offer code BB500 online at https://boydellandbrewer.com/shoplifting-in-eighteenth-century-england-pb.html. Offer ends 5th March 2019. Discount applies to print and eBook editions. Alternatively, call Boydell’s distributor, Wiley, on 01243 843 291 and quote the same code. For any queries, please email marketing@boydell.co.uk

 

To contact Shelley Tickell: s.g.tickell@herts.ac.uk

The Price of the Poor’s Words: Social Relations and the Economics of Deposing for One’s “Betters” in Early Modern England

by Hillary Taylor (Jesus College, Cambridge)

This article is published by The Economic History Review, and it is available on the EHS website.

Poverty and Wealth, by William Powell Frith. Available at Wikimedia Commons.

Late sixteenth- and early seventeenth-century England was one of the most litigious societies on record. If much of this litigation was occasioned by debt disputes, a sizeable proportion involved gentlemen suing each other in an effort to secure claims to landed property. In this genre of suits, gentlemen not infrequently enlisted their social inferiors and subordinates to testify on their behalf.[1] These labouring witnesses were usually qualified to comment on the matter at hand as a result of their employment histories. When they deposed, they might recount their knowledge of the boundaries of some land, of a deed or the like. In the course of doing so, they might also comment on all sorts of quotidian affairs. Because testifying enabled illiterate and otherwise anonymous people to speak on record about all sorts of issues, historians have rightly regarded depositions as a singularly valuable source: for all their limitations, they offer us access to worlds that would otherwise be lost.

But we don’t know much about what labouring people thought about the prospect of testifying for (and against) their superiors, or how they came to testify in the first place. Did they think that it presented an opportunity to assert themselves? Did it – as some contemporary legal commentators claimed – provide them with an opportunity to make a bit of money on the side by ‘selling’ dubious evidence to their litigious superiors?[2] Or were they reluctant to depose in such circumstances and, if so, why? Where subordinated individuals deposed for their ‘betters’, what was the relationship between the ‘pull’ of economic reward and the ‘push’ of extra-economic coercion?

I wrote an article that considers these questions. It doesn’t have any tables or graphs; the issues with which it’s concerned don’t readily lend themselves to quantification. Rather, the piece tries to think about how members of the labouring population conceived of the possibilities that were afforded to them, and the constraints that were imposed upon them, by dint of their socio-economic position.

In order to reconstruct these areas of popular thought, I read loads of late sixteenth- and early seventeenth-century suits from the court of Star Chamber. In these cases, labouring witnesses who had deposed for one superior against another were subsequently sued for perjury (this was typically done in an effort to have a verdict that they had helped to secure overturned). Allegations against these witnesses got traction because it was widely assumed that people who worked for their livings were poor and, as a result, would lie under oath for anyone who would pay them for doing so. Where these suits advanced to the deposition-taking phase, labouring witnesses who were accused of swearing falsely under oath and witnesses of comparable social position provided accounts of their relationship with the litigious superiors in question, or commentaries on the perceived risks and benefits of giving evidence. They discussed the economic dispensations (or the promise thereof) which they had been given, or the coercion which had been used to extract their testimony.

Taken in aggregate, this evidence suggests that members of the labouring population had a keen sense of the politics of testimony. In a dynamic and exacting economy such as that of late sixteenth- and early seventeenth-century England, where labouring people’s material prospects were irrevocably linked to their reputation and ‘honesty’, deposing could be risky. Members of the labouring population were aware of this, and many were hesitant to depose at all. Their reluctance may well have been born of an awareness that doubt was likely to be cast upon their testimony as a result of their subordinated and dependent social position, which lent credibility to accusations that they had sworn falsely for gain. More immediately, it reflected concerns about the material repercussions that they feared would follow from commenting on the affairs of their ‘betters’. Such fears were not merely the stuff of paranoid speculation. In 1601, a carpenter from Buckinghamshire called Christopher Badger had put his mark to a statement defending a gentleman, Arthur Wright, who had frustrated efforts to impose a stinting arrangement on the common to, as many locals claimed, the ‘damadge of the poorer sorte and to the comoditie of the riche’. Badger recalled that one of Wright’s opponents – also a gentleman – later approached him and said ‘You have had my worke and the woorke of divers’ other pro-stinting individuals. To discourage Badger from further involvement, he added a thinly veiled threat: ‘This might be an occasion that you maie have lesse worke then heretofore you have had.’[3] For members of the labouring population, material circumstance often militated against opening their mouths.

But there was an irony to the politics of testimony, which was not lost on common people. If material conditions made some prospective witnesses reluctant to depose, they all but compelled others to do so (even when they expressed reservations). In some instances, labouring people’s poverty made compelling the rewards that they were promised (and less often given) in return for their testimony: a bit of coal, a cow, promises of work not dictated by the vagaries of seasonal employment, or nebulous offers of a life freed from want. In others, the dependency, subordination and obligation that characterized their relations with their superiors necessitated that they speak as required, or face the consequences. In the face of such pressures, a given individual’s reservations about testifying were all but irrelevant.

To contact Hillary Taylor: Hat27@cam.ac.uk

Notes

[1] For debt and debt-related litigation, see Craig Muldrew, The Economy of Obligation: The Culture of Credit and Social Relations in Early Modern England (Basingstoke, 1998).

[2] For suspicions surrounding the testimony of poor and/or labouring witnesses, see Alexandra Shepard, Accounting for Oneself: Worth, Status, and the Social Order in Early Modern England (Oxford, 2015).

[3] TNA, STAC 5/W17/32.

Missing girls in 19th-century Spain

by Francisco J. Beltrán Tapia (Norwegian University of Science and Technology)

This article is published by The Economic History Review, and it is available on the EHS website.

Gender discrimination, in the form of sex-selective abortion, female infanticide and the mortal neglect of young girls, constitutes a pervasive feature of many contemporary developing countries, especially in South and East Asia and Africa. Son preference stems from economic and cultural factors that have long influenced the perceived relative value of women in these regions, and it has resulted in millions of ‘missing girls’. But were there ‘missing girls’ in historical Europe? The conventional narrative argues that there is little evidence for this kind of gender discrimination: according to this view, the European household formation system, together with prevailing ethical and religious values, limited female infanticide and the mortal neglect of young girls.

However, several studies suggest that parents treated their sons and daughters differently in 19th-century Britain and continental Europe, stressing that an unequal allocation of food, care and/or workload negatively affected girls’ nutritional status and morbidity, which translated into worse heights and mortality rates. In order to provide more systematic historical evidence of this type of behaviour, our research (with Domingo Gallego-Martínez) relies on sex ratios at birth and at older ages. In the absence of gender discrimination, the number of boys per hundred girls in different age groups is remarkably regular, so comparing the observed figure with the expected (gender-neutral) sex ratio permits assessing the cumulative impact of gender bias in perinatal, infant and child mortality and, consequently, the importance of potential discriminatory practices. However, although non-discriminatory sex ratios at birth revolve around 105-106 boys per hundred girls in most developed countries today, historical sex ratios cannot be compared directly to modern ones.

We have shown that non-discriminatory infant and child sex ratios were much lower in the past. The biological survival advantage of girls was more visible in the high-mortality environments that characterised pre-industrial Europe, with their poor living conditions, lack of hygiene and absence of public health systems. Boys consequently suffered relatively higher mortality rates both in utero and during infancy and childhood, so historical infant and child sex ratios were relatively low even in the presence of gender-discriminatory practices. This is illustrated in Figure 1 below, which plots the relationship between child sex ratios and infant mortality rates using information from seventeen European countries between 1750 and 2001. In particular, in societies where infant mortality rates were around 250 deaths per 1,000 live births, a gender-neutral child sex ratio should have been slightly below parity (around 99.5 boys per hundred girls).

Figure 1. Infant mortality rates and child sex ratios in Europe, 1750-2001
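A back-of-the-envelope sketch in Python shows why the gender-neutral benchmark falls as mortality rises. The sex ratio at birth of 105 comes from the text above; the male-to-female infant mortality ratio is an illustrative assumption (calibrated so that an infant mortality rate of 250 per 1,000 roughly reproduces the 99.5 benchmark), and the calculation ignores mortality beyond infancy.

```python
# Expected (gender-neutral) child sex ratio as a function of the
# overall infant mortality rate. SEX_RATIO_AT_BIRTH comes from the
# text; MALE_EXCESS is an illustrative assumption, not the paper's
# estimate.

SEX_RATIO_AT_BIRTH = 105.0   # boys per 100 girls
MALE_EXCESS = 1.17           # assumed ratio of male to female infant mortality

def expected_child_sex_ratio(imr: float) -> float:
    """Boys per 100 girls among survivors, given an overall infant
    mortality rate (deaths per 1,000 live births) and no
    discrimination against girls."""
    b = SEX_RATIO_AT_BIRTH / 100.0
    # The overall rate is a birth-weighted average of the two sexes:
    # imr = (b * m_male + m_female) / (1 + b), with m_male = MALE_EXCESS * m_female.
    m_female = (imr / 1000.0) * (1.0 + b) / (1.0 + b * MALE_EXCESS)
    m_male = MALE_EXCESS * m_female
    return 100.0 * b * (1.0 - m_male) / (1.0 - m_female)

# Under these assumptions the benchmark falls from about 104 boys per
# 100 girls at an IMR of 50 to just under 100 at an IMR of 250,
# echoing the downward slope in Figure 1.
for imr in (50, 150, 250):
    print(imr, round(expected_child_sex_ratio(imr), 1))
```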

 

Compared with this benchmark, infant and child sex ratios in 19th-century Spain were abnormally high (see the black dots in Figure 1 above; the numbers refer to the year of observation), suggesting that some form of gender discrimination was unduly increasing female mortality at those ages. This pattern, which is not the result of under-enumeration of girls in the censuses, mostly disappeared at the turn of the 20th century. Although average sex ratios remained relatively high in 19th-century Spain, some regions exhibited even more extreme figures: in 1860, 54 districts (out of 471) had infant sex ratios above 115, figures that are extremely unlikely to have occurred by chance. Relying on an extremely rich district-level dataset, our research analyses regional variation in order to examine what lies behind the unbalanced sex ratios. Our results show that the presence of wage labour opportunities for women, and the prevalence of extended families in which different generations of women cohabited, had beneficial effects on girls’ survival. Likewise, infant and child sex ratios were lower in dense, more urbanised areas.

This evidence suggests that discriminatory practices with lethal consequences for girls constituted a veiled feature of pre-industrial Spain. Excess female mortality was not necessarily the result of outright ill-treatment of young girls; it could simply have arisen from an unequal allocation of resources within the household, a disadvantage that probably accumulated as infants grew older. In contexts where infant and child mortality was high, a slight discrimination in the way young girls were fed or treated when ill, as well as in the amount of work with which they were entrusted, was likely to result in more girls dying from the combined effect of undernutrition and illness. Although female infanticide and other extreme forms of mistreatment of young girls may not have been a systematic feature of historical Europe, this line of research points to more passive, but pervasive, forms of gender discrimination that also resulted in a significant fraction of missing girls.

To contact the author:

francisco.beltran.tapia@ntnu.no

Twitter: @FJBeltranTapia