Real urban wage in an agricultural economy without landless farmers: Serbia, 1862-1910

by Branko Milanović (City University of New York and LSE)

This blog is based on a forthcoming article in The Economic History Review.

Railway construction workers, ca.1900.

Calculations of historical welfare ratios (wages expressed in relation to the subsistence needs of a wage-earner’s family) exist for many countries and time periods. The original methodology was developed by Robert Allen (2001). The objective of real wage studies is not only to estimate real wages but also to assess living standards before the advent of national accounts. This methodology has been employed to address key questions in economic history: income divergence between Northern Europe and China (Li and van Zanden, 2012; Allen, Bassino, Ma, Moll-Murata, and van Zanden, 2011); the “Little Divergence” (Pamuk 2007); the divergent development of North and South America (Allen, Murphy and Schneider, 2012); and even the causes of the Industrial Revolution (Allen 2009; Humphries 2011; Stephenson 2018, 2019).

We apply this methodology to Serbia between 1862 and 1910, to consider the extent to which small, peasant-owned farms and backward agricultural technology can be used to approximate real income. Further, we develop debates on North v. South European divergence by focusing on Serbia, a South-East European country, in contrast to previous studies which focus on Mediterranean countries (Pamuk 2007; López Losa and Piquero Zarauz, forthcoming). This approach allows us to formulate a hypothesis regarding the social determination of wages.

Using Serbian wage and price data from 1862 to 1910, we calculate welfare ratios for unskilled (ordinary) and skilled (construction) urban workers. We use two different baskets of goods for wage comparison: a ‘subsistence’ basket that includes a very austere diet, clothing and housing needs, but no alcohol, and a ‘respectability’ basket, composed of a greater quantity and variety of goods, including alcohol. We modify some of the usual assumptions found in the literature to better reflect the economic and demographic conditions of Serbia in the second half of the nineteenth century. Based on contemporary sources, we assume that the ‘work year’ comprised 200 days, not 250, and that the average family size was six, not four. Both assumptions reduce the level of the welfare ratio, but do not affect its evolution.
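In outline, the calculation reduces to simple arithmetic: annual earnings divided by the annual cost of the family’s basket. The sketch below is a minimal rendering of that logic under our modified assumptions; the wage and basket-cost figures are placeholders for illustration, not the article’s data.

```python
# Minimal sketch of the welfare-ratio calculation under the article's
# modified assumptions. The wage and basket cost are placeholders, not data.

WORK_DAYS = 200      # contemporary sources suggest 200 days, not the usual 250
HOUSEHOLD_SIZE = 6   # average family of six, not the usual four

def welfare_ratio(daily_wage: float, basket_cost_per_person: float) -> float:
    """Annual earnings divided by the annual cost of the family's basket."""
    annual_earnings = daily_wage * WORK_DAYS
    annual_family_cost = basket_cost_per_person * HOUSEHOLD_SIZE
    return annual_earnings / annual_family_cost

# A ratio of about 1.5 means earnings exceed the family's subsistence
# needs by roughly 50 per cent; a ratio below 1 means they do not cover them.
print(welfare_ratio(daily_wage=1.2, basket_cost_per_person=26.7))  # ~1.5
```

The working-year assumption scales the numerator and the household size scales the denominator, which is why both assumptions lower the level of the ratio without changing its movement over time.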

We find that the urban wage of unskilled workers was, on average, about 50 per cent higher than the subsistence basket for the family (Figure 1), and remained broadly constant throughout the period. This result confirms the absence of modern economic growth in Serbia (at least as far as the low-income population is concerned), and indicates economic divergence between South-East and Western Europe. Serbia diverged from Western Europe’s standard of living during the second half of the nineteenth century: in 1860 the welfare ratio in London was about three times higher than in urban Serbia, but by 1907 this gap had widened to more than five to one (Figure 1).

Figure 1. Welfare ratio (using subsistence basket), urban Serbia 1862-1910. Note: Under the assumptions of 200 working days per year, household size of 6, and inclusive of the daily food and wine allowance provided by the employer. Source: as per article.

 

In contrast, the welfare ratio of skilled construction workers was between 20 and 30 per cent higher in the 1900s than in the 1860s (Figure 1). This trend reflects modest economic progress as well as an increase in the skill premium, which has also been observed for Ottoman Turkey (Pamuk 2016).

The wages of ordinary workers appear to move more closely with the ‘subsistence basket’, whereas the wages of construction (skilled) workers seem to vary with the cost of the ‘respectability basket’. This leads us to hypothesize that the wages of both groups of workers were implicitly “indexed” to different baskets, reflecting the different value of the work done by each group.

Our results provide further insights into economic conditions in the nineteenth-century Balkans, and raise searching questions about the assumptions used in Allen-inspired work on real wages. The standard assumptions of 250 days of work per annum and a ‘typical’ family size of four may be undesirable for comparative purposes. The ultimate objective of real wage/welfare ratio studies is to provide more accurate assessments of real incomes between countries. Consequently, the assumptions underlying welfare ratios need to be country-specific.

To contact the author: bmilanovic@gc.cuny.edu

https://twitter.com/BrankoMilan

 

REFERENCES

Allen, Robert C. (2001), “The Great Divergence in European Wages and Prices from the Middle Ages to the First World War”, Explorations in Economic History, October.

Allen, Robert C. (2009), The British Industrial Revolution in Global Perspective, New Approaches to Economic and Social History, Cambridge.

Allen, Robert C., Jean-Pascal Bassino, Debin Ma, Christine Moll-Murata and Jan Luiten van Zanden (2011), “Wages, prices, and living standards in China, 1738-1925: in comparison with Europe, Japan, and India”, Economic History Review, vol. 64, pp. 8-36.

Allen, Robert C., Tommy E. Murphy and Eric B. Schneider (2012), “The colonial origins of the divergence in the Americas: A labor market approach”, Journal of Economic History, vol. 72, no. 4, December.

Humphries, Jane (2011), “The Lure of Aggregates and the Pitfalls of the Patriarchal Perspective: A Critique of the High-Wage Economy Interpretation of the British Industrial Revolution”, Discussion Papers in Economic and Social History, University of Oxford, No. 91.

Li, Bozhong and Jan Luiten van Zanden (2012), “Before the Great Divergence: Comparing the Yangzi delta and the Netherlands at the beginning of the nineteenth century”, Journal of Economic History, vol. 72, No. 4, pp. 956-989.

López Losa, Ernesto and Santiago Piquero Zarauz, “Spanish Subsistence Wages and the Little Divergence in Europe, 1500-1800”, European Review of Economic History, forthcoming.

Pamuk, Şevket (2007), “The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600”, European Review of Economic History, vol. 11, pp. 280-317.

Pamuk, Şevket (2016),  “Economic Growth in Southeastern Europe and Eastern Mediterranean, 1820-1914”, Economic Alternatives, No. 3.

Stephenson, Judy Z. (2018), “‘Real’ wages? Contractors, workers, and pay in London building trades, 1650–1800”, Economic History Review, vol. 71 (1), pp. 106-132.

Stephenson, Judy Z. (2019), “Working days in a London construction team in the eighteenth century: evidence from St Paul’s Cathedral”, The Economic History Review, published 18 September 2019. https://onlinelibrary.wiley.com/doi/abs/10.1111/ehr.12883.

 

 

Poverty or Prosperity in Northern India? New Evidence on Real Wages, 1590s-1870s

by Pim de Zwart (Wageningen University) and Jan Lucassen (International Institute of Social History, Amsterdam)

The full article from this blog was published in The Economic History Review and is now available open access on early view at this link.

 

At the end of the sixteenth century, the Indian subcontinent, largely unified under the Mughals, was one of the most developed parts of the global economy, with relatively high incomes and a thriving manufacturing sector. Over the centuries that followed, however, incomes declined and India deindustrialized. The precise timing and causes of this decline remain the subject of academic debate about the Great Divergence between Europe and Asia. Whereas some scholars have depicted the eighteenth century in India as a period of economic growth and comparatively high living standards, others have suggested it was an era of decline and relatively low incomes. The evidence on which these contributions have been based is rather thin, however. In our paper, we add quantitative and qualitative data from numerous British and Dutch archival sources about the development of real wages and the functioning of the northern Indian labour market between the late sixteenth and late nineteenth centuries.

In particular, we introduce a new dataset with over 7,500 observations on wages across various towns in northern India (Figure 1). The data pertain to the income earned in a wide range of occupations, from unskilled urban workers and farm servants to skilled craftsmen and bookkeepers, and for adult men as well as women and children. All these wage observations were coded following the HISCLASS scheme, which allows us to compare trends in wages between groups of workers. The wage database provides information about the incomes of an important body of workers in northern India. There was little slavery and serfdom in India, and wage labour was relatively widespread. There was a functioning free labour market in which European companies enjoyed no clearly privileged position. The data obtained for India can therefore be viewed as comparable to those gathered for many European cities, in which the wages of construction workers were often paid by large institutions.

Figure 1 – Map of India and regional distribution of the wage data. Source: as per article.

We calculated the value of the wage relative to a subsistence basket of goods. We made further adjustments to the real wage methodology by incorporating information about climate, regional consumption patterns, average heights, and BMI, to calculate the subsistence cost of living more accurately. Comparing the computed real wage ratios for northern India with those prevailing in other parts of Eurasia leads to a number of important insights (Figure 2). Our data suggest that the Great Divergence between Europe and India happened relatively early, from the late seventeenth century. The slight downward trend that began in the late seventeenth century persisted, and wage labourers saw their purchasing power diminish until the devastating Bengal famine of 1769-1770. Given this evidence, it is difficult to view the eighteenth century as a period of generally rising prosperity across northern India. While British colonialism may have reduced growth in the nineteenth century — pretensions about the superiority of European administration and the virtues of the free market may have had long-lasting negative consequences — it is nonetheless clear that most of the decline in living standards preceded colonialism. Real wages in India stagnated in the nineteenth century, while Europe experienced significant growth; consequently, India lagged further behind.

Figure 2 – Real wages in India in comparison with Europe and Asia. Source: as per article.

With real wages below subsistence level, it is likely that Indian wage labourers worked more than the 250 days per year often assumed in the literature. This is also confirmed by our sources, which suggest 30 days of labour per month. To accommodate this observation, we added a real wage series based on the assumption of 360 days of labour per year (Figure 2). Yet even with 360 working days per year, male wages were at various moments in the eighteenth and nineteenth centuries insufficient to sustain a family at subsistence level. This evidence indicates the limits of what can be said about living standards based solely on the male wage. In many societies and in most time periods, women and children made significant contributions to household income. This also seems to have been the case for northern India. Over much of the eighteenth and nineteenth centuries, the gap between male and female wages was smaller in India than in England. The important contribution of women and children to household incomes may have allowed Indian families to survive despite low levels of male wages.
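To make the effect of the working-year assumption concrete, here is a minimal sketch; the daily wage and basket cost are hypothetical placeholders, not our data.

```python
# Sketch of how the working-year assumption shifts the welfare ratio.
# The daily wage and family basket cost below are hypothetical.
daily_wage = 0.05          # wage per day worked (hypothetical units)
family_basket_cost = 20.0  # annual family subsistence cost (hypothetical)

for days in (250, 360):    # the literature's usual assumption vs. 30 days/month
    ratio = daily_wage * days / family_basket_cost
    print(f"{days} days/year -> welfare ratio {ratio:.2f}")

# Even at 360 days the ratio can remain below 1, i.e. below what is
# needed to sustain a family at subsistence on the male wage alone.
```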

 

To contact the authors: 

Pim de Zwart (pim.dezwart@wur.nl)

Jan Lucassen (lucasjan@xs4all.nl)

Demand slumps and wages: History says prepare to bargain

by Judy Z. Stephenson (Bartlett Faculty of the Built Environment, UCL)

This blog is part of the EHS series on The Long View on Epidemics, Disease and Public Health: Research from Economic History.

Big shifts and stops in supply, demand, and output hark back to pre-industrial days, and they carry lessons for today’s employment contracts and wage bargains.

Canteen at the National Projectile Factory
Munitions factory in Lancaster, ca. 1917.
Image courtesy of Lancaster City Museum. Available at <http://www.documentingdissent.org.uk/munitions-factories-in-lancaster-and-morecambe/>

Covid-19 has brought the world to a slump of unprecedented proportions. Beyond immediate crises in healthcare and treatment, the biggest impact is on employment. Employers, shareholders and policymakers are struggling to come to terms with the implications of ‘closed-for-business’ for an unspecified length of time, and laying off workers seems the most common response, even though unprecedented government support packages for firms and workers have heralded the ‘return of the state’, and the fiscal implications have provoked wartime comparisons.

There is one very clear difference between war and the current pandemic: that of mobilisation. Historians tend to look on times of war as times of full employment and high demand (1). A concomitant slump in demand and a huge surplus of demobilised labour were associated with the depression in real wages and labour markets in the peacetime years after 1815. That slump accompanied increasing investment in large-scale factory production, particularly in the textile industry. The decades afterwards are some of the best documented in labour history (2), and they are characterised by frequent stoppages, down-scaling and restarts in production. They should be of interest now because they are the story of how modern capitalist producers learned to set and bargain for wages to ensure they had the skills they needed, when they needed them, to produce efficiently. Much of what employers and workers learned over the nineteenth century is directly pertinent to problems that currently face employers, workers, and the state.

Before the early nineteenth century in England – or elsewhere for that matter – most people were simply not paid a regular weekly wage, or in fact paid for their time at all (3). Very few people had a ‘job’. Shipwrights, building workers and some common labourers (in all, maybe 15% of workers in early modern economies) were paid ‘by the day’, but the hours or output that a ‘day’ involved were varied and indeterminate. The vast majority of pre-industrial workers were not paid for their time, but for what they produced.

These workers earned piece rates, much as today’s delivery riders earn ‘per drop’, Uber drivers ‘per ride’, and garment workers per unit made. When the supply of materials failed, or demand for output stalled, workers were not paid, irrespective of whether they could work or not. Blockades, severe weather, famine, plague, financial crises, and unreliable supplies all stopped work, and so payment of wages ended. Stoppages were natural and expected. Historical records indicate that in many years commercial activity and work slowed to a trickle in January and February. Households subsisted on savings or credit before they could start earning again, or parishes and the poor law provided bare subsistence in the interim. Notable characteristics of pre-industrial wages – by piecework and otherwise – were wage posting and nominal rate rigidity, or lack of wage bargaining. Rates for some work didn’t change for almost a century, and the risk of no work seems to have been accounted for on both sides (4).

Piecework, or payment for output, is a system of wage formation of considerable longevity, and its purpose was always to protect employers from labour costs in uncertain conditions. It seems attractive because it transfers the risks associated with output volatility from the employer to the worker. Such practices are the basis of today’s ‘gig’ economy. Some workers – those in their prime who are skilled and strong – tend to do well out of the system, and enjoy being able to increase their earnings with effort. This is the flexibility of the gig economy that some relish today. But it is less effective for those who need to be trained or managed, older workers, or anyone who has to limit their hours.

However, piecework or gig wage systems have risks for the employer. In the long run, we know piece bargains break down, or become unworkably complex, as both workers and employers behave opportunistically (5). Where firms need skilled workers to produce quickly, or want to invest in firm- or industry-specific human capital to increase competitiveness through technology, they can suddenly find themselves outpriced by competitors, or with a labour force with a strong leisure preference or, indeed, a labour shortage. Such conditions characterised early industrialisation. In the British textile industry this opportunism created and exacerbated stoppages throughout the nineteenth century. After each stoppage both employers and workers sought to change rates. But new bargains were difficult to agree. Employers tried to cut costs. Labour struck. Bargaining for wages impeded efficient production.

Eventually, piecework bargains formed implicit, more stable contracts, and ‘invisible handshakes’ paved the way to the relative stability of hourly wages and the hierarchy of skills in factories (though the mechanism by which this happened is contested) (6). The form of the wage slowly changed to payment by the hour or unit of time. Employers worked out that ‘fair’ regular wages (or efficiency wages) and a regular workforce served them better in the long run than trying to save labour costs through stoppages. Unionisation bettered working conditions and the security of contracts. The Trade Boards Act of 1909 regulated the wages of industries still operating minimal piece rates, and ushered in the era of collective wage bargaining as the norm, which ended only with the labour market policies of Thatcherism and subsequent governments.

So far in the twenty-first century, although there has been a huge shift to self-employment, gig wage formation and non-traditional jobs (7), we have not experienced the bitter bargaining that characterised the shift from piecework to time work two hundred years ago, or the unrest of the 1970s and early 1980s. Some of this is probably down to the decline in output volatility that accompanied increased globalisation since the ‘Great Moderation’, and to the extraordinarily low levels of unemployment in most economies in the last decade (8). Covid-19 brings output volatility back, in a big, unpredictable way, and the history of wage bargaining indicates that when factors of production are subject to shocks, bargaining is costly. Employers who want to rehire workers who have been unpaid for months may find established wage bargains no longer hold. Shelf stackers who have risked their lives on zero-hours contracts may now think that their pay rate per hour should reflect this risk. Well-paid professionals incentivised by performance-related pay are discovering the precarity of ‘eat what you kill’, and may find that their basic pay doesn’t reflect the preparatory work they need to do in conditions that will not let them perform. Employers facing the same volatility might try to change rates, and many have already moved to cut wages.

Today’s state guarantee of many workers’ incomes, unthinkable in the nineteenth-century laissez-faire state, is welcome and necessary. That today’s gig economy workers have made huge strides towards attaining full employment rights would also appear miraculous to most pre-industrial workers. Yet contracts and wage formation matter. With increasing numbers of workers without job security, and essential services suffering demand and supply shocks, many workers and employers are likely to confront significant shifts in employment. History suggests that bargaining over them is not as easy a process as the last thirty years have led us to believe.

 

To contact the author: 

j.stephenson@ucl.ac.uk

@judyzara

 

References:

(1). Allen, R. (2009). Engels’ pause: Technical change, capital accumulation, and inequality in the British industrial revolution. Explorations in Economic History, 46(4), 418-435; Broadberry et al. (2015). British Economic Growth, 1270-1870. CUP.

(2). Huberman, M. (1996). Escape from the Market, CUP, chapter 2.

(3). Hatcher, J., and Stephenson, J.Z. (Eds.), (2019) Seven Centuries of Unreal Wages, Palgrave Macmillan

(4). J. Stephenson and P. Wallis, ‘Imperfect competition’, LSE Working Paper (forthcoming).

(5). Brown, W. (1973) Piecework Bargaining, Heinemann.

(6). See debates between Huberman, Rose, Taylor and Winstanley in Social History, 1987-89.

(7). Katz, L., & Krueger, A. (2016). The Rise and Nature of Alternative Work Arrangements in the United States, 1995-2015. NBER Working Paper Series.

(8). Fang, W., & Miller, S. (2014). Output Growth and its Volatility: The Gold Standard through the Great Moderation. Southern Economic Journal, 80(3), 728-751.

 

How many days a year did people work in England before the Industrial Revolution?

By Judy Stephenson (University College London)

The full paper that inspired this blog post will be published in The Economic History Review and is currently available on early view here.

St Paul’s Cathedral – the construction of the Dome. Available at <https://www.explore-stpauls.net/oct03/textMM/DomeConstructionN.htm>

How many days a year did people work in England before the Industrial Revolution? For those who don’t spend their waking hours desperate for sources to inform wages and GDP per capita over seven centuries, this question provokes an agreeable discussion about artisans, agriculture and tradition. Someone will mention EP Thompson and clocks or Saint Mondays. ‘Really that few?’ It’s quaint.

But, for those of us who do spend our waking hours desperate for sources to inform wages and GDP per capita over seven centuries, the question has evolved in the last few years into a debate about productivity and about when modern economic growth began in an ‘industrious revolution’. A serious body of research in economic history has recently estimated that the number of days people worked increased from the late seventeenth century. Current estimates are that people worked about 270 days a year by 1700, rising to about 300 after 1750.

The uninitiated might think that estimates of something as important as the working year would be based on substantive evidence, but in fact most estimates of the working year that economic historians have been using for the last two decades don’t come from working records at all. They come from court depositions where witnesses told the courts when they went to and left work, or from working out how many days a worker had to toil to afford a basket of consumption goods. This approach, pioneered by Jacob Weisdorf and Bob Allen in 2011, essentially holds welfare as a constant throughout history, and it’s the key assumption made in a new paper on wages forthcoming from Jane Humphries and Jacob Weisdorf. Unsurprisingly for historians familiar with material showing the miserable conditions under which the poor toiled in eighteenth-century Britain, this calculation frequently leads to a high number of days worked. It also implies that Londoners, due to higher day wages, may have had slightly more leisure than rural workers. Both implications might appear counterintuitive.

Knowledgeable historians, such as John Hatcher, have pointed out that the idea that anyone had 270 days of paid work a year before the industrial revolution is fanciful. But unless there was an industrious revolution, and people did begin to work more days per year in market work – as Jan de Vries posited – the established evidence firmly implies that workers became worse off throughout the eighteenth century, because wage rates as measured by builders’ wages didn’t increase in line with inflation, and in fact builders earned even less than we thought.

My article, “Working days in a London construction team in the eighteenth century: evidence from St Paul’s Cathedral”, forthcoming in the Review, takes a different approach: it uses the actual working records of a team of masons working under William Kempster, who constructed the south-west tower of St Paul’s Cathedral. For five years in the 1700s, these archives are exceptionally detailed. They show that building was seasonal (it’s not like we didn’t know – it’s just we had sort of forgotten), and building was stage-dependent, so not all men could have worked all year. In fact, they didn’t. Surprisingly, for a stable firm at an established and large site, very few men worked for Kempster for more than about 27 weeks. Work was temporary and insecure, and working and employment relationships were casual.

If one were to take a crude average of the days each man worked in any year, it would be less than 150 days. To do so would be misleading, and that is not what the paper claims, because men obviously worked for other employers too. But what the working patterns reveal is that unless men seamlessly moved from one employer to another, with no search costs or time in between, it would have been impossible for them to have worked 250 days a year. It is more plausible that they were able to work between 200 and 220 days.

Moreover, the data show that men did not work the full six days per week on offer. The average number of days worked per week was only 5.2. This wasn’t because men did not work Saint Mondays (which are almost indiscernible in the records) but because they took idiosyncratic breaks. Only the foremen seem to have been able to sustain six days a week.
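Putting these figures together shows how tight the arithmetic is. The back-of-the-envelope below is illustrative only, combining the numbers quoted above rather than reproducing any calculation from the paper:

```python
# Back-of-the-envelope using the figures quoted above: roughly 27 weeks
# with Kempster, at an average of 5.2 days worked per week.
weeks_with_kempster = 27
days_per_week = 5.2

days_at_site = weeks_with_kempster * days_per_week     # ~140 days
shortfall_vs_250 = 250 - days_at_site                  # ~110 days still needed
extra_weeks_needed = shortfall_vs_250 / days_per_week  # ~21 weeks elsewhere

print(f"days at St Paul's: {days_at_site:.0f}")
print(f"extra weeks of equally steady work needed to reach 250 days: "
      f"{extra_weeks_needed:.0f}")
```

On these numbers, a man leaving Kempster would have needed roughly 21 further weeks of equally steady employment, with no search time in between, to reach 250 days, which is why 200 to 220 days looks like the plausible ceiling.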

However, men who had a longer relationship with Kempster worked more days per year than the rest. This implies that stronger working relationships, or a consolidation of relationships between employers and workers, might have led to an increase in the average number of days worked. However, architectural and construction historians generally think that consolidation in the industry did not occur until the 1820s. If there was an industrious revolution in the eighteenth century, it might not have happened for builders. If builders’ wages are representative – and that old assumption seems increasingly stretched these days – then the story for wages in the eighteenth century is even more pessimistic than before.

The evidence from working records presented in this article is still relatively fragmentary, but it clearly shows that holding welfare stable by calculating the number of days worked from consumption goods – as the Weisdorf/Humphries/Allen approach does – does not give us the whole story.

But then again, is it really plausible to hold welfare stable? The debate, and the scholarship, will no doubt continue.

 

To contact the author:

J.Stephenson@ucl.ac.uk

@judyzara

Factor Endowments on the “Frontier”: Algerian Settler Agriculture at the Beginning of the 1900s

by Laura Maravall Buckwalter (University of Tübingen)

This research is due to be published in the Economic History Review and is currently available on Early View.

 

It is often claimed that access to land and labour during the colonial years determined land redistribution policies and labour regimes that had persistent, long-run effects. For this reason, the amounts of land and labour available in a colonized country at a fixed point in time are increasingly included in regression frameworks as proxies for the types of colonial modes of production and institutions. However, despite the relevance of these variables within the scholarly literature on settlement economies, little is known about the way in which they changed during the process of settlement. This is because most studies focus on long-term effects and tend to exclude relevant within-country heterogeneities that should be included in the assessment of the impact of colonization on economic development.

In my article, I show how colonial land policy and settler modes of production responded differently within a colony. I examine rural settlement in French Algeria at the start of the 1900s and focus on cereal cultivation, the crop that allowed the arable frontier to expand. I rely upon the literature that reintroduces the notion of ‘land frontier expansion’ into the understanding of settler economies. By including the frontier in my analysis, it is possible to assess how colonial land policy and settler farming adapted to very different local conditions. For example, as settlers moved into the interior regions they encountered growing land aridity. I argue that the expansion of rural settlement into the frontier was strongly dependent upon the adoption of modern ploughs, intensive labour (modern ploughs were non-labour-saving) and larger cultivated fields (because they removed fallow areas), which, in turn, had a direct impact on colonial land policy and settler farming.

Figure 1. Threshing wheat in French Algeria (Zibans)

Buckwalter 1
Source: Retrieved from https://www.flickr.com/photos/internetarchivebookimages/14764127875/in/photostream/, last accessed 31st of May, 2019.

 

My research takes advantage of annual agricultural statistics reported by the French administration at the municipal level in Constantine for the years 1904/05 and 1913/14. The data are analysed in a cross-section and panel regression framework and, although the dataset provides a snapshot at only two points in time, the ability to identify the timing of settlement after the 1840s for each municipality provides a broader temporal framework.

Figure 2. Constantine at the beginning of the 1900s

Buckwalter 2
Source: The original outline of the map derives mainly from the Carte de la Colonisation Officielle, Algérie (1902), available online at the digital library of the Bibliothèque Nationale de France, retrieved from http://catalogue.bnf.fr/ark:/12148/cb40710721s (accessed on 28 Apr. 2019) and ANOM-iREL, http://anom.archivesnationales.culture.gouv.fr/ (accessed on 28 Apr. 2019).

 

The results illustrate how the limited amount of arable land on the Algerian frontier forced colonial policymakers to relax restrictions on the amount of land owned by settlers. This change in policy occurred because expanding the frontier into less fertile regions and consolidating settlement required agricultural intensification – changes in the frequency of crop rotation and more intensive ploughing. These techniques required larger fields and were therefore incompatible with the French colonial ideal of establishing a small-scale, family-farm type of settler economy.

My results also indicate that settler farmers were able to adopt more intensive techniques mainly by relying on the abundant indigenous labour force. The man-to-cultivable land ratio, which increased after the 1870s due to continuous indigenous population growth and colonial land expropriation measures, eased settler cultivation, particularly on the frontier. This confirms that the availability of labour relative to land is an important variable that should be taken into consideration to assess the impact of settlement on economic development. My findings are in accord with Lloyd and Metzer (2013, p. 20), who argue that, in Africa, where the indigenous peasantry was significant, the labour surplus allowed low wages and ‘verged on servility’, leading to a ‘segmented labour and agricultural production system’. Moreover, it is precisely the presence of a large indigenous population relative to that of the settlers, and the reliance of settlers upon the indigenous labour and the state (to access land and labour), that has allowed Lloyd and Metzer to describe Algeria (together with Southern Rhodesia, Kenya and South Africa) as having a “somewhat different type of settler colonialism that emerged in Africa over the 19th and early 20th Centuries” (2013, p.2).

In conclusion, it is reasonable to assume that, as rural settlement gains ground within a colony, local endowments and cultivation requirements change. The case of rural settlement in Constantine reveals how settler farmers and colonial restrictions on ownership size adapted to the varying amounts of land and labour.

 

To contact: 

laura.maravall@uni-tuebingen.de

Twitter: @lmaravall

 

References

Ageron, C. R. (1991). Modern Algeria: a history from 1830 to the present (9th ed). Africa World Press.

Frankema, E. (2010). The colonial roots of land inequality: geography, factor endowments, or institutions? The Economic History Review, 63(2):418–451.

Frankema, E., Green, E., and Hillbom, E. (2016). Endogenous processes of colonial settlement. the success and failure of European settler farming in Sub-Saharan Africa. Revista de Historia Económica-Journal of Iberian and Latin American Economic History, 34(2), 237-265.

Easterly, W., & Levine, R. (2003). Tropics, germs, and crops: how endowments influence economic development. Journal of monetary economics, 50(1), 3-39.

Engerman, S. L., and Sokoloff, K. L. (2012). Economic development in the Americas since 1500: endowments and institutions. Cambridge University Press.

Lloyd, C. and Metzer, J. (2013). Settler colonization and societies in world history: patterns and concepts. In Settler Economies in World History, Global Economic History Series 9:1.

Lützelschwab, C. (2007). Populations and Economies of European Settlement Colonies in Africa (South Africa, Algeria, Kenya, and Southern Rhodesia). In Annales de démographie historique (No. 1, pp. 33-58). Belin.

Lützelschwab, C. (2013). Settler colonialism in Africa. In Lloyd, C., Metzer, J., and Sutch, R. (eds.), Settler Economies in World History. Brill.

Willebald, H., and Juambeltz, J. (2018). Land Frontier Expansion in Settler Economies, 1830–1950: Was It a Ricardian Process? In Agricultural Development in the World Periphery (pp. 439-466). Palgrave Macmillan, Cham.

Child workers and industrial health in Britain 1780-1850

Peter Kirby, Child workers and industrial health in Britain 1780-1850 (Woodbridge: Boydell Press, 2013. Pp. xi + 212. 8 tabs. 6 figs. ISBN 9781843838845 Pbk. £19.99)

Review by Alysa Levene (Oxford Brookes University)


‘Child workers and industrial health in Britain 1780-1850’ is published by Boydell and Brewer. SAVE 25% when you order direct from the publisher – offer ends on the 18th July 2019. See below for details.

 


The physical horrors endured by child workers in the early industrial workplace are well known to historians – or at least, we think they are. The regulations of the various Factory Acts and the testaments of sub-commissioners, doctors and factory workers to the parliamentary enquiries of the 1830s and 1840s are common reference points for those of us working or teaching in this area. However, over the last few years, several in-depth studies of child labour in industrial England have appeared which have started to challenge and nuance what we think we know. First, Katrina Honeyman’s Child Workers in England, 1780-1820 (2007) suggested that apprentices to cotton mills were often better looked after than we have thought. Then, Jane Humphries’ Childhood and Child Labour in the British Industrial Revolution (2010) set industrial work in a wider context of schooling and family life, as evidenced in over 600 working-class autobiographies. And now, Peter Kirby has added the first monograph study of occupational health among child workers in the first half of the nineteenth century, and has again knocked down many of the key points we have been telling students for years.

The book is organised thematically, starting with an Introduction which sets out in detail the historical background to child labour in industry, and the sources we have for studying it. Here, Kirby points out the problems with the medical evidence collected for the parliamentary enquiries in the 1830s and 1840s; namely, that many of the doctors concerned did not have first-hand experience of occupational health and so tended to attribute any health issues to working conditions rather than environmental ones. This leads him to place more emphasis on the writings of non-medical men, shifting the perspective away from doctors and children and towards health and conditions of work in the round. The main chapters consider child health in industrial cities generally; the key issues affecting the health of child industrial workers (deformities; ‘materials’ – see more below; and injuries); heights and ages, and how these were measured; and finally, corporal punishment and murder.

One of Kirby’s key conclusions is that it was environmental rather than working conditions which were responsible for most of the health problems experienced by child workers. He states that many began work in factories and mines already compromised by poor nutrition, environmental pollution and the impact of parental loss (which led to work at a young age), and that in fact, stunted and disabled children may have been preferentially admitted to the factory workforce because they were suited to the lighter tasks found there. To a certain degree this is convincing, and it is certainly instructive and worthwhile to draw attention to the relationship between the conditions of home life and working life so clearly. The discussion of environmental pollution and its impact on health is particularly detailed. However, it seems hard to believe either that so many children would have suffered from conditions like byssinosis, scoliosis or poliomyelitis as Kirby suggests, or that pre-existing disability could have been so widespread among child workers given the need to stand upright and bear a load in so many areas of work.

The discussion of ‘materials’ is another area where Kirby provides an impressive level of detail, and which advances our understanding of the realities of working life in mills. In particular, he draws attention to the pollutants which can be carried in raw cotton, and ties this to changes in supply during this period, for example, away from imports from the West Indies, and towards those from North America, which were less likely to be contaminated (this coincided with a fall in ‘mill fevers’). This is something which has not been much considered in previous work (although it was noted by contemporaries) and which has a bearing on both adult and child workers.

Kirby attempts to bring a similarly new perspective to the discussion of workplace violence, suggesting that corporal punishment was common only in specific circumstances (such as where safety or productivity demanded it, or where child workers were particularly vulnerable, like parish apprentices), and that it was in any case a more accepted part of daily life than it is now. These two points do not necessarily sit easily together; certainly the evidence of violence in the commissioners’ reports suggests that it was not condoned. He is more confident on the system of medical inspection, and provides a detailed discussion of its scale and potential pitfalls, particularly the difficulty of assessing children’s ages (vital for ensuring that factories and mines adhered to the changing laws on age at starting work). Ultimately this led to the development of standard charts for growth and dentition.

Overall, this is an excellent and comprehensive study of the occupational health of child workers in the most high-profile areas of the industrial sector. It makes a significant contribution to debates on child labour, and the impact of industry on health and daily life. Kirby paints a notably more optimistic picture of the industrial workplace than we are used to, certainly in terms of the impact on the health and stature of its youngest workers. He ends by calling for more work on other areas of the industrial workforce, and this would certainly be welcome. The book is an excellent introduction to the topic for students and researchers alike; it remains to be seen whether it sparks a new wave of debate over the ‘optimistic’ versus the ‘pessimistic’ schools of thought on the industrial revolution.

 

SAVE 25% when you order direct from the publisher using the offer code B125 online here. Offer ends 18th July 2019. Alternatively call Boydell’s distributor, Wiley, on 01243 843 291, and quote the same code. For any queries, please email marketing@boydell.co.uk

 

Women’s work and structural change: occupational structure in eighteenth-century Spain

by Carmen Sarasua (Universitat Autonoma de Barcelona)

The full paper was published in The Economic History Review, and is available here.

 

This research uses householders’ declarations in the Cadaster of Ensenada to calculate labour participation rates for women and men in inland Spain. Conducted between 1750 and 1755, the Cadaster was a general property survey carried out with the aim of modernizing and unifying the fiscal system of the Kingdom of Castile (about three-quarters of modern Spain). Most householders did not declare the occupations of wives or children because their wages were not taxed. In some towns, however, householders did declare the occupations of their wives and children:

I belong to the General estate, my trade fuller, married, my family is formed by myself, 46 years old, Ynés López Zamorano, 40 years old. I have four daughters, Agustina, 20, her occupation weaver, Isabel, 13, her occupation spinning, María, 11, her occupation going to sewing school, María Teresa, 2 months.
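Declarations like this translate readily into individual-level records. A minimal sketch of the participation-rate calculation follows; the field names, records and working-age band are illustrative assumptions, not the actual database schema:

```python
# Minimal sketch: participation rate = share of working-age individuals
# with a declared occupation. Records, field names and the age band are
# hypothetical, for illustration only.
records = [
    {"sex": "F", "age": 20, "occupation": "weaver"},
    {"sex": "F", "age": 13, "occupation": "spinning"},
    {"sex": "F", "age": 40, "occupation": None},   # no occupation declared
    {"sex": "M", "age": 46, "occupation": "fuller"},
]

def participation_rate(records, sex, working_age=(12, 65)):
    lo, hi = working_age
    pool = [r for r in records if r["sex"] == sex and lo <= r["age"] <= hi]
    working = [r for r in pool if r["occupation"]]
    return len(working) / len(pool)

print(f"women: {participation_rate(records, 'F'):.1%}")  # 66.7% in this toy set
```

The substantive difficulty, of course, is not the arithmetic but the recording: where householders omitted the occupations of wives and children, measured rates understate women’s actual market activity.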

Based on a database currently comprising 44,484 individuals (the population of 22 localities in five provinces of southern Castile), this article shows that men’s participation rates ranged from 78.2 to 92.5 percent. Generally, men’s participation rates were lower in large towns and cities because these localities were home to nonworking members of the nobility, beggars, and monks.

The article also establishes that the actual levels of women’s market activity were much higher than is commonly assumed. For the entire region, women’s participation rate was 32.3 percent. Differences in participation rates among localities were much larger for women than for men, ranging from 12.4 percent in Pedro Muñoz to 82.7 percent in Villamanrique. Such large differences are explained by the failure to record women’s work. Ajofrín, in Toledo, was a prosperous production centre for woolen fabrics, with a population of 3,308 in 1753. According to householders’ declarations, 82.4 percent of men and 25.3 percent of women were gainfully employed. However, in response to a Cadaster question about the number of poor people living there, the town council responded, ‘Only eight, as everybody is devoted to the work of wool, particularly women, even the oldest ones.’

The Cadaster permits analysis of the region’s deeply gendered occupational structure. The primary sector occupied 60.0 percent and 2.9 percent, respectively, of working-age men and women. Men’s primary-sector employment was lower in towns, while women’s presence in the primary sector ranged from zero in some localities to 35.3 percent in regions where flax cultivation was important.

The service sector occupied 16.4 percent and 34.4 percent, respectively, of working men and women. Domestic service was especially important, accounting for 84.4 percent of women and 17.8 percent of men. For women, domestic service correlated with the size of the town and the number of households headed by priests. For men, service occupations were more diversified, and included transportation and retailing.

The most important results from my research involve the industrial sector. The labour-intensity of manufacturing together with the abundant supply of cheap labour, the diffusion of cottage industries, and the demand for commodities (particularly textiles) from internal and colonial markets, meant a large portion of the region’s population worked in manufactures in the eighteenth century. This sector occupied 23.6 percent of working men and 62.8 percent of working women.

Table 1. Occupational distribution of women and men, 18th century inland Spain. For sources, access full article here

The unusually high share of men in industry is explained by the recruiting practices of royal factories. Women’s stronger presence in manufacturing was explained by contemporary observers as follows:

The trade that people from La Mancha carry out, within the court, of stockings, bonnets, knitted socks, girdles and garters is from their mills. The merchant associations do not agree with this freedom.

The importance of these products has gone largely unnoticed in the academic literature, which suggests they were consumed by families. But a range of finished products made by women – stockings, lace, scarves, bonnets, knitted socks, girdles, garters, bedspreads, ribbons, and edging – were destined for the market. Householders’ declarations indicate that women’s textile work was motivated by the need to obtain food and to support the family.

According to the 1877 census, 66.1 percent of the Spanish labour force was engaged in agriculture, compared to 14.4 percent in industry. Agriculture remained the principal employer until 1930. Standard interpretations of economic growth view a large share of agricultural employment as an indicator of economic backwardness. But such analyses tend to focus only on the structural decomposition of the male labour force. By incorporating women’s work, my research develops historical analyses of structural change. My findings are consistent with recent literature showing that in many European regions non-agricultural employment followed a U-shaped curve.

La Mancha failed to industrialize in the nineteenth century. The invasion of the Napoleonic army in 1808 and the subsequent war led to widespread destruction of sheep flocks and infrastructure, abandonment of agricultural lands, inflation, and a severe demographic crisis. National and international commercial networks were disrupted, generating a substantial increase in the price of imports. Subsequently, the share of the labour force in agriculture grew. Following the mechanization of textile production, thousands of women and girls lost their jobs. Manufacturing employment fell across the country, even in the regions that industrialized. Women found fewer employment opportunities in the countryside, and many eventually moved to the cities to work in domestic service. Only by considering women workers is it possible to understand when, where, why, and how this structural change happened.

 

To contact the author: carmen.sarasua@uab.es

Recurring growth without industrialisation: occupational structures in northern Nigeria, 1921-2006

by Emiliano Travieso (University of Cambridge)

 

InStove factory workers in Nigeria. Available at <http://www.instove.org/node/59>

Despite recent decades of economic growth, absolute poverty is on the rise in Nigeria, as population increases continue to outpace the reduction in poverty rates. Manufacturing industries, which have the potential to absorb large numbers of workers into better paying jobs, have expanded only very modestly, and most workers remain employed in low productivity sectors (such as the informal urban economy and subsistence agriculture). 

This scenario is particularly stark in the northern states, which concentrate more than half of the national population and where poverty rates are at their highest. Northern Nigeria is the largest region of the most populous nation on the continent, and is itself three times as large as any other West African country; quantifying and qualifying its past economic development is therefore crucial in order to discuss the prospects for structural change and poverty alleviation in sub-Saharan Africa.

 My research traces the major shifts in the economy of northern Nigeria during and since colonial rule through a detailed study of occupational structures, based on colonial and independence-era censuses and other primary sources. 

 While the region has a long history of handicraft production – under the nineteenth-century Sokoto Caliphate it became the largest textile producer in sub-Saharan Africa – northern Nigeria deindustrialised during British indirect rule. Partially as a result of the expansion of export agriculture (mainly of groundnuts and, to a lesser extent, cotton), the share of the workforce in manufacturing decreased from 18% to 7% in the last four decades of the colonial period. 

 After independence in 1960, growth episodes were led by transport, urban services and government expenditure fuelled by oil transfers from the southeast of the country, but did not spur significant structural change in favour of manufacturing. By 2006, the share of the workforce in manufacturing had risen only slightly: to 8%. 

 In global economic history, poverty alleviation has often resulted from a previous period of systematic movement of labour from low- to high-productivity sectors. The continued expansion of manufacturing achieved just that during the Industrial Revolution in the West and, in the twentieth century, in many parts of the Global South. 

 In large Asian and Latin American economies, late industrialisation sustained impressive achievements in terms of job creation and poverty alleviation. In cases such as Brazil, Mexico and China, large domestic markets, fast urbanisation and improvements in education contributed decisively to lifting millions of people out of poverty. 

Can northern Nigeria, with its large population, deep historical manufacturing roots and access to the largest national market in Africa, develop into a late industrialiser in the twenty-first century? My study suggests that rapid demographic growth will not necessarily result in structural change, but that, through improved market integration and continued expansion of education, the economy could harness the skills and energy of its rising population to produce a more impressive expansion of manufacturing than we have yet seen. 

Cotton, industrialisation and a missing piece of the puzzle

by Alka Raman (London School of Economics)

This study was awarded the prize for the best new researcher poster at the EHS Annual Conference 2019 in Belfast. The poster can be viewed here.

 

Cotton_merchant_in_Bombay_by_Francis_Frith
Cotton merchant, taken by Francis Frith between 1850 and 1870. Available at Wikimedia Commons.

The first Industrial Revolution has long been seen as the beacon of modernity, heralding unprecedented economic growth and the biggest uplift of living standards in human history. Its prominence among themes in economic history is such that it dwarfs all others, obscuring the fact that the British cotton industry – the nucleus of industrialisation – was not the world’s first cotton manufacturing industry serving a global demand for cotton goods.

Handmade cotton fabrics were exported from India to the rest of the world as early as the twelfth century. Indeed every textbook on economic history, when charting the growth of the British cotton industry, precedes its achievements with a dutiful narration of the introduction of cotton goods into England by the English East India Company in 1699 and the ‘frenzy’ for these cottons within the domestic and overseas markets.

But a passing reference to imitations quickly gives way to an impressive series of mechanisations and illustrious British inventors associated with them. Any connection to the preceding handmade Indian product is effectively lost.

Consequently, a crucial piece of the puzzle – how the seat of cotton manufacturing went from the Indian subcontinent to the heart of England – has remained inadequately explained. Learning from pre-existing products has been mentioned, but what this learning contained, how it may have been transferred and with what kind of outcomes are concepts that have been under-explored.

Hence the question at the heart of my research: did the pre-existing, handmade and globally demanded Indian cottons influence the growth and technological trajectory of the nascent British cotton industry?

Central to my thesis is the idea that the pre-industrial Indian cotton textiles contained the material knowledge required for their successful imitation and reproduction. These handmade Indian cottons embodied the cloth quality, print, design and product finish that the machine-made goods sought to imitate. Did learning from these pre-existing market-approved products contribute to the growth of early British cotton manufacturing?

My research identifies learning from the benchmark product, as well as competition with it, as two simultaneous stimuli shaping the British cotton industry during its initial phase. In terms of methodology, the thesis tests these two stimuli against historical textual and material evidence.

The writings of manufacturers, traders and historians/commentators of the period show that both manufacturers and innovators recognised that there was a knowledge problem or a ‘skills gap’: British spinners could not spin cotton warp to match Indian hand-spun warp’s quality. Entrepreneurs identified matching the quality of Indian hand-spun warp as a key motivation for innovation. Their language of quality comparisons with reference to Indian cottons is crucial and highlights comparative quality-related learning from Indian cotton goods.

Does the material evidence corroborate this textual finding? To establish if cloth quality improved over time, I study the material evidence (surviving cotton textiles from the period) under a digital microscope and thread counter to chart the quality of these fabrics over the key decades of mechanisation. I use thread count to establish the comparative quality of the machine-made cotton fabrics vis-à-vis the handmade Indian cottons.
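As a sketch of what this charting exercise involves (the readings below are hypothetical placeholders, not measurements from the thesis):

```python
# Illustrative sketch of charting cloth quality by thread count.
# Thread count here = warp + weft threads per inch; all readings are
# hypothetical, for illustration only.
samples = {
    1746: [(40, 36), (44, 38)],   # (warp tpi, weft tpi) per surviving swatch
    1780: [(58, 50), (62, 54)],
    1820: [(80, 72), (84, 76)],
}

for year, readings in sorted(samples.items()):
    counts = [warp + weft for warp, weft in readings]
    mean = sum(counts) / len(counts)
    print(f"{year}: mean thread count ~ {mean:.0f} threads per square inch")

# A rising series like this one would indicate improving cloth quality
# across the decades of mechanisation.
```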

My findings show that early British ‘cottons’ were, in reality, mixed fabrics using linen warp and cotton weft. In addition, the results show a marked increase in cloth quality between 1746 and 1820.

Assessed together, the textual and material evidence demonstrate that mechanisation in the early British cotton industry was geared towards overcoming specific sequential quality-related bottlenecks, associated first with the ability to make the all-cotton cloth, followed by the ability to make the fine all-cotton cloth.

Imitation of benchmark Indian cottons steered the growth of the British cotton industry on a specific path of technological evolution – a trajectory that was shaped by the quest to match the quality of the handmade Indian cotton textiles.

Deindustrialisation in ‘troubled’ Belfast: evidence of the links between factory closures and sectarianism – and lessons from the community response

by Christopher Lawson (University of California, Berkeley)

This paper was presented at the EHS Annual Conference 2019 in Belfast.

 

The Shankill Road, Belfast, during the Troubles. Available at Wikimedia Commons.

My new research provides fresh insights into the relationship between industrial decline and sectarian conflict in late twentieth century Belfast, and increases our understanding of how communities respond to the loss of their economic base.

The poverty and deprivation that continues to afflict much of West Belfast is usually understood as a direct result of the sectarian ‘Troubles’ of the 1960s to 1990s, when ‘ancient’ ethnic and religious hatreds erupted and brought economic misery as investment fled.

But industrial decline actually predated the ‘Troubles’, and was a cause rather than an effect of sectarian tension. The linen industry, on which West Belfast had been built, shed tens of thousands of jobs in the 20 years following the Second World War, leading to some of the highest unemployment rates in the entire UK by the mid-1960s.

I argue that it was the social consequences of the collapse of the linen industry that made West Belfast neighbourhoods like the Shankill and the Falls such centres of conflict in the following decades. West Belfast communities were caught in a downward spiral, where unemployment and urban decay was exploited by those seeking to promote sectarian resentment, leading to violence, which in turn made the economic conditions even worse.

In addition to showing how deindustrialisation helped spur the ‘Troubles’ in West Belfast, my research also shows how new community organisations sprang up to fill the gap left by government and lead the effort to adapt to a post-industrial world.

I focus particularly on the creation of the Shankill Community Council and Ardoyne People’s Assembly, on either side of the sectarian divide in West Belfast. These organisations are usually seen as outgrowths of the Troubles, focused on defending their communities from sectarian violence, but my research shows that their primary focus was actually on re-development and reversing economic decline.

These organisations recognised that the linen industry would not be returning, and instead focused on education, daycare, skills retraining and transport linkages. In communities where more than 70% of adolescents left school without any qualifications whatsoever, improved education was essential if young people were to build meaningful lives and resist the temptation to join sectarian paramilitaries.

The emphasis on quality daycare was part of a larger effort to reduce the barriers preventing women from entering the workforce as equals to men, as community leaders recognised that the idea of the ‘male breadwinner’ was a thing of the past.

Although the progress of these organisations was slow, their efforts helped to begin the process of economic and social recovery, and they set the agenda for government support in the post-Good Friday Agreement era. The Shankill Women’s Centre, an outgrowth of the Shankill Community Council, would receive significant government support from New Labour and from the new Northern Ireland Assembly, and it continues to provide subsidised daycare in the neighbourhood.

With deindustrialisation widely recognised as a contributing factor in the UK’s 2016 vote to leave the European Union and the election of Donald Trump, it is important that we understand the serious social and cultural consequences that such dramatic economic dislocation can have.

My research helps to provide a better understanding of the role of deindustrialisation in the outbreak of sectarian violence in Northern Ireland, but also shows how bottom-up social action can make a genuine difference in the process of recovery. In this way, it provides lessons that can be applied to struggling post-industrial communities across the Western world.