Taxation and Wealth Inequality in the German Territories of the Holy Roman Empire 1350-1800

by Victoria Gierok (Nuffield College, Oxford)

This blog is part of our EHS Annual Conference 2020 Blog Series.

 

Nuremberg chronicles – Kingdoms of the Holy Roman Empire of the German Nation. Available at Wikimedia Commons.

Since the French economist Thomas Piketty published Capital in the Twenty-First Century in 2014, it has become clear that we need to understand the development of wealth and income inequality in the long run. While Piketty traces inequality over the last 200 years, other economic historians have recently begun to explore inequality in the more distant past,[1] and they report strikingly similar patterns of rising economic inequality from as early as 1450.

However, one major European region has been largely absent from the debate: Central Europe — the German cities and territories of the Holy Roman Empire. How did wealth inequality develop there? And what role did taxation play?

The Holy Roman Empire was vast, but its borders fluctuated greatly over time. As a first step to facilitating analysis, I focus on cities in the German-speaking regions. Urban wealth taxation developed early in many of the great cities, such as Cologne and Lübeck, and by the fourteenth century wealth taxes were common in many cities. They are an excellent source for getting a glimpse of wealth inequality (Caption 1).

 

Caption 1. Excerpt from the wealth tax registers of Lübeck (1774-84).

Source: Archiv der Hansestadt Lübeck. Archival reference number: 03.04-05 01.02 Johannis-Quartier: 035 Schoßbuch Johannis-Quartier 1774-1784

 

Three questions need to be clarified when using wealth tax registers as sources:

  • Who was being taxed?
  • What was being taxed?
  • How were they taxed?

 

The first question was also crucial to contemporaries, because the nobility and clergy adamantly defended the privileges that exempted them from taxation. It was citizens and city-dwellers without citizenship who mainly bore the brunt of wealth taxation.

 

Figure 1. Taxpayers in a sample of 17 cities in the German Territories of the Holy Roman Empire.

Note: In all cities, citizens were subject to wealth taxation, whereas city-dwellers were fully taxed in only about half of them.
Source: Data derived from multiple sources. For further information, please contact the author.

 

The cities’ tax codes reveal a level of sophistication that might be surprising. Not only did they tax real estate, cash and inventories, but many of them also taxed financial assets such as loans and perpetuities (Figure 2).

 

Figure 2. Taxable wealth in 19 cities in the German Territories of the Holy Roman Empire.

Note: In all cities, real estate was taxed, whereas financial assets were taxed only in 13 of them.
Source: Data derived from multiple sources. For further information, please contact the author.

 

Wealth taxation was always proportional. Many cities established wealth thresholds below which citizens were exempt from taxation, and basic provisions such as grain, clothing and armour were also often exempt. Taxpayers were asked to estimate their own wealth and to pay the correct amount of tax to the city’s tax collectors. To deter fraud, taxpayers had to declare their wealth under oath (Caption 2).

 

Caption 2. Scene from the Volkacher Salbuch (1500-1504), showing the mayor on the left, two tax collectors at a table, and a taxpayer delivering his tax payment while swearing his oath.

Source: Pausch, Alfons & Jutta Pausch, Kleine Weltgeschichte der Steuerobrigkeit (Köln: Otto Schmidt KG, 1989), p. 75.

 

Taking the above limitations seriously, one can use tax registers to trace long-run wealth inequality in cities across the Holy Roman Empire (Figure 3).

 

Figure 3. Gini Coefficients showing Wealth Inequality in the Urban Middle Ages.

Source: Alfani, G., Gierok, V., and Schaff, F., “Economic Inequality in Preindustrial Germany, ca. 1300 – 1850”. Stone Center Working Paper Series, February 2020, no. 03.
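To make the measure concrete: a Gini coefficient can be computed directly from a list of individual tax assessments. Below is a minimal Python sketch; the assessment values are invented for illustration and are not drawn from the registers.

```python
def gini(wealth):
    """Gini coefficient of a list of non-negative wealth values:
    0 = perfect equality, 1 = one household owns everything."""
    w = sorted(wealth)
    n, total = len(w), sum(w)
    if n == 0 or total == 0:
        return 0.0
    # Standard discrete formula: ranks weight the sorted values.
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * total) - (n + 1) / n

# Invented tax assessments (in gulden) for a handful of households.
assessments = [0, 10, 10, 40, 80, 250, 620]
print(round(gini(assessments), 3))  # -> 0.682, a highly unequal distribution
```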

 

Two main trends emerge. First, most cities experienced declining wealth inequality in the aftermath of the Black Death around 1350; the only exception was Rostock, an active trading city in the North. Second, from around 1500, inequality rose in most cities until the onset of the Thirty Years’ War (1618-1648). This war, during which large armies marauded through the German lands spreading plague and other diseases, together with the shift in trade from the Mediterranean to the Atlantic, may explain the decline seen in this period. This sets the German lands apart from other European regions, such as Italy and the Netherlands, where inequality continued to rise throughout the early modern period.

 

Notes

[1] Milanovic, B., Lindert, P.H., and Williamson, J., ‘Pre-Industrial Inequality’, Economic Journal 121, no. 551 (2011): 255-272; Alfani, G., ‘Economic Inequality in Northwestern Italy: A Long-Term View’, Journal of Economic History 75, no. 4 (2015): 1058-1096; Alfani, G., and Ammannati, F., ‘Long-term trends in economic inequality: the case of the Florentine state, c.1300-1800’, Economic History Review 70, no. 4 (2017): 1072-1102; Ryckbosch, W., ‘Economic Inequality and Growth before the Industrial Revolution: The Case of the Low Countries’, European Review of Economic History 20, no. 1 (2016): 1-22; Reis, J., ‘Deviant Behavior? Inequality in Portugal 1565-1770’, Cliometrica 11, no. 3 (2017): 297-319; Malinowski, M., and van Zanden, J.L., ‘Income and Its Distribution in Preindustrial Poland’, Cliometrica 11, no. 3 (2017): 375-404.

 


 

Victoria Gierok: victoria.gierok@nuffield.ox.ac.uk

 

 

 

 

Corporate Social Responsibility for workers: Pirelli (1950-1980)

by Ilaria Suffia (Università Cattolica, Milan)

This blog is part of our EHS Annual Conference 2020 Blog Series.

 

 

Pirelli headquarters in Milan’s Bicocca district. Available at Wikimedia Commons.

Corporate social responsibility (CSR) in relation to the workforce has generated extensive academic and public debate. In this paper, I evaluate Pirelli’s approach to CSR by exploring its archives over the period 1950 to 1967.

Pirelli, founded in Milan by Giovanni Battista Pirelli in 1872, introduced industrial welfare for its employees and their families from its inception. In 1950, it deepened its relationship with them by publishing ‘Fatti e Notizie’ [Events and News], the company’s in-house newspaper. The journal was intended to share information with workers at every level and, above all, to strengthen relationships within the ‘Pirelli family’.

Pirelli’s industrial welfare began in the 1870s and, by the end of the decade, a mutual aid fund and some institutions for its employees’ families (a kindergarten and a school) had been established. Over the next 20 years, the company laid the basis of its welfare policy, which encompassed three main features: a series of ‘workplace’ protections, including accident and maternity assistance; ‘family assistance’, including (in addition to the kindergarten and school) seasonal care for children; and, finally, a commitment to the professional training of its workers.

In the 1920s, the company’s welfare provision expanded. In 1926, Pirelli created a health care service for the whole family and, in the same period, sport, culture and ‘free time’ activities became the main pillars of its CSR. Pirelli also provided housing for its workers, best exemplified by the ‘Pirelli Village’ of 1921. After 1945, Pirelli continued its welfare policy: the company started a new programme of workers’ housing construction (based on national provision), expanded its Village, and founded a professional training institute dedicated to Piero Pirelli. The establishment in 1950 of the company journal, ‘Fatti e Notizie’, can be considered part of Pirelli’s welfare activities.

‘Fatti e Notizie’ was designed to improve internal communication about the company, especially among Pirelli’s workers. Subsequently, Pirelli also introduced in-house articles on current news, and special pieces on economics, law and politics. My analysis of ‘Fatti e Notizie’ demonstrates that welfare news initially occupied about 80 per cent of coverage, but its share declined after the mid-1950s, falling to 50 per cent by the late 1960s.

The welfare articles indicate that the type of communication depended on subject matter. Health care, news on colleagues, sport and culture were mainly ‘instructive’, reporting information and keeping workers up to date with events. ‘Official’ communications on subjects such as CEO reports and financial statements utilised ‘top to bottom’ articles. Cooperation, often reinforced with propaganda language, was promoted for accident prevention and workplace safety. Moreover, this kind of communication was applied to ‘bottom to top’ messages, such as an ‘ideas box’ in which workers presented their suggestions for improving production processes or safety.

My analysis shows that the communication model implemented by Pirelli moved from one of capitulation (where the managerial view prevailed) in the 1950s, to one of trivialisation (dealing only with ‘neutral’ topics) from the 1960s.

 

 

Ilaria Suffia: ilaria.suffia@unicatt.it

Infant and child mortality by socioeconomic status in early nineteenth century England

by Hannaliis Jaadla (University of Cambridge)

The full article from this blog (co-authored with E. Potter, S. Keibek, and R.J. Davenport) was published in The Economic History Review and is now available on Early View at this link

Figure 1. Thomas George Webster, ‘Sickness and health’ (1843). Source: The Wordsworth Trust, licensed under CC BY-NC-SA

Socioeconomic gradients in health and mortality are ubiquitous in modern populations. Today life expectancy is generally positively correlated with individual or ecological measures of income, educational attainment and status within national populations. However, in stark contrast to these modern patterns, there is little evidence for such pervasive advantages of wealth to survival in historical populations before the nineteenth century.

In this study, we tested whether a socioeconomic gradient in child survival was already present in early nineteenth-century England, using individual-level data on infant and child mortality for eight parishes from the Cambridge Group family reconstitution dataset (Wrigley et al. 1997). We used the paternal occupational descriptors routinely recorded in the Anglican baptism registers for the period 1813–1837 to compare infant (under 1) and early childhood (age 1–4) mortality by social status. To capture differences in survivorship we compared multiple measures of status: HISCAM, HISCLASS, and a continuous measure of wealth estimated by ranking paternal occupations by the propensity for their movable wealth to be inventoried upon death (Keibek 2017). The main analytical tool was event history analysis, in which individuals were followed from baptism or birth through the first five years of life, or until death or until they left the sample for other reasons.
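The event-history approach can be illustrated with a small sketch. The snippet below fits a Cox proportional hazards model with the lifelines library; the dataframe, column names, and values are hypothetical stand-ins for the reconstitution data, not the article’s actual code or covariate set.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical reconstitution extract: one row per child, followed from
# birth/baptism until death or censoring (survival to age 5, or exit
# from observation for other reasons).
children = pd.DataFrame({
    "months_observed": [3, 60, 14, 60, 27, 41],   # follow-up time
    "died":            [1, 0, 1, 0, 1, 0],        # 1 = death, 0 = censored
    "hiscam":          [52.1, 76.4, 48.9, 63.0, 58.2, 50.2],  # paternal status
    "birth_interval":  [18, 30, 12, 36, 24, 20],  # months since previous birth
})

cph = CoxPHFitter()
cph.fit(children, duration_col="months_observed", event_col="died")
cph.print_summary()  # hazard ratios for the status and birth-interval covariates
```

A hazard ratio below one on the status measure would indicate that higher paternal status lowered the risk of death at a given age, which is the kind of gradient the study tests for.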

Were socioeconomic differentials in mortality present in the English population by the early nineteenth century, as suggested by theorists of historical social inequalities (Antonovsky 1967; Kunitz 1987)? Our results provide a qualified yes. We did detect differentials in child survival by paternal or household wealth in the first five years of life. However, the effects of wealth were muted and non-linear. Instead we found a U-shaped relationship between paternal social status and survival, with the children of poor labourers and of wealthier fathers enjoying relatively high survival chances. Socioeconomic differentials emerged only after the first year of life (when mortality rates were highest), and were strongest at age one. Summed over the first five years of life, however, the advantages of wealth were marginal. Furthermore, they were only observed once the anomalously low mortality of labourers’ children was taken into account.

As might be expected, these results provide evidence for the contribution of both environment and household or familial factors. In infancy, mortality varied between parishes; however, the environmental hazards associated with industrialising or urban settlements appear to have operated fairly equally on households of differing socioeconomic status. It is likely that most infants in our eight reconstitution parishes were breastfed throughout the first year of life, which probably conferred a ubiquitous advantage that overwhelmed other material differences in household conditions, for example maternal nutrition.

To the extent that wealth conferred a survival advantage, did it operate through access to information, or to material resources? There was no evidence that literacy was important to child survival. However, our results suggest that cultural practices surrounding weaning may have been key. This was indicated by the peculiar age pattern of the socioeconomic gradient in survival, which was strongest in the second year of life, the year in which most children were weaned. We also found a marked survival advantage of longer birth intervals post-infancy, and this advantage accrued particularly to labourers’ children, because their mothers had longer than average birth intervals.

Our findings point to the importance of breastfeeding patterns in modulating the influence of socioeconomic status on infant and child survival. Breastfeeding practices varied enormously in historical populations, both geographically and by social status (Thorvaldsen 2008). These variations, together with the differential sorting of social groups into relatively healthy or unhealthy environments, probably explains the difficulty in pinpointing the emergence of socioeconomic gradients in survival, especially in infancy.

At ages 1–4 years we were able to demonstrate that the advantages of wealth and of a labouring father operated even at the level of individual parishes. That is, these advantages were not simply a function of the sorting of classes or occupations into different environments. These findings therefore implicate differences in household practices and conditions in the survival of children in our sample. This was clearest in the case of labourers. Labourers’ children enjoyed higher survival rates than predicted by household wealth, and this was associated with longer birth intervals (consistent with longer breastfeeding), as well as other factors that we could not identify, but which were probably not a function of rural isolation within parishes. Why labouring households should have differed in these ways remains unexplained.

To contact the author: hj309@cam.ac.uk

References

Antonovsky, A., ‘Social class, life expectancy and overall mortality’, Milbank Memorial Fund Quarterly, 45 (1967), pp. 31–73.

Keibek, S. A. J., ‘The male occupational structure of England and Wales, 1650–1850’, (unpub. Ph.D. thesis, Univ. of Cambridge, 2017).

Kunitz, S.J., ‘Making a long story short: a note on men’s height and mortality in England from the first through the nineteenth centuries’, Medical History, 31 (1987), pp. 269–80.

Thorvaldsen, G., ‘Was there a European breastfeeding pattern?’ History of the Family, 13 (2008), pp. 283–95.

Land, Ladies, and the Law: A Case Study on Women’s Land Rights and Welfare in Southeast Asia in the Nineteenth Century

by Thanyaporn Chankrajang and Jessica Vechbanyongratana (Chulalongkorn University)

The full article from this blog is forthcoming in The Economic History Review

 

Security of land rights empowers women with greater decision-making power (Doss, 2013), potentially affecting both land-related investment decisions and the allocation of goods within households (Allendorf, 2007; Goldstein et al., 2008; Menon et al., 2017). In historical contexts where land was the main factor of production for most economic activities, little is known about women’s land entitlements. Historical gender-disaggregated landownership data are scarce, making quantitative investigations of the past challenging. In new research, we overcome this problem by analyzing rare, gender-disaggregated, historical land rights records to determine the extent of women’s land rights, and their implications, in nineteenth-century Bangkok.

First, we utilized orchard land deeds issued in Bangkok during the 1880s (Figure 1). These deeds were both landownership and tax documents. Land tax was assessed based on the enumeration of mature orchard trees producing high-value fruits, such as areca nuts, mangoes and durian. From 9,018 surviving orchard deeds, we find that 82 per cent of Bangkok orchards listed at least one woman as an owner, indicating that women did possess de jure usufruct land rights under the traditional land rights system. By analyzing the number of trees cultivated on each property (proxied by tax per hectare), we find these rights were upheld in practice and incentivized agricultural productivity. Controlling for owner and plot characteristics, plots with only female owners cultivated on average 6.7 per cent more trees per hectare than plots with mixed-gender ownership, while male-owned plots cultivated 6.7 per cent fewer trees per hectare than mixed-gender plots. The evidence indicates higher levels of investment in cash crop cultivation among female landowners.

Figure 1. An 1880s Government Copy of an Orchard Land Deed Issued to Two Women. Source: Department of Lands Museum, Ministry of Interior.
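The ownership comparison described above is, in essence, a regression of log tree density on ownership type plus controls. The sketch below shows one way such a specification might look in Python with statsmodels; the dataframe and variable names are illustrative assumptions, not the authors’ data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical deed-level records: tax per hectare proxies tree density.
deeds = pd.DataFrame({
    "tax_per_ha": [12.0, 14.5, 9.8, 11.2, 13.1, 10.4, 12.8, 9.5],
    "ownership":  ["female", "mixed", "male", "male",
                   "female", "mixed", "female", "male"],
    "plot_ha":    [0.8, 1.2, 0.9, 1.5, 0.7, 1.1, 1.0, 1.3],
})

# Log outcome, so coefficients read as approximate percentage differences
# relative to the omitted mixed-gender baseline category.
model = smf.ols(
    "np.log(tax_per_ha) ~ C(ownership, Treatment('mixed')) + plot_ha",
    data=deeds,
).fit()
print(model.summary())
```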

 

The second part of our analysis assesses 217 land-related court cases to determine whether women’s land rights in Bangkok were protected from the late nineteenth century onward, when land disputes increased. We find that ‘commoner’ women acted as both plaintiffs and defendants, and were able to win cases even against politically powerful men. Such secure land rights helped preserve women’s livelihoods.

Finally, based on an internationally comparable welfare estimation (Allen et al. 2011; Cha, 2015), we calculate an equivalent measure of a ‘bare bones’ consumption basket. We find that the median woman-owned orchard could annually support up to 10 adults. Recognizing women’s contributions to family income (Table 1), Bangkok’s welfare ratio was as high as 1.66 for the median household, demonstrating a larger household surplus than in Japan, and comparable to those of Beijing and Milan during the same period (Allen et al. 2011).

Superficially, our findings seem to contradict historical and contemporary observations that land rights structures favor men (Doepke et al., 2012). However, our study typifies women’s economic empowerment in Thailand and Southeast Asia more generally. Since at least the early modern period, women in Southeast Asia have possessed relatively high social status and autonomy in marriage and family, literacy and literature, diplomacy and politics, and economic activities (Hirschman, 2017; Adulyapichet, 2001; Baker and Phongpaichit, 2017). The evidence we provide supports this interpretation, and is consonant with other Southeast Asian land-related features, such as matrilocality and matrilineage (Huntrakul, 2003).

 

Table 1 (reproduced in the full article).

To contact the authors:

Thanyaporn Chankrajang, Thanyaporn.C@chula.ac.th

Jessica Vechbanyongratana, ajarn.jessica@gmail.com
@j_vechbany

 

References

Adulyapichet, A., ‘Status and roles of Siamese women and men in the past: a case study from Khun Chang Khun Phan’ (thesis, Silpakorn Univ., 2001).

Allen, R. C., Bassino, J. P., Ma, D., Moll‐Murata, C., and Van Zanden, J. L. ‘Wages, prices, and living standards in China, 1738–1925: in comparison with Europe, Japan, and India’, Economic History Review, 64 (2011), pp. 8-38.

Allendorf, K., ‘Do women’s land rights promote empowerment and child health in Nepal?’,  World development, 35 (2007), pp. 1975-88.

Baker, C., and Phongpaichit, P., A history of Ayutthaya: Siam in the early modern world (Cambridge, 2017).

Cha, M. S. ‘Unskilled wage gaps within the Japanese Empire’, Economic History Review, 68 (2015), pp. 23-47.

Chankrajang, T. and Vechbanyongratana, J. ‘Canals and orchards: the impact of transport network access on agricultural productivity in nineteenth-century Bangkok’, Journal of Economic History, forthcoming.

Chankrajang, T. and Vechbanyongratana, J. ‘Land, ladies, and the law: a case study on women’s land rights and welfare in Southeast Asia in the nineteenth century’, Economic History Review, forthcoming.

Doepke, M., Tertilt, M., and Voena, A., ‘The economics and politics of women’s rights’, Annual Review of Economics, 4 (2012), pp. 339-72.

Doss, C., ‘Intrahousehold bargaining and resource allocation in developing countries’, World Bank Research Observer 28 (2013), pp.52-78.

Goldstein, M., and Udry, C., ‘The profits of power: land rights and agricultural investment in Ghana’, Journal of Political Economy, 116 (2008), pp. 981-1022.

Hirschman, C. ‘Gender, the status of women, and family structure in Malaysia’, Malaysian Journal of Economic Studies 53 (2017), pp. 33-50.

Huntrakul, P., ‘Thai women in the three seals code: from matriarchy to subordination’, Journal of Letters, 32 (2003), pp. 246-99.

Menon, N., van der Meulen Rodgers, Y., and Kennedy, A. R., ‘Land reform and welfare in Vietnam: why gender of the land‐rights holder matters’, Journal of International Development, 29 (2017), pp. 454-72.

 

The Great Indian Earthquake: colonialism, politics and nationalism in 1934

by Tirthankar Ghosh (Department of History, Kazi Nazrul University, Asansol, India)

This blog is part of our EHS Annual Conference 2020 Blog Series.

 

Gandhi in Bihar after the 1934 Nepal–Bihar earthquake. Available at Wikipedia.

The Great Indian earthquake of 1934 gave new life to nationalist politics in India. The colonial state, too, had to devise new tools to deal with the devastation caused by the disaster. But the post-disaster settlements became a site of contestation between government and non-governmental agencies.

In this earthquake, thousands of lives were lost, houses were destroyed, crops and agricultural fields were devastated, towns and villages were ruined, bridges and railway tracks were warped, and drainage and water sources were disrupted across a vast area of Bihar.

The multi-layered relief works, which included official and governmental measures, the involvement of organised party leaderships and political workers, and voluntary private donations and contributions from several non-political and charitable organisations, had to accommodate several contradictory forces and elements.

Although it is sometimes argued that the main objective of these relief works was to gain ‘political capital’ and ‘goodwill’, the mobilisation of funds, sympathy and fellow feeling should not be underestimated. A whole range of new nationalist politics emerged from the ruins of the disaster, mobilising a great amount of popular engagement, political energy, and public subscriptions. The colonial state had to release prominent political leaders, who contributed substantially to the relief operations.

Now the question is: was there any contestation or competition between the government and non-governmental agencies in the sphere of relief and reconstruction? Or did the disaster temporarily redefine the relationship between the state and subjects during the period of anti-colonial movement?

While the government had to embark on relief operations without a proper idea of the depth of people’s suffering, the political organisations, charged with sympathy and nationalism, performed the task with greater organisational skill and dedication.

India witnessed its largest political involvement in a non-political agenda to date: public involvement and support not only compensated for the administrative deficit, but also expressed a shared sense of victimhood. Non-governmental organisations, such as the Ramakrishna Mission and the Marwari Relief Society, also played a leading role in the relief operations.

The 1934 earthquake drew on massive popular sentiment, much as the Bhuj earthquake of 2001 would. In the long run, the disaster prompted the state to introduce the concept of public safety, hitherto unknown in India, along with a whole new set of earthquake-resistant building codes and modern urban planning using the latest technologies.

Real urban wage in an agricultural economy without landless farmers: Serbia, 1862-1910

by Branko Milanović (City University New York and LSE)

This blog is based on an article forthcoming in The Economic History Review

Railway construction workers, ca.1900.

Calculations of historical welfare ratios (wages expressed in relation to the subsistence needs of a wage-earner’s family) exist for many countries and time periods. The original methodology was developed by Robert Allen (2001). The objective of real wage studies is not only to estimate real wages but to assess living standards before the advent of national accounts. This methodology has been employed to address key questions in economic history: income divergence between Northern Europe and China (Li and van Zanden, 2012; Allen, Bassino, Ma, Moll-Murata, and van Zanden, 2011); the “Little Divergence” (Pamuk 2007); the development of North v. South America (Allen, Murphy and Schneider, 2012); and even the causes of the Industrial Revolution (Allen 2009; Humphries 2011; Stephenson 2018, 2019).

We apply this methodology to Serbia between 1862 and 1910, to consider real incomes in an economy characterised by small, peasant-owned farms and backward agricultural technology. Further, we develop debates on North v. South European divergence by focusing on Serbia, a South-East European country, in contrast to previous studies, which focus on Mediterranean countries (Pamuk 2007; Losa and Zarauz, forthcoming). This approach allows us to formulate a hypothesis regarding the social determination of wages.

Using Serbian wage and price data from 1862 to 1910, we calculate welfare ratios for unskilled (ordinary) and skilled (construction) urban workers. We use two different baskets of goods for wage comparison: a ‘subsistence’ basket that includes a very austere diet, clothing and housing needs, but no alcohol, and a ‘respectability’ basket, composed of a greater quantity and variety of goods, including alcohol. We modify some of the usual assumptions found in the literature to better reflect the economic and demographic conditions of Serbia in the second half of the nineteenth century. Based on contemporary sources, we assume that the ‘work year’ was 200 days, not 250, and that the average family size was six, not four. Both assumptions reduce the level of the welfare ratio but do not affect its evolution.
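The welfare ratio itself is a simple calculation: annual earnings divided by the annual cost of keeping the household at the basket. A minimal sketch follows, using the article’s 200-day work year and six-person household but invented wage and basket figures, and ignoring refinements such as adult-equivalence scales and rent.

```python
def welfare_ratio(day_wage, work_days, basket_cost, household_size):
    """Annual earnings relative to the annual cost of the consumption
    basket for the whole household. A ratio of 1.0 means the wage just
    covers the household's basket; below 1.0 it falls short."""
    return (day_wage * work_days) / (basket_cost * household_size)

# Invented figures: a daily wage of 1.5 dinars and a subsistence basket
# costing 45 dinars per person per year.
print(welfare_ratio(day_wage=1.5, work_days=200,
                    basket_cost=45, household_size=6))  # -> about 1.11
```

The ‘respectability’ variant simply swaps in the higher cost of that basket for basket_cost.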

We find that the urban wage of unskilled workers was, on average, about 50 per cent higher than the cost of the family’s subsistence basket (Figure 1), and remained broadly constant throughout the period. This result confirms the absence of modern economic growth in Serbia (at least as far as the low-income population is concerned), and indicates economic divergence between South-East and Western Europe. Serbia diverged from Western Europe’s standard of living during the second half of the 19th century: in 1860 the welfare ratio in London was about three times higher than in urban Serbia, but by 1907 this gap had widened to more than five to one (Figure 1).

Figure 1. Welfare ratio (using subsistence basket), urban Serbia 1862-1910. Note: Under the assumptions of 200 working days per year, household size of 6, and inclusive of the daily food and wine allowance provided by the employer. Source: as per article.

 

In contrast, the welfare ratio of skilled construction workers was between 20 and 30 per cent higher in the 1900s than in the 1860s (Figure 1). This trend reflects modest economic progress as well as an increase in the skill premium, which has also been observed for Ottoman Turkey (Pamuk 2016).

The wages of ordinary workers appear to move more closely with the cost of the ‘subsistence’ basket, whereas the wages of skilled (construction) workers seem to vary with the cost of the ‘respectability’ basket. This leads us to hypothesize that the wages of the two groups were implicitly “indexed” to different baskets, reflecting the different value of the work done by each group.

Our results provide further insights into economic conditions in the nineteenth-century Balkans, and raise searching questions about the assumptions used in Allen-inspired work on real wages. The standard assumptions of 250 days’ work per annum and a ‘typical’ family size of four may be undesirable for comparative purposes. The ultimate objective of real wage/welfare ratio studies is to provide more accurate assessments of real incomes between countries. Consequently, the assumptions underlying welfare ratios need to be country-specific.

To contact the author: bmilanovic@gc.cuny.edu

https://twitter.com/BrankoMilan

 

REFERENCES

Allen, Robert C. (2001), “The Great Divergence in European Wages and Prices from the Middle Ages to the First World War“, Explorations in Economic History, October.

Allen, Robert C. (2009), The British Industrial Revolution in Global Perspective, New Approaches to Economic and Social History, Cambridge.

Allen, Robert C., Jean-Pascal Bassino, Debin Ma, Christine Moll-Murata and Jan Luiten van Zanden (2011), “Wages, prices, and living standards in China, 1738-1925: in comparison with Europe, Japan, and India”, Economic History Review, vol. 64, pp. 8-38.

Allen, Robert C., Tommy E. Murphy and Eric B. Schneider (2012), “The colonial origins of the divergence in the Americas: A labor market approach”, Journal of Economic History, vol. 72, no. 4, December.

Humphries, Jane (2011), “The Lure of Aggregates and the Pitfalls of the Patriarchal Perspective: A Critique of the High-Wage Economy Interpretation of the British Industrial Revolution”, Discussion Papers in Economic and Social History, University of Oxford, No. 91.

Li, Bozhong and Jan Luiten van Zanden (2012), “Before the Great Divergence: Comparing the Yangzi delta and the Netherlands at the beginning of the nineteenth century”, Journal of Economic History, vol. 72, No. 4, pp. 956-989.

Losa, Ernesto Lopez and Santiago Piquero Zarauz, “Spanish Subsistence Wages and the Little Divergence in Europe, 1500-1800”, European Review of Economic History, forthcoming.

Pamuk, Şevket (2007), “The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600”, European Review of Economic History, vol. 11, 2007, pp. 280-317.

Pamuk, Şevket (2016),  “Economic Growth in Southeastern Europe and Eastern Mediterranean, 1820-1914”, Economic Alternatives, No. 3.

Stephenson, Judy Z. (2018), “‘Real’ wages? Contractors, workers, and pay in London building trades, 1650–1800”, Economic History Review, vol. 71 (1), pp. 106-132.

Stephenson, Judy Z. (2019), “Working days in a London construction team in the eighteenth century: evidence from St Paul’s Cathedral”, The Economic History Review, published 18 September 2019. https://onlinelibrary.wiley.com/doi/abs/10.1111/ehr.12883.

 

 

The South Sea Bubble 300 Years On

by William Quinn (Queen’s University, Belfast)

A special issue on the tricentenary of the South Sea Bubble was published in The Economic History Review as open access, and is available at this link

Edward Matthew Ward (1847) The South Sea Bubble, a Scene in ‘Change Alley in 1720. Available at Tate Gallery

In 1720, the British Parliament approved a proposal from the South Sea Company to manage the government’s outstanding debt. The Company agreed to issue shares, some of which would be bought using government annuities rather than cash. The Company would then pay the government a reduced rate of interest on these annuities. The government’s debt burden would be reduced, and in exchange, the Company believed it had gained the opportunity to establish itself as a competitor to the Bank of England (Kleer, 2012).

Superficially, the scheme didn’t make much sense. How would the public be convinced to exchange lucrative government annuities for equity in a company whose main asset was a reduced rate of interest on those annuities? The trick was to lure annuity holders with the promise of capital gains on South Sea shares. Consequently, the Company’s directors, with the implicit support of the government, engineered a bubble, primarily by creating a liquid secondary market for their shares and then extending huge amounts of credit to investors to flood the market with cash (Dickson, 1967). This strategy was too successful: the scale of the bubble provoked a backlash that ruined the South Sea directors (Kleer, 2015).
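The arithmetic of the conversion helps explain why a high share price was essential. In simplified form, the Company could create new nominal stock in proportion to the debt it converted; the higher its shares traded above par, the fewer it had to hand to annuitants, leaving surplus stock to sell for cash. The figures below are invented purely to illustrate the mechanism (see Dickson, 1967, for the actual terms).

```python
# Stylised illustration of the conversion profit (all figures invented).
debt_converted = 1_200    # nominal value of annuities taken in (pounds)
price_pct_of_par = 300    # market price of stock, as a percentage of par

stock_authorised = debt_converted                       # nominal stock the Company may create
stock_to_annuitants = debt_converted / (price_pct_of_par / 100)
surplus_stock = stock_authorised - stock_to_annuitants  # nominal stock left over

# Market value of the surplus stock the Company could sell for cash.
print(surplus_stock * price_pct_of_par / 100)           # -> 2400.0
```

The higher the share price, the larger this surplus, which is why engineering demand for the shares was central to the scheme.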

Figure 1. South Sea Company Share Price (£) and Subscriptions, 1719-20. Source: European State Finance Database

Almost as interesting as the scheme itself is how the memory of this event evolved.  A century later, the scheme was recounted as a sorry episode in the nation’s history, an economic disaster never to be repeated (Anderson, 1801). In the mid-nineteenth century it was remembered as an outbreak of collective madness, a cautionary tale for ordinary people on the dangers of being caught up in a speculative frenzy (Mackay, 1852). More recently, the Bubble  has been used as a case study to assess the efficiency of financial markets (Dale et al., 2005, 2007; Shea, 2007).

But how should it be remembered? None of the available data suggests that 1720 was in any way an economic disaster, which is unsurprising, since participation in the scheme was much too low to have had systemic economic effects (Hoppit, 2002). Others have suggested that the Bubble Act, which accompanied the bubble, hamstrung British finance for the next century. But Harris (1994, 1997) has shown that much of what the Bubble Act outlawed had already been illegal and, as a result, the Act was almost never invoked.

Remembering 1720 as a sudden outbreak of madness would let the government off the hook: the bubble did not emerge spontaneously, but was deliberately created (Dickson, 1967). The level of political involvement in the market also makes it an unsuitable test case for the efficient markets hypothesis, and in any case, the structure of stock markets in 1720 was so radically different from today’s that they are unlikely to tell us much about the efficiency of modern markets.

Perhaps, then, the most significant feature of the South Sea scheme was its success. Prior to 1720, Britain’s debt burden was an existential threat, as it kept interest rates high, making it very expensive to fund warfare. The South Sea conversion scheme significantly reduced this burden. In France, the unwinding of the Mississippi scheme led to the reinstatement of  debt at its pre-1720 level (Velde, 2006). But in the aftermath of the South Sea scheme, the British government managed to sustain the improvement in its debt position, largely by redirecting the anger of ruined investors towards the scapegoated South Sea directors (Quinn and Turner, 2020). This allowed it to borrow at much lower interest rates, giving the country a major advantage in subsequent wars. After 300 years, is it time to start remembering the South Sea Bubble as a net positive for Britain?

 

To contact the author: W.Quinn@qub.ac.uk

 

References

Anderson, A. ‘An extract from The Origin of Commerce (1801)’ in R.B. Emmett (ed.), Great Bubbles Volume 3, London: Pickering and Chatto, 2000.

Dale, R.S., Johnson, J.E.V., and Tang, L. ‘Financial markets can go mad: Evidence of irrational behaviour during the South Sea Bubble’, Economic History Review, 58, 233-71, 2005.

Dale, R.S., Johnson, J.E.V., and Tang, L. ‘Pitfalls in the quest for South Sea rationality’, Economic History Review, 60, 766-772, 2007.

Dickson, P.G.M. The Financial Revolution in England: A Study in the Development of Public Credit, 1688-1756. London: Macmillan, 1967.

Harris, R. ‘The Bubble Act: Its passage and its effects on business organization’, Journal of Economic History, 54, 610-27, 1994.

Harris, R. ‘Political economy, interest groups, legal institution, and the repeal of the Bubble Act in 1825’, Economic History Review, 50, 675-96, 1997.

Hoppit, J. ‘The myths of the South Sea Bubble’, Transactions of the Royal Historical Society, 12, 141-65, 2002.

Kleer, R. ‘“The folly of particulars”: The political economy of the South Sea Bubble’, Financial History Review, 19, 175-97, 2012.

Kleer, R. A. ‘Riding a wave: The Company’s role in the South Sea Bubble’, Economic History Review, 68, 264-85, 2015.

Mackay, C. Memoirs of Extraordinary Popular Delusions and the Madness of Crowds, London: Robson, Levey and Franklin, 2nd edition, 1852.

Quinn, W. and Turner, J.D. Boom and Bust: A Global History of Financial Bubbles, Cambridge: Cambridge University Press, 2020.

Shea, G. S. ‘Financial market analysis can go mad (in the search for irrational behaviour during the South Sea Bubble)’, Economic History Review, 60, 742-65, 2007.

Velde, F. ‘John Law’s System and Public Finance in 18th c. France.’ Federal Reserve Bank of Chicago, 2006.

 

How JP Morgan Picked Winners and Losers in the Panic of 1907: The Importance of Individuals over Institutions

by Jon Moen (University of Mississippi) & Mary Rodgers (SUNY, Oswego).

This blog is part of our EHS 2020 Annual Conference Blog Series.

 

A cartoon on the cover of Puck Magazine, from 1910, titled: ‘The Central Bank – Why should Uncle Sam establish one, when Uncle Pierpont is already on the job?’. Available at Wikimedia Commons.

 

We study J. P. Morgan’s decision-making during the Panic of 1907 and find insights for understanding the outcomes of current financial crises. Morgan relied as much on his personal experience as on formal institutions like the New York Clearing House when deciding how to combat the Panic. Our main conclusion is that lenders may rely on their past experience during a crisis, rather than on institutional and legal arrangements, in formulating a response. The existence of sophisticated and powerful institutions like the Bank of England or the Federal Reserve System may not guarantee optimal policy responses if leaders make their decisions on the basis of personal experience rather than well-established guidelines. The result will be decisions that yield sub-par outcomes for society compared with those that formal procedures and data-based decision-making would have produced.

Morgan’s influence in arresting the Panic of 1907 is widely acknowledged. In the absence of a formal lender of last resort in the United States, he personally determined which financial institutions to save and which to let fail in New York. Morgan had two sources of information about the distressed firms: (1) the analysis of six committees of financial experts he assigned to estimate the firms’ solvency, and (2) decades of personal experience working with those same institutions and their leaders in his investment banking underwriting syndicates. Morgan’s decisions to provide or withhold aid appear to track more closely with his prior syndicate experience with each banker than with the recommendations of the committees’ analysis of available data. Crucially, he chose to let the Knickerbocker Trust fail despite one committee’s estimate that it was solvent and another’s that it had too little time to make a strong recommendation. Morgan had had a very bad business experience with the Knickerbocker and its president, Charles Barney, but he had had positive experiences with all the other firms requesting aid. Had the Knickerbocker been aided, the panic might have been avoided altogether.

The lesson we draw for present-day policy is that the individuals responsible for crisis resolution bring to the table policies based on personal experience, which will influence the resolution in ways that may not have been expected a priori. Their policies might not be consistent with the general well-being of the financial markets involved, as may have been the case when Morgan let the Knickerbocker fail. A recent example that echoes Morgan’s experience in 1907 can be seen in the leadership of Ben Bernanke, Timothy Geithner and Henry Paulson during the financial crisis of 2008. They had a formal lender of last resort, the Federal Reserve System, to guide their response, and they may have had the well-being of financial markets more in the forefront of their decision-making from the start. Yet controversy still surrounds the failure of Lehman Brothers and the Federal Reserve’s refusal to provide it with a lifeline. The Federal Reserve could have provided aid, which shows that the individuals making the decisions, and not the mere existence of a lender of last resort and the analysis such an institution can muster, can greatly affect the course of a financial crisis. Reliance on personal experience at the expense of institutional arrangements is clearly not limited to responses to financial crises; the coronavirus epidemic is one example worth examining within this framework.

 


Jon Moen – jmoen@olemiss.edu

Early-life disease exposure and occupational status

by Martin Saavedra (Oberlin College and Conservatory)

This blog is part H of the Economic History Society’s blog series: ‘The Long View on Epidemics, Disease and Public Health: Research from Economic History’. This blog is based on the article ‘Early-life disease exposure and occupational status: The impact of yellow fever during the 19th century’, in Explorations in Economic History, 64 (2017): 62-81. https://doi.org/10.1016/j.eeh.2017.01.003   

 

A girl suffering from yellow fever. Watercolour. Available at Wellcome Images.

Epidemics, like other shocks to public health, have the potential to affect human capital accumulation. A literature in health economics known as the ‘fetal origins hypothesis’ has examined how in utero exposure to infectious disease affects labor market outcomes. Individuals may be more sensitive to health shocks during the developmental stage of life than during later stages of childhood. For good reason, much of this literature focuses on the 1918 influenza pandemic, which was a huge shock to mortality and one of the few events visible in life expectancy trends in the United States. However, there are limitations to studying the 1918 influenza pandemic because it coincided with the First World War. Another complication in this literature is that cities with outbreaks of infectious disease often engaged in many forms of social distancing by closing schools and businesses. This is true of the 1918 influenza pandemic, but also of other diseases: for example, many schools were closed during the polio epidemic of 1916.

So, how can we estimate the long-run effects of infectious disease when cities simultaneously respond to outbreaks? One possibility is to look at a disease that differentially affected some groups within the same city, such as yellow fever during the nineteenth century. Yellow fever is a viral infection spread by the Aedes aegypti mosquito and is still endemic in parts of Africa and South America. The disease kills roughly 50,000 people per year, even though a vaccine has existed for decades. Symptoms include fever, muscle pain, chills, and jaundice, from which the disease derives its name.

During the eighteenth and nineteenth centuries, yellow fever plagued American cities, particularly port cities that traded with the Caribbean islands. In 1793, over 5,000 Philadelphians likely died of yellow fever. This would be a devastating number in any city, even by today’s standards, but it is even more so considering that in 1790 Philadelphia had a population of less than 29,000.

By the mid-nineteenth century, Southern port cities grew, and yellow fever stopped occurring in cities as far north as Philadelphia. The graph below displays the number of yellow fever fatalities in four southern port cities — New Orleans, LA; Mobile, AL; Charleston, SC; and Norfolk, VA — during the nineteenth century. Yellow fever was sporadic, devastating a city in one year and often leaving it untouched the next. For example, yellow fever killed nearly 8,000 New Orleanians in 1853, and over 2,000 in both 1854 and 1855. In the next two years, yellow fever killed fewer than 200 New Orleanians per year; then it came back, killing over 3,500 in 1858. Norfolk, VA was struck only once, in 1855. Because yellow fever never struck Norfolk in milder years, the population lacked immunity, and approximately 10 percent of the city died in 1855. Charleston and Mobile show similarly sporadic patterns. Likely due to the Union’s naval blockade, yellow fever did not visit any American port city in large numbers during the Civil War.

 

Figure: Yellow fever fatalities in New Orleans, Mobile, Charleston and Norfolk, nineteenth century.
Source: As per original article.

 

Immigrants were particularly prone to yellow fever because they often came from European countries rarely visited by the disease. Native New Orleanians, however, typically caught yellow fever during a mild year as children and were then immune for the rest of their lives. For this reason, yellow fever earned the name ‘the stranger’s disease’.

Data from the full count of the 1880 census show that the yellow fever fatality rate during an individual’s year of birth negatively affected adult occupational status, but only for individuals with foreign-born mothers. Those with US-born mothers were relatively unaffected by the disease. There are also effects for those exposed to yellow fever one or two years after birth, but there are no effects, not even for those with immigrant mothers, of exposure three or four years after birth. These results suggest that early-life exposure to infectious disease, and not just city-wide responses to disease, influenced human capital development.
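The identification strategy here compares children of immigrant and native mothers within the same cities, which in a regression framework amounts to interacting birth-year exposure with the mother’s nativity. The sketch below illustrates the idea; the dataframe, variable names, and values are hypothetical, not the article’s code or data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract from the 1880 full-count census.
adults = pd.DataFrame({
    "occ_score":       [22.0, 31.5, 18.2, 27.9, 24.3, 30.1, 19.8, 26.5, 23.7],
    "yf_rate_birthyr": [4.1, 0.2, 6.3, 0.2, 4.1, 0.5, 6.3, 0.5, 0.2],  # per 1,000
    "foreign_mother":  [1, 0, 1, 0, 0, 1, 1, 0, 1],
    "city":            ["NO", "NO", "MOB", "MOB", "CHS",
                        "CHS", "NO", "MOB", "CHS"],
})

# The interaction coefficient captures the differential effect of birth-year
# yellow fever exposure on children of foreign-born mothers, net of city
# fixed effects.
model = smf.ols(
    "occ_score ~ yf_rate_birthyr * foreign_mother + C(city)",
    data=adults,
).fit()
print(model.params)
```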

 


 

Martin Saavedra

Martin.Saavedra@oberlin.edu

 

Poverty or Prosperity in Northern India? New Evidence on Real Wages, 1590s-1870s

by Pim de Zwart (Wageningen University) and Jan Lucassen (International Institute of Social History, Amsterdam)

The full article from this blog was published in The Economic History Review and is now available open access on Early View at this link

 

At the end of the sixteenth century, the Indian subcontinent, largely unified under the Mughals, was one of the most developed parts of the global economy, with relatively high incomes and a thriving manufacturing sector. Over the centuries that followed, however, incomes declined and India deindustrialized. The precise timing and causes of this decline remain the subject of academic debate about the Great Divergence between Europe and Asia. Whereas some scholars have depicted the eighteenth century in India as a period of economic growth and comparatively high living standards, others have suggested it was an era of decline and relatively low incomes. The evidence on which these contributions have been based is rather thin, however. In our paper, we add quantitative and qualitative data from numerous British and Dutch archival sources on the development of real wages and the functioning of the northern Indian labour market between the late sixteenth and late nineteenth centuries.

In particular, we introduce a new dataset with over 7,500 observations on wages across various towns in northern India (Figure 1). The data pertain to the income earned in a wide range of occupations, from unskilled urban workers and farm servants to skilled craftsmen and bookkeepers, and for adult men, women and children. All these wage observations were coded following the HISCLASS scheme, which allows us to compare trends in wages between groups of workers. The wage database provides information about the incomes of an important body of workers in northern India. There was little slavery and serfdom in India, and wage labour was relatively widespread. There was a functioning free labour market in which European companies enjoyed no clearly privileged position. The data obtained for India can therefore be viewed as comparable to those gathered for many European cities, in which the wages of construction workers were often paid by large institutions.

Figure 1. Map of India and regional distribution of the wage data. Source: as per article

We calculated the value of the wage relative to a subsistence basket of goods. We made further adjustments to the real wage methodology by incorporating information about climate, regional consumption patterns, average heights, and BMI, to calculate the subsistence cost of living more accurately. Comparing the computed real wage ratios for northern India with those prevailing in other parts of Eurasia leads to a number of important insights (Figure 2). Our data suggest that the Great Divergence between Europe and India happened relatively early, from the late seventeenth century. The downward trend that began in the late seventeenth century persisted, and wage labourers saw their purchasing power diminish until the devastating Bengal famine of 1769-1770. Given this evidence, it is difficult to view the eighteenth century as a period of generally rising prosperity across northern India. While British colonialism may have reduced growth in the nineteenth century — pretensions about the superiority of European administration and the virtues of the free market may have had long-lasting negative consequences — it is nonetheless clear that most of the decline in living standards preceded colonialism. Real wages in India stagnated in the nineteenth century while Europe experienced significant growth; consequently, India lagged further behind.

Figure 2. Real wages in India in comparison with Europe and Asia. Source: as per article

With real wages below subsistence level, it is likely that Indian wage labourers worked more than the 250 days per year often assumed in the literature. This is confirmed by our sources, which suggest 30 days of labour per month. To accommodate this observation, we added a real wage series based on the assumption of 360 days of labour per year (Figure 2). Yet even with 360 working days per year, male wages were at various moments in the eighteenth and nineteenth centuries insufficient to sustain a family at subsistence level. This evidence indicates the limits of what can be said about living standards based solely on the male wage. In many societies and in most time periods, women and children made significant contributions to household income. This also seems to have been the case in northern India: over much of the eighteenth and nineteenth centuries, the gap between male and female wages was smaller in India than in England. The important contribution of women and children to household incomes may have allowed Indian families to survive despite low levels of male wages.
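Since the welfare ratio scales linearly with the number of days worked, the working-year assumption matters a great deal. A quick illustration with invented wage and basket figures:

```python
# Sensitivity of the welfare ratio to the assumed working year,
# holding the daily wage and household basket cost fixed.
# All figures are invented for illustration.
day_wage = 0.10           # daily wage (hypothetical units)
household_basket = 30.0   # annual basket cost for the whole household

for work_days in (250, 360):
    ratio = day_wage * work_days / household_basket
    print(f"{work_days} days/year -> welfare ratio {ratio:.2f}")
```

Moving from 250 to 360 days raises the ratio from 0.83 to 1.20 in this example, so a household below subsistence under the conventional assumption can remain near the threshold even at the higher bound, which is consistent with the importance of women’s and children’s earnings noted above.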

 

To contact the authors: 

Pim de Zwart (pim.dezwart@wur.nl)

Jan Lucassen (lucasjan@xs4all.nl)