This blog aims to encourage discussion of economic and social history, broadly defined. We live in a time of major social and economic change, and research in social science is showing more and more that a historical and long-term approach to current issues is the key to understanding our times.
For far too long, our elderly ancestors have been viewed through the prism of the National Health Service and the modern welfare state: old people are regarded as a burden, taking out of society rather than contributing. In contrast, this study of census data for five counties across England and Wales from 1851 to 1911 reveals a reciprocal relationship between those living in old age and wider society.
First, across the whole period, 86-93% of men aged 60 and over were in employment. Even if we exclude those in workhouses, the figure is 80-85%.
Most old men worked in agricultural and general labouring, although an increase was evident by 1911 in the mining industry in Glamorgan and metal manufacturing in Sheffield. Bricklaying, house painting, dock labouring and commercial sales were also pursued in urban areas. Labour force participation rates were higher among men in their sixties than among men in their seventies and eighties.
Second, from 1851 to 1911, between a sixth and a third of women aged over 60 were in employment. Although their occupations were less diverse than those of men, the majority were based in domestic service.
Old women were also involved in cotton and silk textiles and in the manufacture of straw hats. Over time, though, the employment rates of old women did not increase like those of men, owing partly to foreign competition in Asian straw imports and French silks.
Third, retirement was not an innovation brought about by the creation of old age pensions. As early as 1891, over 13% of old men were described in the census as ‘retired’, with high rates in the areas favoured by today’s retirees: the coastal areas of Christchurch and Portsmouth in southern England. More old people retired than went into the workhouse.
But retirement was only an option for those who had inherited or managed to accumulate wealth, such as former smallholders, grocers, innkeepers, civil servants or military officers. Others who lacked land or capital, such as agricultural labourers or boot and shoe makers, were forced to resort to the Poor Law.
Even then, this did not always – or even usually – mean the workhouse. Welfare assistance to old people in their own homes was common, especially for women. ‘Outdoor relief’, usually around 2s 6d per week, was issued as a weekly ‘pension’.
Moreover, the women who received it were not always as old as those entitled to a pension in the modern era: in Yorkshire in 1891, over 10% of old women described as ‘on relief’ were under 66, which will be the minimum pension age for women by 2020.
So is it really true to say that nowadays, ‘the elderly have never had it so good’? In a sense it is, as old people lead healthier and longer lives today than they have ever done.
But it would be wrong to conclude that old people in Victorian times were largely condemned to lives of pain and poverty. They had a wide range of experiences, and many had access to employment opportunities and sources of assistance that are no longer offered.
In terms of present-day policy, we might learn something from our Victorian forebears about ways to integrate people in their sixties into the workforce, so that they can contribute to society as well as receive welfare.
Every few years a child labour scandal in the clothing industry hits the British press, invoking wide public condemnation. This reaction is a modern phenomenon: 250 years ago, child labour in textile production was commonplace, not worthy of a headline.
Attitudes changed in the nineteenth century, leading to the passing of the 1833 Factory Act and the 1842 Mines Act. But before this change, child labour was believed to benefit children.
One notable example was the Foundling Hospital, a charitable institution that supported abandoned children and was a keen believer in the benefits of child labour. The Hospital sought to produce upright citizens who would be able to support themselves as adults.
A key aim of the Hospital was therefore to train children to be ‘industrious’ from a young age. One governor wrote that the Hospital aimed ‘to give [the Foundlings] an early Turn to Industry by giving them constant employment’. This ‘Turn’ would train the children into economic self-sufficiency, stopping them from relying on parish poor relief as adults.
The Foundling Hospital opened its doors in 1741. Parliament recognised the value of its work and funded the acceptance of all children presented to it aged 12 months or under over the period 1756-60. This ‘General Reception’ brought 14,934 children into the Hospital.
The London Hospital could not cope with these unprecedentedly high numbers and new branches were founded, including one in Ackworth, Yorkshire, which received 2,664 children in the period 1757-72. Ackworth closed because Parliament regretted its generosity and stopped funding the General Reception generation in 1771.
Thousands of children required thousands of uniforms and Ackworth chose to make as many garments as possible in-house. On-site production both trained children to be industrious and offered financial benefits for the Hospital. Work completed on-site was cheap and reliable, and there was greater quality control.
The Ackworth ‘manufactory’ produced woollen cloth. The children prepared the fibre for spinning, spun it and wove the yarn into cloth that was worn by their peers at Ackworth and sold to the London Hospital and externally. Some cloth manufacturing work was outsourced, particularly finishing processes that required a higher level of skill.
Few concessions were made for the age of the makers: the London branch criticised the children’s work and sent back orders considered to be of insufficient quality or inappropriate size. These were primarily business rather than charitable transactions.
The skill division also applied in the making of clothing. Underwear, stockings and girls’ clothing were made in-house because it was less skilled work. Garments were produced in high volumes. From 1761 to 1770, 13,442 pieces of underwear (shirts and shifts) and 19,148 pairs of stockings were made by the children.
Tasks such as tailoring, and hat and shoe making required long apprenticeships to develop the necessary skill – this work was therefore outsourced. But external supply had its problems. It was difficult to source enough garments for the hundreds of children at the branch. Products were more expensive because labour was not free and the Hospital had little influence on suppliers’ timeframes.
A Foundling started work young, aged 4 or 5, and continued to work throughout their residence at the Hospital. Despite this, they were luckier than their peers in the workhouse, who endured worse conditions.
Many parents chose to send their children to the Foundling Hospital to give them better life chances through the greater educational and apprenticeship opportunities offered. Putting the children to work, which seems cruel to us, was a key educational strategy to help them achieve economic independence in adulthood. Its financial and logistical benefits were welcome too.
by Rui Esteves and Gabriel Geisler Mesevage (University of Oxford)
The possibility that politicians might act to further their private financial interests, as opposed to the general public interest, has led to the creation of conflict-of-interest rules in modern democracies. For example, the code of conduct of the British Parliament requires that MPs disclose private interests related to their public duties.
In the mid-nineteenth century, Parliament went further, and created a system for the approval of new major public works projects in which MPs with a conflict were barred from voting. But the effectiveness of these rules can be undermined if politicians agree to trade votes with their colleagues — a practice known as ‘logrolling’.
This research uses a unique episode from the mid-nineteenth century to determine whether, and to what extent, British politicians traded their votes to further their private interests.
In the mid-1840s, hundreds of new railway companies petitioned the British Parliament for the right to build railway lines. It was Parliament’s responsibility to pick the railway lines they wanted to see built, and in this way shape the development of the modern British transport network.
Since many MPs were also investors in railways, Parliament created a system of subcommittees, in which the applications of railways would be considered only by MPs without financial conflicts, and who did not represent a constituency that the railway intended to serve.
As a result of this system, MPs with vested interests could not vote for their preferred projects directly. But they could further their interests indirectly by trading their vote on another project with the vote of the MP overseeing the project in which they had an interest.
Drawing on methods from social network analysis, the study identifies all of the potential trades between MPs, and then tests statistically for evidence of vote trading. The statistical evidence reveals significant collusion in the voting patterns of MPs who were deciding which railway lines to approve.
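The basic idea of a ‘potential trade’ can be sketched in a few lines of code. This is a purely illustrative toy, not the study’s actual method: the MP names, interests and votes below are invented, and the real analysis applies formal network and statistical techniques to the historical subcommittee records.

```python
# Toy illustration of identifying potential vote trades ("logrolling").
# An MP cannot vote on the project they have a stake in, but two MPs can
# trade: each approves the other's project. All data here are invented.

interests = {          # MP -> project the MP has a stake in (barred from voting on it)
    "MP1": "A",
    "MP2": "B",
    "MP3": "C",
}
votes = {              # MP -> set of projects the MP voted to approve
    "MP1": {"B"},
    "MP2": {"A"},
    "MP3": {"A"},
}

def potential_trades(interests, votes):
    """Return pairs (x, y) where x approved y's project and y approved x's."""
    pairs = []
    mps = list(interests)
    for i, x in enumerate(mps):
        for y in mps[i + 1:]:
            if interests[y] in votes.get(x, set()) and interests[x] in votes.get(y, set()):
                pairs.append((x, y))
    return pairs

print(potential_trades(interests, votes))  # [('MP1', 'MP2')]
```

Here MP1 and MP2 form a reciprocal pair, while MP3’s support for project A is unreturned; the study’s statistical test then asks whether such reciprocal patterns occur more often than chance would predict.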
These findings reveal significant levels of vote-trading, with politicians coordinating their behaviour so as to ensure that the projects they preferred – which they were banned from influencing directly – were nonetheless approved by their colleagues. As much as a quarter of all approved projects were likely the result of this logrolling, and the economic costs of this behaviour were significant, leaving Britain with a less efficient railway network.
This research highlights the importance of understanding politicians’ private interests. Moreover, it illustrates how merely acknowledging conflicts of interest, and abstaining from voting when conflicted, may not resolve the problem of vested interests if politicians are able to collude. The findings shed light on a perennial problem; the methods developed to detect logrolling in this setting may prove useful for detecting vote-trading in other contexts.
by Anne Sofie Beck Knudsen (University of Copenhagen)
Although international migration is a hotly debated topic, we know surprisingly little about its long-term cultural impact. Does it boil down to the risk of clashes between different cultures, or do we see cultural changes in migrant-sending and migrant-receiving countries along other dimensions as well?
Using novel empirical data, this research documents how past mass migration flows carried values of individualism across the Atlantic Ocean from the mid-nineteenth to the early twentieth century. This inter-cultural exchange was so significant that its impact is still observable today.
When talking about individualism versus collectivism, this study refers to the emphasis a culture places on independence from society. With this in mind, it becomes clear why culture has a role to play in migration. The act of migration involves leaving familiar surroundings to embark on a journey in which you are bound to rely on yourself. An individual with strong ties to their surroundings will be less likely to undertake this act. Collectivists are thus less likely to migrate, while the opposite is true for individualists.
To test the idea of individualistic migration and its long-term impact empirically, this research constructs novel indicators of culture, which allow us to go back and study the past. It looks at two everyday cultural manifestations: how we name our children; and how we speak our language.
Giving a child a commonplace name like ‘John’ reflects a more conformist motivation among parents, who, perhaps unconsciously, are more concerned about their child fitting in than standing out. Likewise, the relative use of singular (‘I’, ‘mine’, ‘me’) over plural (‘we’, ‘ours’, ‘us’) personal pronouns tells us something about the focus on the individual over the collective.
The study constructs historical indicators of culture from the distribution of names in historical birth registers and from the written language of local newspapers at the time.
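As a rough illustration of how such indicators can be built, the sketch below computes two toy measures: the share of children bearing one of the most frequent names, and the share of first-person pronouns that are singular. The function names and data are hypothetical, and the study’s actual construction from birth registers and newspapers is considerably more careful.

```python
# Toy versions of the two cultural indicators: name commonness and
# singular-vs-plural pronoun use. Data and thresholds are invented.
from collections import Counter

def name_commonness(names):
    """Share of children bearing one of the 10 most frequent names;
    higher values suggest more conformist naming."""
    counts = Counter(names)
    top10 = sum(n for _, n in counts.most_common(10))
    return top10 / len(names)

def singular_pronoun_share(text):
    """Share of first-person pronouns that are singular rather than plural."""
    singular = {"i", "me", "mine", "my"}
    plural = {"we", "us", "ours", "our"}
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    s = sum(w in singular for w in words)
    p = sum(w in plural for w in words)
    return s / (s + p) if s + p else None

# With only a handful of names, everything falls in the "top 10":
print(name_commonness(["John", "John", "Mary", "Olaf"]))              # 1.0
print(singular_pronoun_share("We believe our town thrives; I did my part."))  # 0.5
```

Applied to full birth registers and newspaper corpora rather than toy inputs, measures of this kind can be tracked over time and across districts.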
With these new data in hand, the research documents the prevalence of individualistic migration during the settlement of the United States around the turn of the twentieth century. Among inhabitants of major migrant-sending countries like Norway and Sweden, those with more uncommon names were more likely to migrate. This cultural effect remains even when considering a host of other potential explanations related to economic prospects and family background.
If more individualistic types are more likely to migrate, we would expect to observe an impact on the overall culture of a given location. That is exactly what this research finds. Districts in Sweden and Norway that experienced high emigration flows of people with an individualistic spirit did indeed become more collectivistic – both in terms of child naming trends and in written language pronoun use.
This leaves us with the question of whether an impact from this historical event is still visible today. Does international migration have long-term cultural consequences other than the risk of producing cultural clashes?
According to this study, it does. Scandinavian districts that experienced more emigration are still relatively more collectivist today than those that experienced less. Moreover, it is widely agreed that New World countries like the United States are the most individualistic in the world today – a fact that seems to be explained by the type of migrants they once received.
by Vellore Arthi (University of Essex), Brian Beach (College of William & Mary), and Walker Hanlon (University of California, Los Angeles)
Are recessions good for health? A number of recent studies suggest that mortality actually goes down during recessions – at least in developed countries, where social safety nets help cushion the blow of unemployment and income loss.
This striking conclusion rests on one of two assumptions: either that people do not respond by migrating away from recession-stricken areas; or that if they move, these population flows can be perfectly measured. But are these assumptions realistic?
Migrant movements can be notoriously difficult to track, and famous episodes such as the Depression-era migration from the US Great Plains to California suggest that these sorts of internal population movements may indeed be a natural response to changes in local economic conditions. This raises the question: what does unaccounted migration mean for our assessment of the recession-mortality relationship?
Our research shows that unobserved migration from recession-stricken regions may actually lead us to underestimate systematically how deadly recessions really are.
To test how migration influences estimates of the relationship between recessions and mortality, we draw on a unique historical natural experiment: the temporary but severe economic downturn in the cotton textile-producing regions of Britain that resulted from the American civil war (1861-65).
The cotton textile industry was England’s largest industrial sector in the second half of the nineteenth century and, prior to the civil war, received the majority of its raw cotton inputs from the American South. The onset of the civil war sharply reduced these supplies, leading to a severe but temporary economic downturn that left several hundred thousand workers unemployed.
Digitising a wealth of historical data on births, deaths and population, and exploiting variation in both the geographical distribution of the British cotton textile industry and the timing of the civil war, we show that standard approaches yield the familiar result: the downturn, popularly termed the ‘cotton famine,’ reduced mortality.
But we also find evidence that migratory responses to this event were substantial, with much of this mobility occurring over short distances, as displaced cotton workers sought opportunities for work in nearby districts.
After making a series of methodological adjustments that account for this recession-induced migration, we show that the sign of the recession-mortality relationship flips: this downturn in fact appears to have been bad for health, raising mortality in both cotton regions and in the regions to which unemployed cotton operatives fled.
After accounting for migration bias, we find that:
The civil war-era downturn in the cotton textile regions of Britain increased total mortality in the affected districts.
But the downturn appears to have led to improved infant and maternal mortality outcomes, probably by freeing up maternal time for breastfeeding, childcare, and other health-improving behaviours.
Gains in infant health were offset by large and significant increases in mortality among the elderly.
There was no net effect on mortality among working-age adults, who were also the most mobile during the downturn.
This outcome appears to have been driven by worsening mortality due to the deteriorating nutrition and living conditions associated with income loss, which was in turn offset by improvements in maternal mortality and by fewer deaths from accidents and violence. (The latter finding is further supported by evidence that alcohol consumption and industrial accident rates fell during the recession.)
Our study provides both a methodological and factual contribution to our understanding of the relationship between recessions and health. The methodological contribution consists of showing that migration undertaken in response to a recession has the potential to introduce substantial bias into estimates of the recession-mortality relationship using the standard approach – particularly if these population flows are not well measured.
This bias is likely to be greater in settings, such as developing countries, where labour forces are more mobile, where weak social safety nets induce migration in response to recessions, and where the intercensal population data used to track these movements are poor. Studies applying the standard approach in these settings are likely to generate misleading results, which may lead to poorly targeted public health responses.
On a factual level, our study also contributes new evidence on the relationship between recessions and mortality in a historical setting, with the implication that studies focused on just one age group, such as infants, may generate results that are not representative of other segments of the population, or indeed of the overall relationship between recessions and mortality.
by Robert Warren Anderson (University of Michigan-Dearborn), Noel D. Johnson and Mark Koyama (George Mason University).
Jewish communities in pre-industrial European societies were more likely to be vulnerable to persecutions during periods of economic hardship.
The authors’ study finds that colder springs and summers, which led to reduced food supply, were associated with a higher probability of Jewish persecutions. What’s more, the effect of colder weather on the probability of Jewish persecutions was larger in cities with poor quality soil and in states that were weaker.
Throughout most of history, religious minorities were the victims of persecution. Violence against religious and ethnic minorities remains a major problem in many developing countries today. This study investigates why some societies persecute minorities.
To answer these questions, the researchers focus on the persecution of Jews in medieval and early modern Europe. Violence against Jews was caused by a complex set of factors that have been studied intensively by historians. These include religiously motivated anti-semitism, the need to blame outsider groups and the economic role that Jews played in pre-industrial European societies.
The new study focuses on the hypothesis that Jews were more likely to be vulnerable during periods of economic hardship. The researchers test this hypothesis by combining two novel datasets.
The first dataset is drawn from the 26-volume Encyclopaedia Judaica and contains yearly information on 1,366 city-level persecutions of Jews from 936 European cities between 1100 and 1800. The location of these cities as well as the intensity with which they persecuted Jews is illustrated in Figure 1.
Figure 1: The distribution of cities with Jewish persecutions and total persecutions, 1100-1800
The second source contains data on yearly growing season temperature (April to September), which have been reconstructed from proxies including tree rings, ice cores and pollen counts (Guiot and Corona, 2010).
The first result is that colder springs and summers are indeed associated with a higher probability of persecution. A one standard deviation decrease in average growing season temperature in the previous five-year period (about one-third of a degree Celsius) raised the probability that a community would be persecuted in the subsequent five-year period from a baseline of about 2% to between 3% and 3.5% – a 50% to 75% increase in persecution probability.
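The relative magnitude quoted above follows directly from the baseline; a quick check of the arithmetic:

```python
# Moving a five-year persecution probability from a 2% baseline to
# 3-3.5% amounts to a 50-75% relative increase.
def relative_increase(baseline, new):
    return (new - baseline) / baseline

print(f"{relative_increase(0.02, 0.03):.0%}")   # 50%
print(f"{relative_increase(0.02, 0.035):.0%}")  # 75%
```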
To explain this effect, the researchers develop a conceptual framework that outlines the political equilibrium under which pre-modern rulers would tolerate the presence of a Jewish community. They argue that this equilibrium was vulnerable to shocks to agricultural output, and explain why this vulnerability may have been greater in locations with poor quality soil and in polities where sovereignty was divided or which were more susceptible to unrest.
Consistent with their conceptual framework, the researchers find that the effect of colder weather on persecution probability was larger in cities with poor quality soil and in states that were weaker. Moreover, the relationship between colder weather and persecution probability was strongest in the late Middle Ages.
Furthermore, as Figure 2 illustrates, the relationship disappeared after 1600, which the researchers attribute to various factors: the rise of stronger states (which were better able to protect minorities); increased agricultural productivity; and the development of more integrated markets, which reduced the impact of local weather shocks on the food supply.
Figure 2: The effect of cold weather shocks on persecution probability over time
The researchers support their results with extensive narrative evidence consistent with these claims and with further evidence that the relationship between colder weather and higher wheat prices also diminished after 1600.
‘Jewish Persecutions and Weather Shocks: 1100-1800’ by Robert Warren Anderson, Noel D. Johnson and Mark Koyama is published in the June 2017 issue of the Economic Journal.
A blog article also appeared on the media briefings of the Royal Economic Society.
Fifty Years of Growth in American Consumption, Income, and Wages By Bruce Sacerdote (Dartmouth) Abstract: Despite the large increase in U.S. income inequality, consumption for families at the 25th and 50th percentiles of income has grown steadily over the time period 1960-2015. The number of cars per household with below median income has doubled since […]
by Avni Önder Hanedar (Dokuz Eylül University and Sakarya University, Turkey) and Elmas Yaldız Hanedar (Yeditepe University, Turkey)
Were the military conflicts of 1910–1914 related to higher risks for investors at the İstanbul Stock Exchange? Wars are often perceived as bad news, correlated with increasing risks for investors and fluctuations in volatility: stock prices would fall because of expected macroeconomic costs, such as higher inflation and lower production, as companies’ activities and expected returns decrease. On the other hand, if a war’s outcome was perceived as unimportant for companies’ activities and expected returns, there would be no significant changes in stock prices and volatility.
Researchers in financial economics have produced a large literature on the effects of different wars, with mixed findings. A pioneering study of the political crises of 1880–1914 is Ferguson (2006), which asks how investors at the London Stock Exchange viewed the conflicts on the eve of the First World War. He found no evidence of a higher war risk on the bonds of the Great Powers traded on the London Stock Exchange. In addition, Hanedar et al. (2015) show that the outbreak of the Turco-Italian and Balkan wars was correlated with a lower perceived likelihood of Ottoman debt repayment, using data on two Ottoman government bonds traded on the İstanbul bourse. As the literature on the İstanbul bourse is limited, shedding new light on this question requires exploring the risk that stock investors perceived during these historical conflicts.
We focus on the behaviour of stock returns at the İstanbul bourse during the Turco-Italian and Balkan wars, using unique data on the stock prices of 9 popular domestic joint-stock companies in the Ottoman Empire. All these companies played a crucial role in the Ottoman economy and operated in its most attractive sectors: banking, mining, agriculture, and transportation. Among them were the Ottoman General Insurance Company (Osmanlı Sigorta Şirket-i Umûmiyesi), the Regie (Tobacco) Company (Tütün Rejisi), and the Imperial Ottoman Bank (Bank-ı Osmanî-i Şâhâne). The data were collected manually from Tanin, a widely circulated daily Ottoman newspaper. This research is the first to provide a historical narrative explaining the changes in Ottoman stock returns caused by the wars on the eve of the First World War. It observes only small reactions to the Turco-Italian war, and only for three of the stocks examined (see Table 1). This is interesting, as we previously observed (Hanedar et al., 2015) higher responsiveness of government bond prices during the same period.
One could argue that investors believed the war would not be very harmful to the non-governmental economic and financial sectors. An important point supporting this finding is that the companies were either established or supported by foreign investors. The Great Powers protected their home countries’ investments both economically and politically, and the companies obtained revenue guarantees and privileges from the Ottoman state, making the investments secure. The Great Powers that invested in the Ottoman Empire expected its imminent demise, so investors were likely to invest in the companies partly to stake a territorial claim, without much consideration of risk. During the nineteenth century, wars were an important source of solvency problems, which could explain the sensitivity of government bond prices to the conflicts studied here.
Post-Brexit UK-European Union (EU) trading relations will take one of three forms:
(1) The UK will remain part of the EU customs union
(2) UK-EU trade will be governed by World Trade Organisation (WTO) rules
(3) The UK and EU will enter a free trade pact.
Option (1) is economically optimal but has been declared politically unfeasible because it requires the UK to commit to the free movement of labour between the EU and the UK. Such conditionality is essential because economies grow unevenly and, in the absence of independent currencies across Europe and/or a central European state to pool the risk of unemployment, free movement of labour is the mechanism for redistributing the gains from EU growth.
Economics (not history) is the best guide here.
Most parties agree that option (2) is the solution of last resort. Much has been made of its impact on complex cross-border trade in manufactured goods, but trade in services may be more problematic. The General Agreement on Trade in Services governs international trade, but can these rules handle disputes regarding trade in services across highly integrated economies subject to disintegration post-Brexit?
The law (not history) is the best guide here.
Britain’s economic history, however, is key to the analysis of option (3).
Women were important workers in the past, but they are still under-studied and their contributions largely absent from big-picture discussions of historical living standards. This is largely because women’s work remains to some extent a black box, but recent research has both challenged assumptions about how women participated in the paid labor market (cf. Humphries and Sarasua 2012) and provided data on women’s payment for different kinds of labor (cf. Humphries and Weisdorf 2015). The current work contributes to both these areas by creating series of men’s and women’s wages in early modern Sweden, and by exploring the mechanisms behind the gender gap in pay as well as the conditions under which women entered paid labor, with the goal of better understanding work in the past in general.
Primary data come from unskilled workers in the construction industry in Southern Sweden, predominantly from the towns of Malmö and Kalmar; these are combined with published data from Stockholm, also from construction workers (Jansson, Andersson Palm, and Söderberg 1991). All data are for individuals paid by the day; relative wages are simply the percentage of men’s wages that women earn.
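As a minimal illustration of the measure, using hypothetical day wages (the figures below are invented, not taken from the sources):

```python
# Relative wage: women's day wage as a percentage of men's day wage.
def relative_wage(womens_day_wage, mens_day_wage):
    return 100 * womens_day_wage / mens_day_wage

# e.g. hypothetical day rates of 4 for women and 5 for men:
print(relative_wage(4, 5))  # 80.0
```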
Figure 1 shows women’s relative wages from 1550 to 1759. Relative wages are high at the beginning of the period, around 80 percent, and increase to levels of parity in the early 17th century, after which they decline substantially, reaching as low as 40 percent towards the end of the seventeenth century and into the eighteenth. This is a substantial decline over a period of not much more than a generation.
Some relative wage peaks are related to events that change both the demand for and supply of labor. Kalmar was a border town between Sweden and Denmark; from 1611 to 1613 the two countries fought the Kalmar War. Following these years women’s wages peaked, likely due to necessary rebuilding and a shortage in the supply of men. There is a wage spike in the same city following a fire in 1647 – while the national average weighs down the peak values, the deviations are still clear in the series, and when Kalmar is examined individually women’s relative wages peak as high as 1.33.
Table 1: Women’s work days as a percentage of all workdays in Kalmar, 1614-1710
Women’s ability to earn high wages goes against many of our theories about women’s earning potential – women are expected to earn less than men in physical tasks, because women are not as strong as men, and so are less productive physical laborers (Burnette 2008). Other theories suggest that women face constant wage discrimination (cf. Bardsley 1999) – but this, too, is confounded by women’s ability to out-earn men, and by the large changes in the relative wage series. Something else is happening.
To understand this, we must look more closely at the data. In Kalmar workers are almost universally identifiable, allowing for a deeper examination of the workforce. Table 1 shows the percentage of paid workdays that were worked by women, compared with the total number of paid workdays in five-year periods. Comparing the proportional feminization of the workforce with the amount of work, we see that the periods with the greatest amount of work are those in which the workforce is the most feminized – these periods are also those during which women’s relative wages are highest (see Figure 1).
In combination with the relationship between total paid workdays and women’s relative wages across the whole country (figure 2), we are faced with a pattern that is familiar from the first and second world wars – when labor demand is high, women enter the labor force in higher numbers and are able to command higher wages. There is less evidence that women were systematically paid less either due to discrimination or because of their lower productivity – instead, women are responsive to economic forces, and especially to demand forces.
It is simple to extend our sense of what is ‘traditional’ deep into the past, and to apply broad categories of ‘men’s’ and ‘women’s’ work. However, when we are able to suspend our assumptions and dig deeper into the evidence, the data tell a less expected story: women in Sweden worked in physical occupations, alongside men, often for similar wages. They worked especially hard when the need was highest, and women’s wages only fell away from men’s when work became less regular and men and women were no longer employed together.
Accounting for women’s work shifts our understanding of household living standards in the long run, and provides strong evidence for what is intuitively clear: we cannot truly understand the past if we continue to discount the experiences or contribution of half the population.
The full working paper can be read here, and a shorter version from the EHS annual conference is available here.