This blog aims to encourage discussion of economic and social history, broadly defined. We live in a time of major social and economic change, and research in social science increasingly shows that a historical, long-term approach to current issues is key to understanding our times.
Here are ten reasons to know more about women’s work and read our article on ‘The gender division of labour in early modern England’. We have collected evidence about work tasks in order to quantify the differences between women’s and men’s work in the period from 1500 to 1700. This research allows us to dispel some common misconceptions.
Men did most of the work didn’t they? This is unlikely. When both paid and unpaid work are counted, modern time-use studies show that women do the majority of work – 55% in rural areas of developing countries and 51% in modern industrial countries (UN Human Development Report 1995). There is no reason why the pattern would have been markedly different in preindustrial England.
But we know about occupational structure in the past don’t we? Documents from the medieval period onwards describe men by their occupations, but women by their marital status. As a result we know quite a lot about male occupations but very little about women’s.
But women worked in households headed by their father, husband or employer. Surely, if we know what these men did, then we know what women were doing too? Recent research undertaken by Amy Erickson, Alex Shepard and Jane Whittle shows that married women often had different occupations from their husbands. If we do not know what women did, we are missing an important part of the economy.
But we have evidence of women working for wages. It shows that around 20% of agricultural workers were women – surely this demonstrates that women’s work wasn’t as important as men’s in the wider economy? This evidence only relates to labourers paid by the day, and before 1700 most agricultural labour was not carried out by day labourers, so this isn’t a very good measure. Our article shows that women carried out a third of agricultural work tasks, not 20%.
But women mostly did domestic stuff – cooking, housework and childcare – didn’t they, and that type of work doesn’t change much across history? Women did do most cooking, housework and childcare, but our research suggests it did not take up the majority of their working time. These forms of work did change markedly over time. A third of early modern housework took place outside, and our data suggests the majority was done for other households, not as unpaid work for one’s own family.
But women only worked in a narrow range of occupations, didn’t they? Our research shows that women worked in all the major sectors of the economy, but often doing slightly different tasks from men. They undertook a third of work tasks in agriculture, around half of the work in everyday commerce and almost two thirds of work tasks in textile production. But women also did forms of work we might not expect, such as shearing sheep, dealing in second-hand iron, and droving cattle.
Women’s work was all low skilled wasn’t it? Women very rarely benefitted from formal apprenticeship in the way that men did, but that does not mean the tasks they undertook were unskilled. Women undertook many tasks, such as making lace and providing medical care, which required a great deal of skill.
But this was all in the past – what relevance does it have now? Many gendered patterns of work are remarkably persistent over time. Analysis by the Office for National Statistics finds that one third of the gender pay gap in modern Britain can be explained by men and women working in different occupations, and by the lower rates of pay for part-time work, which is more commonly undertaken by women than men.
So nothing ever changes …? Well, not necessarily. In fact, looking carefully at patterns of women’s work in the past shows some noticeable shifts over time. For instance, women worked as tailors and weavers in the medieval period and in the eighteenth century, but not in the sixteenth century.
But we know why women work differently from men, particularly in preindustrial societies – isn’t it because they are less physically strong and all the child-bearing stuff? Physical strength does not explain why women did some physically taxing forms of work and not others (why they walked for miles carrying heavy loads on their heads rather than driving carts). And not all women were married or had children. Neither physical strength nor child-bearing can explain why women were excluded from tailoring between 1500 and 1650, but worked successfully and skilfully in this and other closely related crafts in other periods.
We now have data which allows us to look more carefully at these issues, but there is still much more to uncover.
Britain’s unusually high house price to income ratio plays an important role in reducing living standards and increasing “housing poverty”. This article shows that Britain’s housing shortage partly stems from deliberate long-term government policies aimed at restricting both public and private sector house-building. From the 1950s to the early 1980s, successive governments reduced housing starts as part of ‘stop-go’ macroeconomic policy, with major cumulative impacts.
This policy had its roots in the Second World War, when an influential coalition of Bank of England and Treasury officials pressed for a post-war policy of savage deflation, to restore sterling’s credibility and re-establish London as a major financial centre. John Maynard Keynes warned that prioritising international ‘obligations’ over the war-time commitment to build a fairer society would be repeating the 1920s gold standard error – though his direct influence ended with his untimely death. Deflationary policy proved politically impracticable in the short term, as evidenced by Labour’s 1945 landslide election victory, though its supporters bided their time and were able to implement much of their agenda in the changed political climate of the 1950s.
The Conservatives’ 1951 election victory was based on a pledge to build 300,000 new homes per year. This was achieved in 1953 and building peaked at 340,000 completions in 1954. However, officials took advantage of the 1955-57 credit squeeze to press for severe cuts in housing investment. Municipal house-building was cut, while private house-building was depressed largely through restricting the growth of building society funds (by pressurising the building societies’ cartel to keep interest rates at such low levels that they were starved of mortgage funds). While the severity of policy varied over time, these restrictions were maintained almost continually until the early 1980s.
These restrictions were never formally announced and were hidden from Cabinet for much of this period. Meanwhile, given the political importance of housing, the Conservative government simultaneously proposed ever-larger housing targets (culminating in a 1964 election pledge to build 400,000 per annum). This created a perverse situation, whereby the government was spending substantial sums on highly publicised policies to increase demand for private housing (such as the 1959 House Purchase and Housing Act and the 1963 abolition of Schedule A income tax), while covertly reducing housing supply through restricting mortgage funding, limiting building firms’ access to credit, and reducing municipal housing investment. The following Labour government found itself drawn into a similarly restrictive housing policy, as part of its ill-fated commitment to avoid sterling devaluation (arguably based on misleading Treasury advice), while housing restrictions were also used as an instrument of macroeconomic stabilisation in the 1970s.
A 1974 Bank of England analysis found that this policy had created both an exaggerated housing cycle and a structural deficit (with house-building being held below market-clearing levels at all points in the cycle). This had in turn reduced the capacity of the housing market to respond to rising demand, by reducing builders’ land banks, building materials capacity, and building labour, which raised house-prices while lowering productivity and technical progress. There is also evidence of “learning effects” by house-builders, who avoided expanding their activities during cyclical upturns, as they correctly perceived that tighter government restrictions might be imposed before their houses were ready to sell. These pressures fuelled house price inflation, both directly, and because housing became increasingly regarded as a hedge against inflation.
Figure 1: Capital formation in dwellings, as percentage of total capital formation, and housing completions per thousand families, private houses and all houses, 1924-38 and 1954-79
British house-building during this era compared unfavourably to inter-war levels, as shown in Figure 1. Moreover, private house-building was even more depressed than total housing – as the Treasury found it easier to covertly restrict private housing than to reduce municipal building starts, where policy was more open to Cabinet and public scrutiny. British gross domestic fixed capital investment in housing was also very low relative to other European nations. Our time-series econometric analysis for 1955-1979 corroborates the ‘success’ of the restrictions and also shows the predicted asymmetric impact in ‘stop’ and ‘go’ phases of policy. This is an important finding – as stop-go policy is often examined in terms of the volatility of the variable under examination – based on the unrealistic assumption that industry would fail to realise that demand upturns might be rapidly terminated by the re-imposition of controls.
Housing restriction policy has persisting consequences. Additions to the housing stock were depressed for several decades, while the inflationary-hedge benefits of house-purchase became a self-fulfilling prophecy. Meanwhile restrictive planning policy (which was substantially intensified in the 1950s, as a further measure of housing restriction) has proved difficult to reverse. Average house price to income ratios have thus continued the upward trend established in this era, currently excluding a substantial and growing proportion of the population from owner-occupation.
Rising income inequality in the United States has attracted scholars’ attention for decades, resulting in an extensive and detailed literature on the trend’s causes and consequences. An equally large but much less studied decline in income inequality occurred in the US during the 1940s. This led to an era of relatively compressed income inequality that lasted into the 1970s. Goldin and Margo (1992) called this ‘The Great Compression.’
Our recent research has explored the role of changing labour market institutions in contributing to the Great Compression, with a focus on the role of labour unions. In the US, labour unions rose to prominence starting in the late 1930s, following the Wagner Act of 1935 and a Supreme Court decision in 1937 upholding the Act. This recast the legal framework under which unions formed and collectively bargained by creating the National Labor Relations Board to oversee representation elections and enforce the Act’s provisions, including prohibitions of various ‘unfair practices’ which employers had used to discourage unions. Unions continued to grow through the 1940s, especially during the Second World War, and they peaked as a share of employment in the early 1950s.
Time series graphs of union density and income inequality over the full twentieth century in the US are nearly mirror images of each other (Figure 1). But it is difficult to evaluate the role of unions in influencing this period’s inequality due to limitations of standard data sources. The US census, for instance, has never inquired about union membership, which makes it impossible to link individual-level wages to individual-level union status in nationally representative samples for this period (see Callaway and Collins 2018 and Farber et al. 2018 for efforts to develop data from other sources). Research on US unions later in the twentieth century, when data are more plentiful, highlights their wage-compressing character, as does some of the historical literature on wage setting during the Second World War, but there is much left to learn.
Figure 1: Unions and income inequality trends in the 20th-century United States
Sources: See Collins and Niemesh (forthcoming).
In a paper titled ‘Unions and the Great Compression of wage inequality in the United States at mid-century: evidence from labour markets,’ we provide a novel perspective on changes in inequality at the local level during the 1940s (Collins and Niemesh, forthcoming). The building blocks for the empirical work are as follows: the “complete count” census microdata for 1940 provide information on wages and industry of employment (Ruggles et al. 2015); Troy’s (1957) work on mid-century unionization provides information on changes in unionization at the industry level over the 1940s; and subsequent censuses provide sufficient information to form comparable local-level measures of wage inequality. We use a combination of local employment data circa 1940 and changes in unionization by industry after 1939 to create a variable for local ‘exposure’ to changes in unionization.
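The logic of such a local ‘exposure’ measure can be sketched as a shift-share-style calculation. The following snippet uses entirely invented industries and numbers, and the paper’s actual data and weighting choices may differ:

```python
# A shift-share style sketch of 'exposure to unionization' (illustrative only):
# exposure = sum over industries of (local 1940 employment share)
#            x (national change in that industry's union density, 1939-1953).

national_union_change = {   # hypothetical change in union density by industry
    "steel": 0.30,
    "textiles": 0.10,
    "retail": 0.02,
}

local_employment_1940 = {   # hypothetical employment counts in one city, 1940
    "steel": 5000,
    "textiles": 2000,
    "retail": 3000,
}

def union_exposure(employment, union_change):
    """Employment-share-weighted average of industry-level union growth."""
    total = sum(employment.values())
    return sum(
        (count / total) * union_change[industry]
        for industry, count in employment.items()
    )

print(round(union_exposure(local_employment_1940, national_union_change), 3))
# 0.5*0.30 + 0.2*0.10 + 0.3*0.02 = 0.176
```

A city specialised in rapidly unionising industries (here, steel) thus scores high on exposure, while a city dominated by retail scores low, even though the industry-level union changes are national and common to all places.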
We ask whether places with more exposure to unionization due to their pre-existing industrial structure experienced more compression of wages during the 1940s and beyond, conditional on many other features of the local economy including wartime production contracts and allowing for differences in regional trends. The answer is yes: a one standard-deviation increase in the exposure to unionization variable is associated with a 0.072 log point decline in inequality between the 90th and 10th wage percentile in the 1940s (equivalent to 32 percent of the mean decline). The association between local union exposure and wage compression is concentrated in the lower part of wage distribution. That is, the change in inequality between the 50th and 10th percentile is more strongly associated with exposure to unionization than the change between 90th and 50th percentile. As far as we can tell, this mid-century pattern was not driven by the re-sorting of workers (e.g., high skilled workers sorting out of unionizing locations) or by firms exiting places that were highly exposed to unionization.
We also explore whether the impression unions likely made on local wage structures persisted, even as private sector unions declined through the last decades of the twentieth century. In fact, the pattern fades a bit with time, but it remains visible to the end of the twentieth century. We leave for future research important questions about the mechanisms of persistence in local wage structures, non-wage aspects of unionization (e.g., implications for benefits or safety), implications for firm behaviour in the long run, and international comparisons.
Callaway, B. and W.J. Collins. ‘Unions, workers, and wages at the peak of the American labor movement.’ Explorations in Economic History 68 (2018), pp. 95-118.
Collins, W.J. and G.T. Niemesh. ‘Unions and the Great Compression of wage inequality in the US at mid-century: evidence from local labour markets.’ Economic History Review (forthcoming). https://doi.org/10.1111/ehr.12744
Farber, H.S., D. Herbst, I. Kuziemko, and S. Naidu. ‘Unions and inequality over the twentieth century: new evidence from survey data.’ NBER Working Paper 24587 (Cambridge, MA, 2018).
Goldin, C. and R.A. Margo. ‘The Great Compression: the wage structure in the United States at midcentury.’ Quarterly Journal of Economics 107 (1992), pp. 1-34.
Ruggles, S., K. Genadek, R. Goeken, J. Grover, and M. Sobek. Integrated public use microdata series: version 6.0 [machine-readable database]. (Minneapolis: University of Minnesota, 2015).
Troy, L. The distribution of union membership among the states, 1939 and 1953. (New York: National Bureau of Economic Research, 1957).
by Alan de Bromhead (Queen’s University Belfast), Alan Fernihough (Queen’s University Belfast), Markus Lampe (Vienna University of Economics and Business) and Kevin Hjortshøj O’Rourke (All Souls College, Oxford)
With a protectionist president in the White House, the future of the multilateral, rules-based international trading system seems much less certain. So it is not surprising that politicians and commentators are turning to the 1930s for examples of what protectionism can imply for international trade flows.
World trade not only collapsed during the early 1930s: it also became much less multilateral. Countries like Britain and France, which already had empires, traded more with those empires. And countries like Germany and Japan, which were looking to acquire empires of their own, similarly traded more intensively with their respective spheres of influence.
This research focuses on the experience of Britain, which in 1931 broke decisively with a longstanding tradition of free trade. From November that year, substantial tariffs could be imposed on manufactured goods from outside the Empire. Similar duties on non-Empire fruit, flowers, and vegetables were possible soon after. And following the Ottawa conference of 1932, Britain’s trade policy explicitly served the interests of ‘the home producer first, Empire producers second, and foreign producers last’.
What was the impact of this dramatic policy shift? This study analyzes detailed data on British imports of 258 consistently defined commodities from 42 countries over the period 1924-38, as well as information on tariffs, quotas, voluntary export restraints, and other variables potentially influencing trade flows. To quantify the impact of the switch to protection, the authors compare actual trade flows from 1931 with counterfactual flows that would have taken place had tariffs and quotas remained unchanged.
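As a purely illustrative sketch of the counterfactual logic (not the authors’ actual model), one can scale an observed import flow by a constant-elasticity demand response to the tariff change, assuming tariffs pass fully into prices. All numbers below are invented:

```python
# Stylized counterfactual: what would imports have been had the tariff stayed
# at its old rate? Assumes imports ~ price^(-elasticity) and full pass-through
# of tariffs into consumer prices. Illustrative only.

def counterfactual_imports(actual_imports, old_tariff, new_tariff, elasticity):
    """Scale observed imports back to the level implied by the old tariff,
    under a constant-elasticity import demand curve."""
    relative_price = (1 + new_tariff) / (1 + old_tariff)
    return actual_imports * relative_price ** elasticity

# A good whose tariff rose from 0% to 20%, with demand elasticity 2:
cf = counterfactual_imports(actual_imports=100.0, old_tariff=0.0,
                            new_tariff=0.20, elasticity=2.0)
print(round(cf, 1))  # 100 * 1.2^2 = 144.0
```

In this toy example, observed imports of 100 imply counterfactual imports of 144 under unchanged tariffs, i.e. the tariff rise cut imports of this good by roughly 30%; the study performs an analogous comparison commodity by commodity across 258 goods and 42 partner countries.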
The shift towards protection reduced the value of British imports by 9-10% on average, with the biggest impact being felt in 1933. Protection accounted for about a quarter of the total decline in British imports, which is consistent with results for the United States.
But in contrast with the findings of previous studies (which analyze aggregate data on trade and trade policies), the new research finds that the shift towards protection had a big effect on the geographical composition of British imports. For example, the Empire’s share of British imports rose from 27% to 39.2% between 1930 and 1935, while in the absence of protection it would only have increased to 31.4%.
Overall, the research shows that using disaggregated data does not significantly change the estimated impact of protection on the total value of trade. But it matters a great deal for the estimated impact of protection on the geographical composition of trade. Studies using aggregate data find that imperial trade blocs did not have a big influence on trade patterns during the 1930s. In contrast, this research finds that trade policy was crucial in increasing the share of the British Empire in British imports.
The clear ‘Balkanization’ of world trade shown in these results had wider effects, as several contemporary observers recognized. It reflected and probably also exacerbated the international tensions of the times, the later outcomes of which are well known.
What would you choose to buy from a store if money was no object? This was a decision eighteenth-century shoplifters made in practice on a daily basis. We might assume them to be attracted to the novel range of silk and cotton textiles, foodstuffs, ornaments and silver toys that swelled the consumer market in this period. Demand for these home-manufactured and imported goods was instrumental in a trebling of the number of English shops in the first half of the century, escalating the scale of the crime. However, as my book Shoplifting in Eighteenth-Century England shows, this was not the case. Consumer desire was by no means shoplifters’ major imperative.
Shoplifting occurred nationwide, but it was disproportionately a problem in the capital. A study of a sample of the many thousand prosecutions at the Old Bailey reveals that linen drapers, shoemakers, hosiers and haberdashers were the retailers most at risk. Over 70% of goods stolen, particularly by women, were fabrics, clothing and trimmings. Though thefts were highly gendered, men also stole these items far more frequently than the food, jewellery and household goods which were largely their preserve. Yet items stolen were not predominantly the most fashionable. Traditional linens, wool stockings and leather shoes were stolen as often as silk handkerchiefs and cotton prints. A prolific shoplifter who confessed to her crime found it profitable over the course of a year to steal printed linen at four times the quantity of the more stylish cotton, lawns, muslins and silk handkerchiefs she also took.
The shoplifters prosecuted were overwhelmingly from plebeian backgrounds. Professional gangs did exist but for most the crime was a source of occasional subsistence. Shop thieves came from the most economically vulnerable sections of society, seeking to weather an urban economy of low-paid and insecure work; many were older women or children. As the stolen goods needed to be convertible to income they were very commonly sold. So thieves sought the items which were most negotiable, those in greatest demand and least conspicuous in the working neighbourhoods in which they lived. A parcel of handkerchiefs stolen unopened was found to be ‘too fine’ for a market seller to whom it was offered. While there was undoubtedly an eagerness for popular fashion, the call for neat and appropriate daily dress in working communities was as insistent. We find the frequency with which shoplifters stole different types of clothing is consistent with a market demand governed in great part by the customary turnover of clothing items in labouring families. Handkerchiefs, shoes and stockings which were replaced regularly, were stolen frequently, jackets and stays more rarely.
There were also some practical reasons why shoplifters avoided the high-fashion goods that elite shops sold. To enter the emporiums in which the rich shopped added a heightened degree of risk. Testimony confirms shopkeepers’ deep reluctance to suspect any customer who appeared genteel, but in elite areas such as London’s West End retailers had an established clientele and a new face was likely to draw attention. A few shoplifters did try their luck by making an effort to dress the part and their polite fashioning and acting skill, witnesses recall, was often masterly. But an accidental slip into plebeian manners was easily done. Three customers dressed in silk drew the suspicion of a Covent Garden shopwoman as, she explained, ‘they called me my dear in a very sociable way’.
In general, shoplifters restricted themselves to plundering smaller local shops that were convenient to reconnoitre and with fewer staff to mount surveillance. A mapping of incidents in London shows this bias towards poorer and less fashionable districts, particularly to the north and east of the capital. The research found that within these working neighbourhoods shoplifted goods played an instrumental role in the intricate social and economic relations that underpinned community survival. Local associates earned money selling or pawning goods for the thief, their reputation serving to give the transaction an added credibility. Neighbours were informally sold stolen items on favourable terms, often including an element of exchange and credit, which acted to secure their complicity and future loyalty. We also come across shoplifted goods that were pawned to fund the shoplifter’s ongoing business or even recommodified as stock for their small retail concerns. Need rather than consumption fever motivated these shoplifters. Shoplifting was a capital crime throughout the century but this seems to have been of very little moment when the dictate was economic survival. As a shoplifter bluntly testified of her friend in 1747, ‘The prisoner came to me to go with her to the prosecutor’s shop, she wanted money, and she should go to the gallows’.
Since the Victorian period, it has been commonly assumed that inventors were rarely remunerated for their inventions. To contemporaries they were ‘the miserable victim of [their] own powerful genius’, ‘Martyrs of Science’ who worked ‘alone, unfriended, solitary’, while ‘the recorded instances of the[ir] martyrdom would be a task of enormous magnitude’. Prominent examples of important inventors from the industrial revolution period who had the misfortune to die in penury (the steam engineer Richard Trevithick, for example) have meant that this view has passed into the modern literature almost without scrutiny.
This assumption, though, is significant, as it directly informs how we might explain probably ‘the’ big problem in economic history: what were the origins of the industrial revolution and, concomitantly, of modern economic growth? In particular, if inventors did usually fail to obtain financial rewards, this precludes potential explanations of the industrial revolution that invoke incentives to explain the actions of those who invented and commercialised the new technology industrialisation required. It also precludes the applicability of endogenous growth theory to the industrial revolution (theory which has earned two of its progenitors 2018 Nobel prizes), as it assumes that profit incentives determine the amount of inventive activity that occurs.
In an attempt to determine the wealth of inventors, I have collected probate data for over 700 inventors born in Britain between 1660 and 1830, from a list first compiled by Ralf Meisenzahl and Joel Mokyr. This probate data indicates that inventors were in fact extremely wealthy. For instance, in one exercise, I compared the probated wealth of 422 inventors who died between 1800 and 1870, with that of the overall adult male population.
Table 1. Probated wealth of inventors, 1800-1870
[Table data not preserved in this version: it compares the distribution of probated wealth for the adult male population (1839-1841 and 1858) with that of inventors, across categories from ‘<£200 or no will’ upwards.]
Notes: For details on how the distribution of male probated wealth was estimated for 1839-41, and 1858, please refer to the appendix in the original article published in the Economic History Review.
The table above shows that approximately 5 to 6 percent of adult males who died in 1839-41 and 1858 (years for which these figures can be collated) left behind wealth probated in excess of £1,000. The equivalent figure for inventors was over 60 percent. The disparity only increases as we move up through the wealth categories. Whereas only 0.16 percent of adult males left behind wealth probated in excess of £50,000 in 1858 (one in 650), for inventors it was 14.2 percent (one in 7).
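The relative odds implied by those shares can be checked with a line or two of arithmetic, using the rounded figures as reported:

```python
# Shares as quoted in the text (rounded). The 'one in N' figures and the
# relative likelihood of leaving >£50,000 follow directly.
adult_male_share = 0.0016   # adult males leaving >£50,000 probated, 1858 (0.16%)
inventor_share = 0.142      # inventors leaving >£50,000 probated (14.2%)

print(round(1 / inventor_share))                 # 7  -> 'one in 7' inventors
print(round(inventor_share / adult_male_share))  # 89 -> inventors ~89x as likely
```

On these rounded shares, an inventor was thus nearly ninety times as likely as a typical adult male to leave an estate probated above £50,000.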
It does not, however, automatically follow that the wealth of inventors was actually derived from their inventions. These were presumably talented individuals and their income may have been accrued over the course of a ‘normal’ business career and/or inherited. Unfortunately, this is a prohibitively difficult subject to approach directly: accounts rarely survive for these inventors and in any case, it is doubtful whether income from an invention could be neatly distinguished from ‘normal’ business income. As an indirect approach, I have also collected probate information for the brothers of inventors. Brothers are an especially apposite group for comparison: they would have enjoyed a very similar inheritance to their brothers (although inheriting financial capital appears to have mattered less than inheriting social capital) and they tended to enter similar occupations to their (inventive) brothers. Indeed, 24 of the inventors in the entire dataset were related as brothers – the talents and opportunities required to become an inventor were clearly not evenly distributed among the adult male population.
For 143 of the 422 inventors discussed in table 1, it was possible to confirm the existence of at least one adult brother who reached at least the age of 25 and who died in Britain between 1800 and 1870 (253 brothers in total). In the table below, the top row divides these 143 inventors into the same wealth categories as those used in the table above, with the number in parentheses denoting how many of the 143 inventors are in each category. The columns beneath this then show the distribution of the wealth of their brothers. So, there are 25 inventors in this exercise whose estate was worth less than £200. Of their 45 brothers, 31 also left behind less than £200. Three had probated wealth between £200 and £1,000, nine between £1,000 and £10,000, and two between £10,000 and £50,000. None left behind more than £50,000.
Table 2. Brother’s Probates, 1800-1870
[Table data not preserved in this version: the top row groups the 143 inventors into the wealth categories of Table 1 – <£200 (25 inventors), £200-£1,000 (11), £1,000-£10,000 (35), £10,000-£50,000 (44), and >£50,000 (the remainder) – with the distribution of their brothers’ probated wealth shown beneath each.]
Notes: as Table 1
Overall, if inventors were wealthier than their brothers, then the latter should be concentrated at the top and to the right of the table, and away from the bottom left corner. Clearly, they are – overwhelmingly so when one considers how important simple happenstance can be in influencing an individual’s financial success over the course of their career.
Previous work has relied on impressionistic evidence to suggest that inventors in this period rarely obtained financial rewards commensurate with their technical achievements. Probate information, though, shows that inventors were extremely wealthy relative to the adult male population. Inventors were also significantly wealthier than another group who would have received a similar inheritance (in terms of both financial and social capital) and entered similar occupations: their brothers. Their additional wealth was derived from inventive activities: invention paid.
Economic activity is highly unevenly distributed across geographical space. This is reflected in the existence of cities as well as the concentration of economic functions in specific locations within cities, such as Manhattan in New York and the Square Mile in London. Understanding the strength of the forces of agglomeration that underlie these concentrations of economic activity is central to a range of policy questions.
What makes cities thrive? Is it proximity to natural resources – such as rivers, oceans, and energy sources – that make places attractive for firms to locate production? Is it shared amenities – such as leafy streets and scenic views – that make them attractive places for people to live? Or does the cumulative effect of growing population density itself make cities more productive, thereby attracting more firms and workers, boosting productivity further and raising demand for services, such as shops, cafés, and theatres?
Because of the complex history of many cities, identifying the sources of urban development is difficult. This study, for which its authors have recently been awarded the prestigious Frisch Medal, develops a model that shows a positive relationship between urban density and productivity growth in a virtuous circle of ‘cumulative causation’. The researchers then apply their model to the unique natural experiment of the construction and demolition of the Berlin Wall – and the impact on economic activity in neighbouring locations.
When Berlin was divided at the end of the Second World War, the western part lost access to the heart of the city; when the wall came down in 1989, the city was reunified. The researchers track the fortunes of West Berlin, which remained a market economy during the 41-year period of division, collecting data on employment, population, and rents between the 1930s and the 2000s.
They find that property prices and economic activity in the eastern side of West Berlin, close to the historic central business district in East Berlin, began to fall when the city was divided. Then, after reunification, the same area began to redevelop: West Berlin suddenly had access to all the knowledge and public resources in the resurgent central business district that it had previously been denied. That spurred development in these areas, raising land prices close to the central business district and demonstrating the positive effect of exposure to density in neighbouring areas.
The model is successful in explaining the observed reorganisation of economic activity within West Berlin not only qualitatively but also quantitatively. What’s more, it has practical applications for urban planners making decisions on infrastructure and housing. For example, if a city is considering building a subway, the model can be used to predict how property prices are likely to respond.
The model also makes it possible to simulate what will happen to places close to proposed new infrastructure, and the potential economic spillovers to other locations. And it can show when improving one area is likely to hurt another, as firms and workers move away to better-connected and more desirable locations.
In his seventh-century Etymologies, Isidore of Seville wrote ‘bees originate from oxen, just as hornets come from horses, drone bees from mules, and wasps from asses’, reflecting the belief that bees were the tiniest of birds, which sprang spontaneously from the putrefying flesh of cows. Such ideas were not new to the Middle Ages: they had been common since Antiquity, when Pliny the Elder commented that dead bees could be brought back to life if covered with mud and the carcass of an ox.
Yet despite this peculiar (to modern eyes) belief, medieval people were in fact keen observers of the natural world. They knew that there was a larger bee which was especially important—although they thought this was a king, rather than a queen—which the other bees protected, even to the death. They knew that bees lived in well-ordered communities, where every bee had a particular task which it dutifully carried out. They especially emphasized worker bees, which went out tirelessly collecting dew, from which they thought honey came, and flowers, which they thought turned to wax. But they observed no mating in bee colonies, and the implications of this were profound. Medieval theologians associated the virginity and chastity of bees with the two figures whose virginity and chastity were central to the Christian faith: Christ and Mary. This religious symbolism had a singularly important practical consequence, for it meant that beeswax candles were required for observance of the ritual of the Mass.
Over the high and late Middle Ages, Christian religious practice became increasingly elaborate, with a greater number of services celebrated at an expanding number of cathedrals, churches, chapels, chantries and shrines. All of these required wax candles. Candles also burned on the rood screens and before each image, shrine, and many tombs in every church in Europe. Every stage of a medieval Christian’s life, from the baptismal font to the grave, was accompanied by candles.
The imagery of light and dark, fundamental to Christian devotion, was reliant on the supply of vast quantities of beeswax for candles and torches. The cost of provisioning religious institutions with lights was significant. In England wax accounted for on average half of the total running cost of the main chapel of major religious institutions and, apart from the fabric and bells, was the most expensive single item in parish churches. The need for wax across medieval Europe was continuous and persistent, yet the extent and significance of the production, trade, and consumption of wax has yet to be fully considered.
Figure 1. Bees (apes) are so-called because they are born without feet. A medieval bestiary
By permission of the British Library: Bestiary: BL Royal MS 12 C XIX f45r
Where did this beeswax come from? Although demand for wax was high across Europe, production itself was unevenly spread. In northern and central Europe, high medieval urbanization and settlement expansion came at the expense of favourable bee habitats. This meant that the areas with the greatest need for wax were precisely those least able to meet demand through local production. These regions were therefore especially attractive to merchants bringing wax from the Baltic hinterland, where large-scale sylvan wax production took place in forests which had not been felled to make room for arable fields. This high-quality wax became an important feature of Hanseatic trade, and a brisk westward trade brought this wax ‘de Polane’ to England and Bruges, where eager buyers were readily found.
Yet even this thriving international trade was not enough to meet the demand for wax from the c.9,000 parish churches which existed in England by the early fourteenth century. Comparing the total amount of wax needed for basic religious observance with wax imports suggests that foreign wax accounted for only a fifth of the amount of wax needed in England before 1475. The remaining wax must have been the product of hundreds of thousands of skeps kept by small domestic producers. This local beekeeping is almost invisible in manorial documents, and it is only by considering the total demand for wax that the importance of beekeeping within the peasant economy becomes apparent.
What emerges, then, is a dual economy for wax. Wealthy religious institutions attracted merchants bringing high-quality Baltic wax in great quantities, demonstrating that geographically peripheral areas were not only vital to European trade, but that the cultural practices of high and late medieval society were dependent on these regions. At the same time, small producers found ready markets for the product of their hives in their local parish churches, supplying much-needed injections of income within the household economy.
Figure 2. Bees in the Luttrell Psalter
By permission of the British Library: Luttrell Psalter: BL Add MS 42130 f240r
Bees and bee products held a uniquely important place in medieval culture, and consequently in the medieval economy. In these tiny golden creatures medieval people saw something flung from Paradise, imbued with mystical qualities and powerfully symbolic. Today, as we face climate change, habitat destruction and the decline of bee colonies, we might do well to look at the natural world with something of the same wonder.
This research is being expanded in the Leverhulme project ‘Bees in the medieval world: Economic, environmental and cultural perspectives’, which will also explore the Mediterranean trade in beeswax and consider encounters between the Christian and Muslim worlds.
Every day we are confronted with new questions that require an in-depth understanding of international trade – debates on tariffs, ‘renegotiating’ NAFTA, talk of ‘no deal’ with the EU, and attacks on the WTO. But where did these institutions come from, how can we understand their economic rationale, and how can we know what share of our living standards we owe to them? Understanding the origins and consequences of different types of institution, from early nation-states to global parliaments, is the core of a growing branch of economic history.
Late sixteenth- and early seventeenth-century England was one of the most litigious societies on record. If much of this litigation was occasioned by debt disputes, a sizeable proportion involved gentlemen suing each other in an effort to secure claims to landed property. In this genre of suits, gentlemen not infrequently enlisted their social inferiors and subordinates to testify on their behalf. These labouring witnesses were usually qualified to comment on the matter at hand as a result of their employment histories. When they deposed, they might recount their knowledge of the boundaries of some land, of a deed or the like. In the course of doing so, they might also comment on all sorts of quotidian affairs. Because testifying enabled illiterate and otherwise anonymous people to speak on-record about all sorts of issues, historians have rightly regarded depositions as a singularly valuable source: for all their limitations, they offer us access to worlds that would otherwise be lost.
But we don’t know much about what labouring people thought about the prospect of testifying for (and against) their superiors, or how they came to testify in the first place. Did they think that it presented an opportunity to assert themselves? Did it – as some contemporary legal commentators claimed – provide them with an opportunity to make a bit of money on the side by ‘selling’ dubious evidence to their litigious superiors? Or were they reluctant to depose in such circumstances and, if so, why? Where subordinated individuals deposed for their ‘betters’, what was the relationship between the ‘pull’ of economic reward and the ‘push’ of extra-economic coercion?
I wrote an article that considers these questions. It doesn’t have any tables or graphs; the issues with which it’s concerned don’t readily lend themselves to quantification. Rather, this piece tries to think about how members of the labouring population conceived of the possibilities afforded to them, and the constraints imposed upon them, by dint of their socio-economic position.
In order to reconstruct these areas of popular thought, I read loads of late sixteenth- and early seventeenth-century suits from the court of Star Chamber. In these cases, labouring witnesses who had deposed for one superior against another were subsequently sued for perjury (this was typically done in an effort to have a verdict that they had helped to secure overturned). Allegations against these witnesses got traction because it was widely assumed that people who worked for their livings were poor and, as a result, would lie under oath for anyone who would pay them for doing so. Where these suits advanced to the deposition-taking phase, labouring witnesses who were accused of swearing falsely under oath and witnesses of comparable social position provided accounts of their relationship with the litigious superiors in question, or commentaries on the perceived risks and benefits of giving evidence. They discussed the economic dispensations (or the promise thereof) which they had been given, or the coercion which had been used to extract their testimony.
Taken in aggregate, this evidence suggests that members of the labouring population had a keen sense of the politics of testimony. In a dynamic and exacting economy such as that of late sixteenth- and early seventeenth-century England, where labouring people’s material prospects were inextricably linked to their reputation and ‘honesty,’ deposing could be risky. Members of the labouring population were aware of this, and many were hesitant to depose at all. Their reluctance may well have been born of an awareness that doubt was likely to be cast upon their testimony as a result of their subordinated and dependent social position, which lent credibility to accusations that they had sworn falsely for gain. More immediately, it reflected concerns about the material repercussions that they feared would follow from commenting on the affairs of their ‘betters.’ Such projections were not merely the stuff of paranoid speculation. In 1601, a carpenter from Buckinghamshire called Christopher Badger had put his mark to a statement defending a gentleman, Arthur Wright, who had frustrated efforts to impose a stinting arrangement on the common – to, as many locals claimed, the ‘damadge of the poorer sorte and to the comoditie of the riche.’ Badger recalled that one of Wright’s opponents – also a gentleman – later approached him and said ‘You have had my worke and the woorke of divers’ other pro-stinting individuals. To discourage Badger from further involvement, he added a thinly veiled threat: ‘This might be an occasion that you maie have lesse worke then heretofore you have had.’ For members of the labouring population, material circumstance often militated against opening their mouths.
But there was an irony to the politics of testimony, which was not lost on common people. If material conditions made some prospective witnesses reluctant to depose, they all but compelled others to do so (even when they expressed reservations). In some instances, labouring people’s poverty made compelling the rewards – a bit of coal, a cow, promises of work that was not dictated by the vagaries of seasonal employment, or nebulous offers of a life freed from want – that they were promised (and less often given) in return for their testimony. In others, the dependency, subordination and obligation that characterized their relations with their superiors necessitated that they speak as required, or face the consequences. In the face of such pressures, a given individual’s reservations about testifying were all but irrelevant.
 For debt and debt-related litigation, see Craig Muldrew, The Economy of Obligation: The Culture of Credit and Social Relations in Early Modern England (Basingstoke, 1998).
 For suspicions surrounding the testimony of poor and/or labouring witnesses, see Alexandra Shepard, Accounting for Oneself: Worth, Status, and the Social Order in Early Modern England (Oxford, 2015).