This blog aims to encourage discussion of economic and social history, broadly defined. We live in a time of major social and economic change, and research in the social sciences increasingly shows that a historical, long-run approach to current issues is key to understanding our times.
The Medieval Clothier is published by Boydell Press. SAVE 25% when you order direct from the publisher – offer ends on the 13th December 2018. See below for details.
Casual wage-earners dependent on wealthy entrepreneurs for their work are not just a modern phenomenon. A new book by John S. Lee, The Medieval Clothier, charts the rise of clothiers in the period 1350-1550, and the innovative ways in which they organised their workforce.
Clothiers co-ordinated the different stages of textile production and found markets for their finished cloth. They increasingly managed all the stages of cloth-making, operating what historians have called the ‘putting-out’ system. In this method of organising work, clothiers put out raw materials for spinners, weavers, fullers and other cloth-workers to process. Clothiers paid these cloth-workers, who often worked in their own homes, on a piece-rate basis rather than a regular wage.
As in the modern ‘gig’ economy, the benefits of this system were hotly contested. Cloth-workers enjoyed the independence to choose their own hours and to combine their craft with other activities; clothiers incurred no overheads for work done in the homes of their workers and benefitted from lower labour costs. When cloth-workers protested in Suffolk in 1525, their spokesman, John Green, explained that the clothiers
‘give us so little wages for our workmanship that scarcely we be able to live, and thus in penury we pass the time, we our wives and children.’
A group of weavers accused ‘the rich men, the clothiers’ of setting a single price for their work. Others complained that clothiers reimbursed their workers in ‘pins, girdles, and other unprofitable wares’ rather than in cash. Clothiers were even accused in 1549 of paying poor labourers with ‘soap, candles, rotten cloth, stinking fish, and such like baggage’.
Local and national governments responded with ambivalence. On the one hand, cloth exports brought welcome customs revenue. On the other, governments were aware that disruptions to overseas cloth sales created unemployment and unrest. The putting-out system relied on outworkers, whom the clothier paid only when work was available, and on keeping labour costs low. Following disruption to overseas markets, the government tried to prevent clothiers from laying off their workers. Even the king’s leading minister, Cardinal Wolsey, pressurised London merchants to continue buying cloth from the clothiers.
A few clothiers were able to amass great wealth from this industry, construct lavish mansions and erect elaborate church memorials, which can still be seen today. Thomas Paycocke’s house at Coggeshall, Essex, built to impress in 1509-10 with its stunning woodcarving and elaborate panelling, is now a National Trust property. The wealth of Thomas Spring III, ‘the rich clothier’ of Lavenham, Suffolk, caught the attention of the royal court’s poet, John Skelton, in 1522. The screen constructed to surround Spring’s tomb in Lavenham church in Suffolk engaged craftsmen familiar with commissions for the royal court.
Clothiers who profited from their trade often remembered their workers in their wills. Thomas Paycocke, who died in 1518, left bequests in his will to ‘my weavers, fullers and shearmen’, with additional sums for those ‘that have wrought me very much work’. Paycocke’s bequests to his workers, which totalled £4, may have stretched to as many as 240 workers, while those of Thomas Spring II of Lavenham, who died in 1486, may have supported nearly 4,000 workers. Both these clothiers operated large-scale production through the putting-out system, although exactly how large must remain a matter for discussion.
Cloth-making became England’s leading industry in the late Middle Ages – no other industry created as much employment or generated as much wealth. By the 1540s, perhaps as many as 1 in 7 of the country’s workforce were making cloth and 1 in 4 households were involved in spinning. This book offers the first recent survey of this hugely important trade and its practitioners, examining the clothiers and their impact within the industry and in their wider communities.
Intended for the general reader, as well as students and academics, this book is the first in a new series – Working in the Middle Ages – which will examine different trades, professions and industries. The series aims to provide authoritative, accessible guides to medieval trades and professions, offering surveys of their origins and development, alongside the practicalities of the occupation.
New proposals for the series are welcomed, and should be sent to the series editor, Dr James Davis, School of History, Queens University Belfast (firstname.lastname@example.org) or to the Editorial Director (Medieval Studies), Boydell and Brewer (email@example.com)
This article is published in the Economic History Review and is available here.
Gender discrimination, in the form of sex-selective abortion, female infanticide and the mortal neglect of young girls, constitutes a pervasive feature of many contemporary developing countries, especially in South and East Asia and Africa. Son preference stemmed from economic and cultural factors that have long influenced the perceived relative value of women in these regions and resulted in millions of “missing girls”. But, were there “missing girls” in historical Europe? The conventional narrative argues that there is little evidence for this kind of gender discrimination. According to this view, the European household formation system, together with prevailing ethical and religious values, limited female infanticide and the mortal neglect of young girls.
However, several studies suggest that parents treated their sons and daughters differently in 19th-century Britain and continental Europe (see, for instance, here, here or here). These authors stress that an unequal allocation of food, care and/or workload negatively affected girls’ nutritional status and morbidity, which translated into reduced heights and higher mortality rates. In order to provide more systematic historical evidence of this type of behaviour, our research (with Domingo Gallego-Martínez) relies on sex ratios at birth and at older ages. In the absence of gender discrimination, the number of boys per hundred girls in different age groups is remarkably regular, so comparing the observed figure with the expected (gender-neutral) sex ratio permits assessing the cumulative impact of gender bias in peri-natal, infant and child mortality and, consequently, the importance of potential discriminatory practices. However, although non-discriminatory sex ratios at birth revolve around 105-106 boys per hundred girls in most developed countries today, historical sex ratios cannot be compared directly to modern ones.
We have shown here that non-discriminatory infant and child sex ratios were much lower in the past. The biological survival advantage of girls was more visible in the high-mortality environments that characterised pre-industrial Europe, due to poor living conditions, lack of hygiene and the absence of public health systems. Consequently, boys suffered relatively higher mortality rates both in utero and during infancy and childhood. Historical infant and child sex ratios were therefore relatively low, even in the presence of gender-discriminatory practices. This is illustrated in Figure 1 below, which plots the relationship between child sex ratios and infant mortality rates using information from seventeen European countries between 1750 and 2001. In particular, in societies where infant mortality rates were around 250 deaths per 1,000 live births, a gender-neutral child sex ratio should have been slightly below parity (around 99.5 boys per hundred girls).
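To make the benchmarking logic concrete, here is a minimal sketch. The linear calibration is an illustrative assumption anchored on the two reference points quoted above (roughly 105.5 boys per 100 girls at very low infant mortality, about 99.5 at 250 deaths per 1,000 live births); the study itself estimates this relationship econometrically from the seventeen-country dataset rather than by interpolation.

```python
# Illustrative benchmark: the gender-neutral child sex ratio implied by a
# given infant mortality rate (IMR), assuming a linear relationship
# calibrated to the two reference points mentioned in the text.

def expected_sex_ratio(imr: float) -> float:
    """Gender-neutral child sex ratio (boys per 100 girls) for an IMR
    expressed as deaths per 1,000 live births."""
    slope = (99.5 - 105.5) / 250.0   # fall in the ratio per extra IMR point
    return 105.5 + slope * imr

def excess_ratio(observed: float, imr: float) -> float:
    """Observed ratio minus the mortality-adjusted benchmark; positive
    values are consistent with excess female mortality."""
    return observed - expected_sex_ratio(imr)

print(expected_sex_ratio(250))    # ~99.5, the benchmark quoted above
print(excess_ratio(115.0, 250))   # ~15.5 'surplus' boys per 100 girls
```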
Compared to this benchmark, infant and child sex ratios in 19th-century Spain were abnormally high (see black dots in Figure 1 above; the number refers to the year of the observation), thus suggesting that some sort of gender discrimination was unduly increasing female mortality rates at those ages. This pattern, which is not the result of under-enumeration of girls in the censuses, mostly disappeared at the turn of the 20th century. Although average sex ratios remained relatively high in nineteenth-century Spain, some regions exhibited even more extreme figures. In 1860, 54 districts (out of 471) had infant sex ratios above 115, figures that are extremely unlikely to have occurred by chance. Relying on an extremely rich dataset at the district level, our research analyses regional variation in order to examine what lies behind the unbalanced sex ratios. Our results show that the presence of wage labour opportunities for women and the prevalence of extended families in which different generations of women cohabited had beneficial effects on girls’ survival. Likewise, infant and child sex ratios were lower in dense, more urbanized areas.
This evidence thus suggests that discriminatory practices with lethal consequences for girls constituted a veiled feature of pre-industrial Spain. Excess female mortality was not necessarily the result of active ill-treatment of young girls; it could have arisen simply from an unequal allocation of resources within the household, a disadvantage that probably accumulated as infants grew older. In contexts where infant and child mortality was high, slight discrimination in the way young girls were fed or treated when ill, or in the amount of work entrusted to them, was likely to result in more girls dying from the combined effect of undernutrition and illness. Although female infanticide or other extreme versions of mistreatment of young girls may not have been a systematic feature of historical Europe, this line of research points to more passive, but pervasive, forms of gender discrimination that also resulted in a significant fraction of missing girls.
Historical research on labour and wages has received considerable attention for both industrial and post-industrial societies. Even in the contemporary period, issues surrounding work and remuneration, such as growing inequality and the gender pay gap, are regularly debated. Indeed, modern English society is currently dealing with the fallout of deindustrialisation in these areas, and the idea of a universal basic income is gaining traction.
But for the more distant past, understanding these issues often becomes a battle with shadows. My research uses a new method for computing real wages (income adjusted for cost of living) for agricultural labourers in medieval England.
An accurate understanding of these wages is critically important for our conceptions of historic economic development, especially as existing scholarship on medieval wage rates is incompatible with the most recent estimates of historical GDP, and therefore with our understanding of precisely how, when and why Western Europe grew rich while other parts of the world did not.
Current scholarship on wages and labour before 1500 tends to rely heavily on extrapolation and interpolation. My project employs a methodology that connects wage payments to precise data on the number of days worked by individual labourers and on the prices of the goods that these same individuals needed to purchase, facilitating the creation of a wage series based entirely on accurate historical data.
The systematic analysis and quantification of wage levels for the medieval period has been frustrated by the relative scarcity of records or, even where records are plentiful, by the inconsistency or obscurity with which wage levels are framed. As a result, current discussions of wages and labour before 1500 lack the bite of systematic analysis grounded in precise evidence, which explains the divergence in results that we currently see in the literature.
My study attempts to break through this impasse by adopting a new method for determining the wage profile of workers on medieval English demesnes (the home farms of lords as against those of their tenants). It uses uniquely detailed agricultural accounts from these demesnes, which survive in tens of thousands for the period of this study (c. AD 1250 – AD 1450).
The method depends on connecting precise data on wages paid both in cash and ‘in kind’ in a manner that allows wages to be calculated without the distorting effect of proxy measurements. This approach promises to facilitate the creation of an accurate wage series for medieval England, based entirely on historical data across both region and time, and to allow surveys of the degree of female and male labour evident in medieval demesne agriculture.
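The post does not spell out the computation itself, so the following is only a schematic sketch of the kind of calculation described: daily income (cash plus in-kind payments, valued at recorded prices) deflated by the cost of a consumption basket priced from the same accounts. The class name, field names and all numbers are hypothetical illustrations, not the project's actual data or method.

```python
# Schematic real-wage calculation: (income per day worked) / (basket cost),
# combining cash and in-kind payments as described above. All values are
# invented for illustration.

from dataclasses import dataclass

@dataclass
class WagePayment:
    days_worked: float
    cash_pence: float        # cash component of the recorded payment
    in_kind_pence: float     # payment in kind, valued at recorded prices

def real_daily_wage(payments: list[WagePayment],
                    basket_cost_pence: float) -> float:
    """Daily income deflated by the daily cost of a consumption basket."""
    total_income = sum(p.cash_pence + p.in_kind_pence for p in payments)
    total_days = sum(p.days_worked for p in payments)
    return (total_income / total_days) / basket_cost_pence

# Example: a labourer paid partly in grain over two hiring spells
spells = [WagePayment(40, 60.0, 30.0), WagePayment(25, 45.0, 15.0)]
print(real_daily_wage(spells, basket_cost_pence=1.5))  # ~1.54 baskets/day
```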
by Stephen Broadberry (Oxford University) and Mark Harrison (University of Warwick)
There has always been disagreement over the origins of the Great War, with many authors offering different views of the key factors. One dimension concerns whether the actions of agents should be characterised as rational or irrational. Avner Offer continues to take the popular view that the key decision-makers were irrational in the common meaning of “stupid”, arguing that “(t)he decisions for war were irresponsible, incompetent, and worse”. Roger Ransom, by contrast, uses behavioural economics to introduce bounded rationality for the key decision-makers. In his view, over-confidence caused leaders to gamble on war in 1914. At this stage, they expected a large but short war, and when a quick result was not achieved, they faced the decision of whether to continue fighting or to seek a negotiated settlement. Here, Ransom views the decisions of leaders to continue fighting as driven by a concern to avoid being seen to lose the war, consistent with the predictions of prospect theory, whereby people are more concerned about avoiding losses than making gains. Mark Harrison sees the key decision-makers as acting rationally in the sense of standard neoclassical economics, choosing war as the best available option in the circumstances they faced. For Germany and the other Central Powers in 1914, the decision for war reflected a rational pessimism: locked in a power struggle with the Triple Entente, they had to strike then because their prospects of victory would only get worse.
Almshouses in Early Modern England is published by Boydell Press. SAVE 25% when you order direct from the publisher – offer ends on the 13th December 2018. See below for details.
Almshouses, charitable foundations providing accommodation for poor people, are a feature of many towns and villages. Some are very old, with their roots in medieval England as monastic infirmaries for the sick, pilgrims and travellers, or as chantries offering prayers for the souls of their benefactors. Many survived the Reformation to be joined by a remarkable number of new foundations between around 1560 and 1730. For many of them, the principal purpose was memorialisation and display: they were tangible representations of the philanthropy of their wealthy donors. But they are also among the few examples of poor people’s housing to have survived from the early modern period, so can they tell us anything about the material lives of the people who lived in them?
Paul Slack famously referred to almspeople as ‘respectable, gowned Trollopian worthies’, and there are many examples to justify that view, for instance Holy Cross Hospital, Winchester, refounded in 1445 as the House of Noble Poverty. But these are not typical. Nevertheless, many early modern almshouse buildings are instantly recognisable, with the ubiquitous row of chimneys often the first indication of the identity of the building.
Individual chimneys and, often, separate front doors are evidence of private domestic space, far removed from the communal halls of the earlier medieval period or the institutional dormitories of the nineteenth-century workhouses which came later. Accommodating almspeople in their own rooms was not just a reflection of general changes in domestic architecture at the time, which placed greater emphasis on comfort and privacy, but represented a change in how almspeople were viewed and how they were expected to live their lives. Instead of living communally with meals provided, residents of the majority of post-Reformation almshouses would have lived independently, buying their own food, cooking it themselves on their own hearth and eating it by themselves in their rooms. The hearth was important not only as the practical means of heating and cooking but also as something central to questions of identity and social status. Together with individual front doors, these features gave occupants a degree of independence and autonomy; they enabled almspeople to live independently despite their economic dependence, and to adopt the appearance, if not the reality, of independent householders.
The retreat from communal living also meant that almspeople had to support themselves rather than have all their needs met by the almshouse. This was achieved in many places by a transition to monetary allowances or stipends with which almspeople could purchase their own food and necessities, but the existence and level of these stipends varied considerably. Late medieval almshouses often specified an allowance of a penny a day, which would have provided a basic but adequate living in the fifteenth century, but was seriously eroded by sixteenth-century inflation. Thus when Lawrence Sheriff, a London mercer, established in 1567 an almshouse for four poor men in his home town of Rugby, his will gave each of them the traditional penny a day, or £1 10s 4d a year. Yet with inflation, if these stipends were to match the real value of their late-fifteenth-century counterparts, his almsmen would actually have needed £4 5s 5d a year.
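The pre-decimal arithmetic here is easy to mislay (12 pence to the shilling, 20 shillings to the pound), so the following sketch reproduces the figures above. The 364-day year (52 weeks) is an inference from the quoted total, and the price-rise factor is simply backed out of the two sums rather than taken from the Phelps Brown and Hopkins index itself.

```python
# Worked arithmetic for the stipend sums quoted above.

def to_lsd(pence: int) -> str:
    """Convert pence to pounds/shillings/pence (240d = £1, 12d = 1s)."""
    pounds, rem = divmod(pence, 240)
    shillings, pennies = divmod(rem, 12)
    return f"£{pounds} {shillings}s {pennies}d"

annual_pence = 1 * 364                 # a penny a day over 52 weeks
print(to_lsd(annual_pence))            # £1 10s 4d, as in the text

# Implied price rise between the late fifteenth century and 1567,
# backed out of the two quoted sums (£4 5s 5d versus £1 10s 4d):
needed_pence = 4 * 240 + 5 * 12 + 5    # £4 5s 5d = 1,025 pence
print(round(needed_pence / annual_pence, 2))   # ~2.82-fold increase
```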
The nationwide system of poor relief established by the Tudor Poor Laws, and the survival of poor relief accounts from many parishes by the late seventeenth century, provide an opportunity to see the actual amounts disbursed in relief by overseers of the poor to parish paupers. From the level of payments made to elderly paupers no longer capable of work, it is possible to calculate the barest minimum which an elderly person living rent-free in an almshouse might have needed to feed and clothe themselves and keep warm. Such a subsistence level in the 1690s equates to an annual sum of £3 17s, which can be adjusted for inflation and compared with a range of known almshouse stipends from the late sixteenth and seventeenth centuries.
The results of this comparison are interesting, even surprising. Using data from 147 known almshouse stipends in six counties (Durham, Yorkshire, Norfolk, Warwickshire, Buckinghamshire and Kent), it seems that fewer than half of early modern almshouses provided their occupants with stipends sufficient to live on. Many provided no financial assistance at all.
The inescapable conclusion is that the benefits provided to early modern almspeople were in many cases only a contribution towards their subsistence. In this respect almshouse occupants were no different from the recipients of parish poor relief, who rarely had their living costs met in full.
Yet, even in one of the poorer establishments, almshouse residents had distinct advantages over other poor people. Principally these were the security of their accommodation, the permanence and regularity of any financial allowance, no matter how small, and the autonomy this gave them. Almshouse residents may also have had an enhanced status as ‘approved’, deserving poor. The location of many almshouses, beside the church, in the high street, or next to the guildhall, seems to have been purposely designed to solicit alms from passers-by, at a time when begging was officially discouraged.
SAVE 25% when you order direct from the publisher. Discount applies to print and eBook editions. Click the link, add to basket and enter offer code BB500 in the box at the checkout. Alternatively call Boydell’s distributor, Wiley, on 01243 843 291 and quote the same code. Offer ends one month after the date of upload. Any queries please email firstname.lastname@example.org
Inflation index derived from H. Phelps Brown and S. V. Hopkins, A Perspective of Wages and Prices (London and New York, 1981), pp. 13-59.
L. A. Botelho, Old Age and the English Poor Law, 1500-1700 (Woodbridge, 2004), pp. 147-8.
by Joel Mokyr (Northwestern University), Assaf Sarid (Haifa University), Karine van der Beek (Ben-Gurion University)
The main manifestation of the industrial revolution that took place in Britain in the second half of the eighteenth century was the shift of textile production (specifically, the spinning process) from a cottage-based manual system to a factory-based, capital-intensive system, with machinery driven by water power and later by steam.
The initial shift in production technology in the 1740s took place in all the main textile centres (the Cotswolds, East Anglia, and the mid-Pennines in Lancashire and the West Riding). But towards the end of the century, as the intensity of production and the application of Watt’s steam engine increased, the supremacy of the cotton industry in the northwestern parts of the country began to show, and this is where the industrial revolution eventually took hold and persisted.
Our research examines the role of factor endowments in determining the location of technology adoption in the English textile industry and its persistence since the Middle Ages. In line with recent research on economic growth, which emphasises the role of factor endowments on long run economic development, we claim that the geographical and institutional environment determined the location of watermill technology adoption in the production of foodstuffs.
In turn, the adoption of the watermill for grain grinding (around the tenth and eleventh centuries), affected the area’s path of development by determining the specialisation and skills that evolved, and as a result, its suitability for the adoption of new textile technologies, textile fulling (thirteenth and fourteenth centuries) and, later on, spinning (eighteenth century).
The explanation for this path dependence is that all these machines, like other machinery developed for various production processes (such as sawing mills, forge mills and paper mills), were based on the same mechanical principles as the grinding watermills. Thus, their implementation did not require additional resources or skills, and it was therefore more profitable to invest in them and expand textile production in places that were specialised and experienced in the construction and maintenance of grinding watermills.
As textile exports expanded in the second half of the eighteenth century (in both woollen and cotton textiles), Watt’s steam engine was introduced. The watermills that operated the newly introduced spinning machinery began to be replaced by the more efficient steam engines, and had almost disappeared by the beginning of the nineteenth century. This stage of technological change took place in Lancashire’s textile centre, which enjoyed proximity both to coal and to strong water flows, and was therefore suitable for the implementation of steam engine technology.
We use information from a variety of sources, including the Apprenticeship Stamp-Tax Records (eighteenth century), Domesday Book (eleventh century) and geographical databases, and show that the important English textile centres of the eighteenth century evolved in places that had more grinding watermills at the time of the Domesday Survey (1086).
To be more precise, we find that, on average, areas with three more watermills in 1086 had one additional textile merchant in 1710. The magnitude of this effect is important given that there were on average only 1.2 textile merchants per area (the maximum was 34).
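A back-of-the-envelope reading of that effect size (illustrative arithmetic only, not the paper's estimation) shows why it is large relative to the sample mean:

```python
# Implied effect of three extra Domesday watermills on merchant counts,
# expressed relative to the average number of merchants per area.
coefficient = 1 / 3          # extra 1710 merchants per 1086 watermill
mean_merchants = 1.2         # sample average per area (maximum was 34)

effect_of_three_mills = 3 * coefficient
print(effect_of_three_mills / mean_merchants)  # ~0.83: about 83% of the mean
```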
We also find that textile centres in these areas persisted well into the eighteenth century and specialised in skilled mechanical human capital (measured by the number of apprentices to masters specialising in watermill technology, that is, wrights, in the eighteenth century), which was essential for the development, implementation and maintenance of waterpower as well as mechanical machinery.
The number of such workers increased from the 1750s in all the main textile centres until the 1780s, when it began to decline in Lancashire as the region adopted a new technology that no longer depended on their skills.
In this post I write about the connections between Paul Romer’s work, which is essentially applied theory, and the empirical work on long-run economic growth done by economic historians. Was Romer influenced by the work of economic historians? Has he influenced economic history? And have his theories been confirmed by the recent work of economic historians? (Preview: I will argue that the answers are: yes; not much; and no.) Nevertheless, my point is not that Romer is wrong in general; in fact some of his ideas *about ideas* are fundamental for thinking about growth in the past (read on if this isn’t clear yet).
Paul Romer’s was a well-deserved and long-anticipated prize. Many predicted he would eventually win, including myself in my very first academic article, written when I was an undergraduate and published in 2008 (ungated version here). I now find it mildly amusing how assertive I was when I wrote: “Paul Romer is going to win the Nobel Prize in economics”. I continue to believe that this was a good choice.
Many have written about the nature of his main contributions, all of which, as I have said, have been on applied theory; see, for instance, the posts in Dietrich Vollrath’s blog, here and here, or Paul Romer’s own blog.
Romer’s work had some influence on economic history, but not much. There is, for instance, a 1995 article by Nick Crafts which looks at the Industrial Revolution from a New Growth perspective, but it is fair to say that economic historians were perhaps not quick to pick up the New Growth theory train. Part of this was surely because its implications seemed to apply mostly to frontier economies and did not seem to apply to much of human history, a limitation which Unified Growth Theory would later attempt to overcome.
And yet, Romer himself has often spoken about economic history and relied on the data of economic historians. He seems to have won mostly for his 1990 article, but his earlier work on increasing returns (ungated version here) included a graph from Maddison, for instance.
One of the most empirical papers Romer has written is “Why indeed in America?”, which was the culmination of much of what he had done before. It was also one of the last papers he wrote before entering a writing hiatus. In this paper he explicitly argues for the complementarity of economic history and growth theory. He argues that the USA achieved economic supremacy after 1870 due to having the largest integrated market in the world. He writes:
“differences in saving and education do not explain why growth was so much faster in the United States than it was in Britain around the turn of this century. In 1870, per capita income in the United States was 75 percent of per capita income in Britain. By 1929, it had increased to 130 percent. In the intervening decades, years of education per worker increased by a factor of 2.2 in Britain and by a nearly identical factor of 2.3 in the United States. In 1929, this variable remained slightly lower in the United States. (Data are taken from Angus Maddison.)”
Notice that there are three empirical statements here. Romer’s story builds on these facts, so if the facts change, the story must too. Theory depends on facts.
The first fact (according to Romer) is that the US only converged to British per capita GDP levels after 1870. Second, that this was not due to matters such as education or savings. Third, the reason was market size. As economic historians, we have made much progress in measuring each of these matters since 1995. Let me consider each in turn.
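Before doing so, it is worth seeing what the first of these figures implies. This is a minimal back-of-the-envelope sketch using only the dates and ratios quoted from Romer above; the compounding formula is the only addition.

```python
# Implied US-UK relative growth, 1870-1929: US income per head was 75% of
# Britain's in 1870 and 130% in 1929, per the passage quoted above.
us_over_uk_1870 = 0.75
us_over_uk_1929 = 1.30
years = 1929 - 1870

# Average annual growth advantage needed to move between the two ratios
advantage = (us_over_uk_1929 / us_over_uk_1870) ** (1 / years) - 1
print(f"{advantage:.2%} per year")   # ~0.94% faster US growth per year
```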
Timing of convergence of the USA to Britain
The important thing to keep in mind here is that it is by no means certain that the USA had not caught up earlier. The methodological issues are complicated, and in fact the winner of today’s other (and equally deserved) Nobel prize, Nordhaus, wrote a fascinating paper about the problems involved in these types of measurement. (A popular description of this work can be read here.) As far as the USA versus Britain is concerned, though, Marianne Ward and John Devereux summarize the debate as follows:
“Prados De la Escosura (2000) and Ward and Devereux (2003, 2004, 2005) argue for an early US income lead using current price estimates. Broadberry (2003) and Broadberry and Irwin (2006) defend the Maddison projections while Woltjer (2015) hews to a middle ground. The literature has recently taken an unexpected turn as Peter Lindert and Jeffrey Williamson (2016) find a larger US lead before 1870 and one that stretches further back in time than claimed by either Prados De la Escosura (2000) or Ward and Devereux (2005).”
Comparative levels of education
Recent evidence suggests that the average years of post-primary education actually declined in Britain after about 1700 (ungated version here). This was not the case at all in the USA, where it is well known that the state invested in high schools, so it seems unlikely that average human capital grew at similar rates in the two countries in the latter part of the 19th century, as Maddison/Romer claimed.
Market size
I used to believe this part of Romer’s story. That was until I read this brilliant paper by Leslie Hannah: “Logistics, Market Size, and Giant Plants in the Early Twentieth Century: A Global View” (ungated version here). Notice that Hannah does not refer to Romer’s argument or even cite him. What he does instead is demolish the commonly held idea that the USA’s market size was larger than Europe’s before the Great War (aka World War I). It is true that the USA had more railroads, but it also had much longer distances to cover. In Northwestern Europe, transportation by a mix of ships, trains and horses was cheaper, especially once we consider the much denser (and highly urbanized) population. It is important to remember that prior to WWI, Europe was living through the “first age of globalization”, with high levels of integration and relatively low tariffs.
So, this part of Romer’s story cannot be right.
In conclusion, what does this all mean? Will these new facts affect where growth theory goes next? Only time will tell, and growth theory itself is by no means moving much these days, as Paul Romer himself has admitted in recent interviews. What these facts suggest, though, is that other things must have mattered.
As I said in the beginning, I believe that Paul Romer’s applied theory work is important (as is that of others who might have won, such as Aghion and Howitt). The natural complementarities between the work of economic historians and applied theorists suggest that we need to listen to each other in order for science to move forward. Hopefully, new generations of economists will do a bit of both, as have some people who now work on Unified Growth.
But in the future, it is only fair that the Nobel committee gives more prizes to empirical work as well: theory can’t live without facts, yet economics Nobels have been heavily biased towards theorists (whether pure or applied).
Extending voting rights to broader segments of the population considerably affects the way countries finance their economies. This is the key finding of our new research, recently published in the Economic Journal and available here.
Financial systems fulfil a number of key functions in the economy, thereby contributing to its growth. By transferring funds from savers/investors to borrowers such as households and firms, financial systems are the oil for the wheels that keep the economy turning.
Therefore, it is vital to have a clear understanding of the fundamental factors that support the development of financial systems. The research shows that suffrage institutions – that is, the institutions defining who holds the right to vote in the population – play a critical role.
Financial systems encompass financial institutions (such as banks) and financial markets (such as stock markets). The population is composed of corporate stakeholders (workers, investors, managers) and is thus not indifferent as to whether governments should promote – through their policy choices – the development of stock markets or the banking sector. Both fulfil similar functions in the economy, but they have a different impact on corporate stakeholders because they affect differently the degree to which each stakeholder bears corporate risk. Stock markets lead to riskier but more profitable investments, at the cost of potentially greater risk borne by labor. In contrast, banks tend to limit the risk-taking behavior of corporate managers because, as debtholders, they do not benefit from the upside potential of riskier investments.
The voting population will prefer to support bank finance politically if it relies more, in the aggregate, on labor income. However, it will prefer stock market finance if it has a sufficient amount of capital income relative to labor income. By defining who has the right to vote, suffrage institutions thus play a pivotal role in the way countries finance their economies.
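A toy illustration of this aggregation logic may help; it is my own simplification for exposition, not the paper's model, and all the numbers are invented. Each voter prefers the system that favours their income mix, and the enfranchised majority decides policy.

```python
# Stylized majority-voting sketch: broadening the franchise to voters who
# rely on labor income tilts policy from stock markets toward banks.

def prefers_banks(labor_income: float, capital_income: float) -> bool:
    # Bank finance limits the corporate risk borne by labor income;
    # stock market finance rewards capital income.
    return labor_income > capital_income

def policy_outcome(enfranchised: list[tuple[float, float]]) -> str:
    votes_for_banks = sum(prefers_banks(l, k) for l, k in enfranchised)
    return ("bank finance" if votes_for_banks * 2 > len(enfranchised)
            else "stock market finance")

# A narrow, capital-rich franchise versus one extended to wage earners
elite = [(10.0, 50.0), (20.0, 80.0), (15.0, 60.0)]
broad = elite + [(30.0, 2.0)] * 7
print(policy_outcome(elite))   # stock market finance
print(policy_outcome(broad))   # bank finance
```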
Our research analyzes the gradual extensions of suffrage to various segments of the population over the nineteenth and twentieth centuries in 18 countries. It demonstrates that suffrage extensions change the political preferences of the voting population and, thereby, policy choices supporting either stock market finance or bank finance. Specifically, it provides empirical evidence that extending suffrage to broader segments of the population hampers the development of stock markets. In contrast, broadening suffrage is conducive to banking sector development.
Further evidence reveals longer-term effects of suffrage extension: a 25-year delay in the introduction of universal female suffrage increases the importance of stock markets relative to the banking sector today by 17.5%.
Overall, these findings are consistent with the insight that small elites pursue economic opportunities by promoting capital raised on stock markets. In contrast, a broader political participation empowers a middle class which prefers bank finance as it is composed of voters with proportionally more exposure to labor income relative to capital income.
The research has broader implications. The scope of voting rights may drive the adoption and content of financial regulation shaping the way that financial intermediation takes place. This ultimately determines the long-term performance of economies.
by David M. Higgins (Newcastle University), originally published on 09 October 2018 on the LSE Business Review
When doing your weekly shop, have you ever noticed the small blue/yellow and red/yellow circles that appear on the wrappers of Wensleydale cheese or Parma ham? Such indicia are examples of geographical indications (GIs), or appellations: they show that a product possesses certain attributes (taste, smell, texture) that are unique to it and can only be derived from a tightly demarcated and fiercely protected geographical region. The relationship between product attributes and geography can be summed up in one word: terroir. These GIs formed an important part of the EU’s agricultural policy launched in 1992, represented by the PDO and PGI logos, which sought to insulate EU farmers from the effects of globalisation by encouraging them to produce ‘quality’ products that were unique.
GIs have a considerable lineage: legislation enacted in 1666 reserved the sole right to ‘Roquefort’ to cheese cured in the caves at Roquefort. Until the later nineteenth century, domestic legislation was the primary means by which GIs were protected from misrepresentation. Thereafter, the rapid acceleration of international trade necessitated global protocols, beginning with the Paris Convention for the Protection of Industrial Property (1883) and its successors, including the Madrid Agreement for the Repression of False or Deceptive Indications of Source on Goods (1890).
The last century has witnessed unprecedented improvements in survivorship and life expectancy. In the United Kingdom alone, infant mortality fell from over 150 deaths per thousand births at the start of the last century to 3.9 deaths per thousand births in 2014 (see the Office for National Statistics for further details). Average life expectancy at birth increased from 46.3 to 81.4 years over the same period (see the Human Mortality Database). These changes reflect fundamental improvements in diet and nutrition and environmental conditions.
The changing body: health, nutrition and human development in the western world since 1700 attempted to understand some of the underlying causes of these changes. It drew on a wide range of archival and other sources covering not only mortality but also height, weight and morbidity. One of our central themes was the extent to which long-term improvements in adult health reflected the beneficial effect of improvements in earlier life.
The changing body also outlined a very broad schema of ‘technophysio evolution’ to capture the intergenerational effects of investments in early life. This is represented in a very simple way in Figure 1. The Figure tries to show how improvements in the nutritional status of one generation increase its capacity to invest in the health and nutritional status of the next generation, and so on ‘ad infinitum’ (Floud et al. 2011: 4).
We also looked at some of the underlying reasons for these changes, including the role of diet and ‘nutrition’. As part of this process, we included new estimates of the number of calories which could be derived from the amount of food available for human consumption in the United Kingdom between circa 1700 and 1913. However, our estimates contrasted sharply with others published at the same time (Muldrew 2011) and were challenged by a number of other authors subsequently. Broadberry et al. (2015) thought that our original estimates were too high, whereas both Kelly and Ó Gráda (2013) and Meredith and Oxley (2014) regarded them as too low.
Given the importance of these issues, we revisited our original calculations in 2015. We corrected an error in the original figures, used Overton and Campbell’s (1996) data on extraction rates to recalculate the number of calories, and included new information on the importation of food from Ireland to other parts of what became the UK. Our revised Estimate A suggested that the number of calories rose by just under 115 calories per head per day between 1700 and 1750 and by more than 230 calories between 1750 and 1800, with little change between 1800 and 1850. Our revised Estimate B suggested that there was a much bigger increase during the first half of the eighteenth century, followed by a small decline between 1750 and 1800 and a bigger increase between 1800 and 1850 (see Figure 2). However, both sets of figures were still well below the estimates prepared by Kelly and Ó Gráda, Meredith and Oxley, and Muldrew for the years before 1800.
These calculations have important implications for a number of recent debates in British economic and social history (Allen 2005, 2009). Our data do not necessarily resolve the debate over whether Britons were better fed than people in other countries, although they do compare quite favourably with relevant French estimates (see Floud et al. 2011: 55). However, they do suggest that a significant proportion of the eighteenth-century population was likely to have been underfed.
Our data also raise some important questions about the relationship between nutrition and mortality. Our revised Estimate A suggests that food availability rose slowly between 1700 and 1750 and then more rapidly between 1750 and 1800, before levelling off between 1800 and 1850. These figures are still broadly consistent with Wrigley et al.’s (1997) estimates of the main trends in life expectancy and our own figures for average stature. However, it is not enough simply to focus on averages; we also need to take account of possible changes in the distribution of foodstuffs within households and the population more generally (Harris 2015). Moreover, it is probably a mistake to examine the impact of diet and nutrition independently of other factors.