Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69

by Jean-Pascal Bassino (ENS Lyon) and Pierre van der Eng (Australian National University)

This blog post is based on a larger research paper published in the Economic History Review.

 

Vietnam, rice paddy. Available at Pixabay.

In the ‘great divergence’ debate, China, India, and Japan have been used to represent the Asian continent. However, their development experience is unlikely to be representative of Asia as a whole. The countries of Southeast Asia were relatively underpopulated for a considerable period, and very different endowments of natural resources (particularly land) and labour were key parameters that determined their economic development options.

Maddison’s series of GDP per capita in purchasing power parity (PPP) adjusted international dollars, based on a single 1990 benchmark and backward extrapolation, indicates that a divergence took place in nineteenth-century Asia: Japan was well above other Asian countries by 1913. In 2018 the Maddison Project Database released a new international series of GDP per capita that accommodates the available historical PPP-based converters. Owing to the very limited availability of historical PPP-based converters for Asian countries, however, the 2018 database retains many of the shortcomings of single-year extrapolation.

Maddison’s estimates indicate that Japan’s GDP per capita in 1913 was much higher than in other Asian countries, and that Asian countries started their development experiences from broadly comparable levels of GDP per capita in the early nineteenth century. This implies that an Asian divergence took place in the nineteenth century as a consequence of Japan’s economic transformation during the Meiji era (1868–1912). There is now growing recognition that reliance on a single benchmark year, and the choice of that particular year, may influence estimated historical levels of GDP per capita across countries. Relative levels for Asian countries based on Maddison’s estimates of per capita GDP are not confirmed by other indicators, such as real unskilled wages or the average height of adults.

Our study uses available estimates of GDP per capita in current prices from historical national accounting projects, and estimates PPP-based converters and PPP-adjusted GDP for multiple benchmark years (1913, 1922, 1938, 1952, 1958, and 1969) for India, Indonesia, Korea, Malaya, Myanmar (then Burma), the Philippines, Sri Lanka (then Ceylon), Taiwan, Thailand and Vietnam, relative to Japan. China is added on the basis of other studies. The PPP-based converters are used to calculate GDP per capita in constant PPP yen. GDP per capita in each country was expressed as a proportion of Japan’s GDP per capita during 1910–70 in 1934–6 yen, and then converted to 1990 international dollars using a PPP-adjusted Japanese series comparable to the US GDP series. Figure 1 presents the resulting series for the Asian countries.
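The conversion chain described above can be sketched in a few lines of Python. All figures below are hypothetical placeholders chosen purely for illustration; they are not the article’s estimates, and the country list, PPP converters, and the Japanese 1990 benchmark are assumptions.

```python
# Sketch of the PPP conversion chain (illustrative numbers only,
# not the article's estimates).

# Hypothetical nominal GDP per capita in local currency units for one
# benchmark year, and PPP converters in local-currency units per 1934-6 yen.
nominal_gdp_local = {"Japan": 230.0, "Indonesia": 55.0, "Thailand": 40.0}
ppp_local_per_yen = {"Japan": 1.0, "Indonesia": 0.85, "Thailand": 0.70}

def gdp_in_ppp_yen(country: str) -> float:
    """Convert nominal GDP per capita into constant PPP yen."""
    return nominal_gdp_local[country] / ppp_local_per_yen[country]

# Step 1: express each country as a proportion of Japan's GDP per capita.
japan_level = gdp_in_ppp_yen("Japan")
relative_to_japan = {c: gdp_in_ppp_yen(c) / japan_level
                     for c in nominal_gdp_local}

# Step 2: scale by Japan's GDP per capita in 1990 international dollars
# (a hypothetical figure here) to obtain internationally comparable levels.
JAPAN_1990_INTL_DOLLARS = 1400.0
gdp_1990_dollars = {c: r * JAPAN_1990_INTL_DOLLARS
                    for c, r in relative_to_japan.items()}
```

By construction, Japan’s relative level is 1.0, so its dollar value equals the assumed Japanese benchmark; every other country scales proportionally, which is why the quality of the Japan–US comparison underpins the whole series.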

 

Figure 1. GDP per capita in selected Asian countries, 1910–1970 (1934–6 Japanese yen)

Sources: see original article.

 

The conventional view dates the start of the divergence to the nineteenth century. Our study identifies the First World War and the 1920s as the era during which the little divergence in Asia occurred. During the 1920s, most countries in Asia — except Japan — depended significantly on exports of primary commodities. The growth experience of Southeast Asia seems to have been largely characterised by market integration in national economies and by the mobilisation of hitherto underutilised resources (labour and land) for export production. Particularly in the land-abundant parts of Asia, the opening-up of land for agricultural production led to economic growth.

Commodity price changes may have become debilitating when their volatility increased after 1913. This was followed by episodes of import-substituting industrialisation, particularly after 1945. While Japan rapidly developed its export-oriented manufacturing industries from the First World War, other Asian countries had increasingly inward-looking economies. This pattern lasted until the 1970s, when some Asian countries followed Japan on a path of export-oriented industrialisation and economic growth. For some countries this was a staggered process that lasted well into the 1990s, when the World Bank labelled this development the ‘East Asian miracle’.

 

To contact the authors:

jean-pascal.bassino@ens-lyon.fr

pierre.vandereng@anu.edu.au

 

References

Bassino, J-P. and Van der Eng, P., ‘Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69’, Economic History Review (forthcoming).

Fouquet, R. and Broadberry, S., ‘Seven centuries of European economic growth and decline’, Journal of Economic Perspectives, 29 (2015), pp. 227–44.

Fukao, K., Ma, D., and Yuan, T., ‘Real GDP in pre-war Asia: a 1934–36 benchmark purchasing power parity comparison with the US’, Review of Income and Wealth, 53 (2007), pp. 503–37.

Inklaar, R., de Jong, H., Bolt, J., and van Zanden, J. L., ‘Rebasing “Maddison”: new income comparisons and the shape of long-run economic development’, Groningen Growth and Development Centre Research Memorandum no. 174 (2018).

Link to the website of the Southeast Asian Development in the Long Term (SEA-DELT) project: https://seadelt.net

Institutional choice in the governance of the early Atlantic sugar trade: diasporas, markets and courts

by Daniel Strum (University of São Paulo)

This article is published by The Economic History Review, and it is available for EHS members here.

 

Figure 1. Cartographic chart of the Atlantic Ocean (c. 1600). Source: Biblioteca Nazionale Centrale di Firenze, Florence, Italy. Port. 27.  By kind permission of the Ministero per i Beni e le Attivitá Culturali della Repubblica Italiana.
Reproduction of this image by any means is strictly prohibited.

In the age of sailboats, how could traders be confident that the parties with whom they were considering working on the other side of the ocean would not act opportunistically? Commercial agents overseas spared merchants time and the hazards of travel and allowed them to diversify their investments; but agents might also cheat or renege on or neglect their commitments.

My research about the merchants of Jewish origin plying the sugar trade linking Brazil, Portugal and the Netherlands demonstrates that the same merchants chose different feasible mechanisms (institutions) to curb opportunism in different types of transactions. Its main contribution is to establish a clear pattern linking the attributes of these transactions to those of the mechanisms chosen to enforce them. It also shows how these mechanisms interrelated.

Around 1600, Europe experienced rapidly growing urban populations and dependence on trade for supplies of basic products, while overseas possessions contributed to a surging output of marketable commodities, including sugar. Brazil was turned into the first large-scale plantation economy and became the world’s main sugar producer, with Amsterdam emerging as its main distribution and refining centre. Most of the Brazilian sugar trade was intermediated by merchants in Portugal, and traders of Jewish origin scattered along this trade route played a prominent role in the sugar trade.  The Brazilian sugar trade required institutions with low costs in agency services and contract enforcement because it was a significantly competitive market. Its political, legal, and administrative framework raised relatively few obstacles to market entrants, and trade in a semi-luxury commodity necessitated low start-up costs.

Sources reveal that merchants of Jewish origin engaged mostly individuals of other backgrounds in transactions in which agents had little latitude, performed simple tasks over short periods, and managed small sums (see table 1). Insiders were not left out of these transactions, but the background of agents was not determinant. The research shows that these transactions were primarily enforced by an informal mechanism that linked one’s expected income to one’s professional reputation. Bad conduct led to marginalization, while good behaviour led to more opportunities from the same and other principals. This mechanism functioned among all traders active in these interconnected marketplaces, despite their differing backgrounds. The professional reputation mechanism worked because the standardization of basic mercantile practices produced a shared understanding of how trade should be conducted. At the same time, the marketplaces’ structure, together with patterns of transportation and correspondence, increased the speed, frequency, volume, and diversity of the information flow within and between these marketplaces. This information system facilitated both the detection of good and bad conduct and a relatively rapid response to news about it.

 

Figure 2. Sugar crate being weighed at the Palace Square in Lisbon. Source: Dirk Stoop – Terreiro do Paço no século XVII, 1662. Painting. Museu da Cidade, Lisboa, Portugal. MC.PIN.261. © Museu da Cidade – Câmara Municipal de Lisboa.

The professional reputation mechanism worked better for transactions involving small sums and fewer, simpler, and shorter tasks. Misconduct in these tasks was easier to detect and expose amid an extensive and heterogeneous network; and if the agent cheated, the small sums assigned were not enough to live on while forsaking trade.

 

Table 1. Backgrounds of agents in complex and simple arrangements

Type of transaction   Outsiders   Probable outsiders   Insiders   Probable insiders   Relatives
Complex                    2.6%                 4.9%      69.9%                2.1%       20.6%
Simple                    20.0%                70.0%         0%               10.0%          0%

Source: original article in the Economic History Review.

 

On the other hand, merchants of Jewish origin preferred to engage members of their diaspora in complex, larger, and longer transactions (see table 1). A reputation mechanism within the diaspora was more effective in governing transactions that were difficult to follow. Although enforcement within the diaspora benefitted from the general information system, the diaspora’s social structure generated more information, more rapidly, about the conduct of its members. In each centre insiders knew each other, and marriages and socialization within the group prevailed. Insiders usually had personal acquaintances, and often relatives, in other centres as well. They were conscious of their common history and fragile status. This social structure also provided greater economic and social incentives for honesty and diligence than the professional mechanism, making the internal mechanism preferable in transactions involving larger sums and wider latitude.

Finally, the research shows that the legal system was able to impose sanctions across wide distances and political units. Yet owing to courts’ slowness and costliness, merchants resorted to litigation only after nonjudicial mechanisms failed. Furthermore, courts could not punish inattention that did not breach legal, customary, or contractual specifications, nor could courts reward accomplishment.

Litigation had to supplement the professional mechanism because the latter’s incentives were not homogeneous across all marketplaces and diasporas. Courts also reinforced the diaspora mechanism by limiting the future income an agent could expect to gain from misappropriating large sums from one or many principals. Finally, the professional mechanism supplemented the diaspora mechanism by limiting alternative agency relations with outsiders for insiders who had engaged in misconduct.

Because merchants were capable of matching transactions with the most appropriate governing mechanisms, they were able to diversify their transactions, expand the market for agents, better allocate agents to tasks, and stimulate competition among them. The resulting decrease in agency costs was critical in a significantly competitive market such as the sugar trade. Institutional choice thus supported and reinforced—rather than caused—the expansion of exchange.

Almshouses in early modern England: charitable housing in the mixed economy of welfare 1550-1725

review by David Hitchcock (Christ Church University)

book written by Angela Nicholls

‘Almshouses in early modern England: charitable housing in the mixed economy of welfare 1550-1725’ is published by Boydell and Brewer. SAVE 25% when you order direct from the publisher – offer ends on 7th May 2019. See below for details.

 


Almshouses were ‘curious institutions’, ‘built by the rich to be lived in by the poor’ (p. 1). In the first monograph to focus exclusively on the role of early modern almshouses in welfare provision, Angela Nicholls traces not only the development of almshouse foundations and the motivations of their founders, but also, crucially, the lived experience and material benefits of an alms place for a respectable or ancient pauper in early modern English parishes. Until recently a ‘known unknown’ (p. 3) in early modern welfare history, charitable housing of any kind was of course far more than simply the provision of a roof and walls; it was also a guarantee of place, of belonging, and of social meaning within the context of parish and community. Nicholls examines the almshouse from many angles: first within an overview of early modern housing policy, and subsequently in chapters dedicated to donors and founders, to residents and their experiences, and finally to a detailed case study of the parish of Leamington Hastings. Nicholls argues that early modern almshouses were distinct from their medieval predecessors and eighteenth-century descendants for a number of reasons, not least their prominent and sustained place in the mixed economy of parish welfare between the monastic dissolution and Knatchbull’s Workhouse Test Act of 1723. The study draws broadly on evidence from three dispersed counties: Durham, Warwickshire, and Kent, and importantly uses a generous definition of what constitutes an ‘almshouse’ in the first place, thus excavating many more humble institutions than previous historiography accounts for.

Chapter one on housing policy opens with a strong statement about the quintessential purpose of Tudor and Stuart poor relief, and particularly of welfare legislation: the prevention of vagrancy and of idleness. Nicholls’ reading of the roles housing provision played within the poor laws chimes generally with the historiographical consensus, though she makes some important new suggestions. For instance, the 1547 act actually enjoined parishes to provide ‘cotages’ to vagrants once they had been forcibly returned to their places of origin (p. 22), and Nicholls makes a strong case that the language of ‘Abiding Places’ in the ’47 and indeed 1572 laws might well refer to the English equivalent of hôpital général places for former vagrants and not strictly to their commitment to houses of correction. The effective result of these sorts of injunctions was the accumulation of a robust stock of pauper housing in parishes across the kingdom, housing which remained reserved to the poor well into the eighteenth century, until attitudes towards personal subsistence and idleness hardened still further. Chapter two charts the surge in almshouse provision and endowment across the period and visualizes this provision brilliantly across several figures and maps (Figure 2.2, p. 45, graphing almshouse foundations by decade is particularly revealing). Nicholls concludes here that endowing an almshouse was often a response to generalised, national anxieties or prompts rather than just to local concerns, in effect demonstrating another way that the ‘integration’ thesis of Keith Wrightson was borne out by the bequests of local propertied elites.

The second set of chapters focuses on founders and inhabitants. Nicholls unpacks the manifold motivations of almshouse founders such as Rev. Nicholas Chamberlaine with dexterity, going well beyond the traditional ‘purchase of prayer’ model (p. 62). She disagrees with W. K. Jordan’s account of a secular shift in the rationales behind charitable giving, and outlines a suite of additional motives which prominently included local memorialisation, social status, and the buttressing of confessional Protestant identities. It is interesting that Nicholls explores ‘order and good governance’ (pp. 86-88) of the parish in subsequent chapters as a desired outcome of endowment, and broadly from the historical perspective of almshouse inhabitants, rather than in the same chapter as other founder motivations. In the section on inhabitants and the material benefits of alms places, Nicholls questions how ‘fastidious’ early modern almshouse foundations actually were with respect to inhabitants (p. 90). Some criteria, such as geographical proximity, were consistent across most almshouses; others, such as old age, gender, infirmity, or fraternal or confessional membership, were endowment specific. Nicholls also notes that the historiographical interest in ‘rules of behaviour’ for almshouses is out of proportion with the small number of houses which actually had rules at all (p. 126). She also debunks the contention that the material benefits of an alms place created a ‘pauper elite’ (p. 184) and demonstrates wide variation across hundreds of endowed places.

The final chapter brings together the rich records of county Warwickshire to produce a parish history of a ‘seventeenth-century Welfare Republic’ in Leamington Hastings (p. 188). Nicholls traces the origins of the Hastings house to Humphrey Davis and his will of 1607; the bequest subsequently fell into ‘legal limbo’ (p. 195) until its revival under Thomas Trevor, lord of the Hastings manor estate, in the 1630s. Nicholls situates the almshouse within the private charitable economy of Leamington Hastings, which also included the ‘Poors Plot’ charity subsidising access to land and schemes for parish stock and further cottage housing (p. 221). Nicholls concludes that we cannot view almshouses—however privately endowed and idiosyncratically managed—as hermetically sealed off from state welfare provision, as it was, after all, often the same people managing both. Almshouses in early modern England is a definitive monograph, cogently assembled and clearly written, with the histories of alms-people and charity at its heart. It is also filled with evidence of the care and nuance with which Nicholls approaches her subject, visible not least in the author’s photography, detailed online appendices and databases, and encyclopaedic knowledge of the associated archives. If you want to learn about the history of early modern charitable housing, you should read this book.

 

SAVE 25% when you order direct from the publisher using the offer code B125 online here. Offer ends 7th May 2019. Discount applies to print and eBook editions. Alternatively, call Boydell’s distributor, Wiley, on 01243 843 291, and quote the same code. For any queries, please email marketing@boydell.co.uk

 

Note: this post appeared as a book review article in the Review. We have obtained the necessary permissions.

The age of mass migration in Latin America

by Blanca Sánchez-Alonso (Universidad San Pablo-CEU, Madrid)

This article is published by The Economic History Review, and it is available on the EHS website.

 

General Carneiro station which belonged to Minas and Rio railway. Minas Gerais province, Brazil, c.1884. Available at Wikimedia Commons.

Latin America was considered a ‘land of opportunity’ between 1870 and 1930, a period during which 13 million Europeans migrated to the region. However, the experiences of Latin American countries are not fully incorporated into current debates concerning the age of mass migration.

The main objective of my article, ‘The age of mass migration in Latin America’, is to rethink the role of European migration to the region in the light of new research. It addresses several major questions suggested by the economic literature on migration: whether immigrants were positively selected from their sending countries, how immigrants assimilated into host economies, the role of immigration policies, and the long-run effects of European immigration on Latin America.

Immigrants overwhelmingly originated from the economically backward areas of southern Europe. Traditional interpretations have tended to extrapolate the economic backwardness of Italy, Spain, and Portugal (measured in per capita GDP relative to advanced European countries) to emigration flows. Yet, judging by literacy levels, migrants to Latin America from southern European countries were positively selected. Immigrants from Spain, Italy, and Portugal were drawn from the northern regions of those countries, which had higher levels of literacy; very few came from the southern regions. When immigrant literacy is compared with that of potential emigrants from regions of high emigration, positive selection appears quite clear.

One proxy often used to signal positive self-selection is upward mobility within and across generations. Recent empirical research shows that it was the possibility of rapid social upgrading that made Argentina attractive to immigrants. First-generation immigrants experienced faster occupational upgrading than natives; upward occupational mobility occurred for a large proportion of those who declared unskilled occupations on arrival. Immigrants to Argentina experienced very fast growth in occupational earnings (6 per cent faster than natives) between 1869 and 1895. For the city of Buenos Aires in 1895, new evidence shows that Italian and Spanish males received, on average, 80 per cent of average native-born earnings. In some categories, such as crafts and services, immigrants obtained higher wages than natives. These findings provide an economic rationale for why some Europeans chose Argentina over the US, despite the smaller wage differential between originating country and destination.

Immigrants appear to have adjusted successfully to Latin American labour markets, as evidenced by their access to property and widespread ownership of businesses. Almost all European communities experienced strong and rapid upward social mobility in the destination countries. Whether this was because of positive selection at home or because of the relatively low skill levels in the host societies is still an open question.

European immigrants to Latin America had higher levels of literacy than the native population. Despite non-selective immigration policies, Latin American countries received immigrants with higher levels of human capital than natives. Linking immigrants’ human capital to long-run economic and educational outcomes has been the focus of recent research on Brazil and Argentina. The impact of immigration in areas with higher shares of Europeans appears to have been important, since immigrants demanded and created schools (public or private). New research presents evidence of path dependency linking past immigrants’ human capital with present outcomes in economic development in the region.

Immigration policies in Latin America raised few barriers to European immigration. However, the political economy of immigration policy of Argentina shows a more complicated story than the classic representation of landowners constantly supporting an open-door policy.

Brazil developed a long-lasting programme of subsidized immigration. The expected income of immigrants to São Paulo was augmented by prospective savings, a guaranteed job on arrival, and the subsidized transportation cost. Going to Brazil was perceived as a good investment in southern Europe. Transport subsidies and the peculiarities of the colono contract in the coffee areas seem more important explanations than real wage differentials for understanding how Brazil competed for workers in the international labour market. The Lewis model merits further investigation for two main reasons. First, labour supply increased faster than the number of workers needed for the coffee expansion because of subsidies and, second, labour markets in São Paulo were segmented. European immigrants supplied only a fraction (though a substantial one) of the total labour force needed for the coffee plantations. The internal supply of workers became increasingly important and must be included in the total labour supply.

Recent literature shows that researchers are either identifying new quantitative evidence or exploiting existing data in new ways. Consequently, new research is providing answers and posing questions that show Latin America has much to add to debates on the economic and social impact of historical immigration.

 

To contact Blanca Sánchez-Alonso: blanca@ceu.es

The gender division of labour in early modern England: why study women’s work?

by Jane Whittle (University of Exeter) and Mark Hailwood (University of Bristol)

This article is published by The Economic History Review, and it is available on the EHS website.

 

Interior with an Old Woman at the Spinning Wheel. Available at Wikimedia Commons.

Here are ten reasons to know more about women’s work and read our article on ‘The gender division of labour in early modern England’. We have collected evidence about work tasks in order to quantify the differences between women’s and men’s work in the period from 1500-1700. This research allows us to dispel some common misconceptions.

 

  1. Men did most of the work didn’t they? This is unlikely: when both paid and unpaid work are counted, modern time-use studies show that women do the majority of work – 55% in rural areas of developing countries and 51% in modern industrial countries (UN Human Development Report 1995). There is no reason why the pattern would have been markedly different in preindustrial England.
  2. But we know about occupational structure in the past don’t we? Documents from the medieval period onwards describe men by their occupations, but women by their marital status. As a result we know quite a lot about male occupations but very little about women’s.
  3. But women worked in households headed by their father, husband or employer. Surely, if we know what these men did, then we know what women were doing too? Recent research undertaken by Amy Erickson, Alex Shepard and Jane Whittle shows that married women often had different occupations from their husbands. If we do not know what women did, we are missing an important part of the economy.
  4. But we have evidence of women working for wages. It shows that around 20% of agricultural workers were women, surely this demonstrates that women’s work wasn’t as important as men’s in the wider economy? This evidence only relates to labourers paid by the day, and before 1700 most agricultural labour was not carried out by day labourers, so this isn’t a very good measure. Our article shows that women carried out a third of agricultural work tasks, not 20%.
  5. But women mostly did domestic stuff – cooking, housework and childcare – didn’t they, and that type of work doesn’t change much across history? Women did do most cooking, housework and childcare, but our research suggests it did not take up the majority of their working time. These forms of work did change markedly over time. A third of early modern housework took place outside, and our data suggests the majority was done for other households, not as unpaid work for one’s own family.
  6. But women only worked in a narrow range occupations, didn’t they? Our research shows that women worked in all the major sectors of the economy, but often doing slightly different tasks from men. They undertook a third of work tasks in agriculture, around half of the work in everyday commerce and almost two thirds of work tasks in textile production. But women also did forms of work we might not expect, such as shearing sheep, dealing in second-hand iron, and droving cattle.
  7. Women’s work was all low skilled wasn’t it? Women very rarely benefitted from formal apprenticeship in the way that men did, but that does not mean the tasks they undertook were unskilled. Women undertook many tasks, such as making lace and providing medical care, which required a great deal of skill.
  8. But this was all in the past, what relevance does it have now? Many gendered patterns of work are remarkably persistent over time. Analysis by the Office of National Statistics states that one third of the gender pay gap in modern Britain can be explained by men and women working in different occupations, and by the lower rates of pay for part-time work, which is more commonly undertaken by women than men.
  9. So nothing ever changes …? Well, not necessarily. In fact, looking carefully at patterns of women’s work in the past shows some noticeable shifts over time. For instance, women worked as tailors and weavers in the medieval period and in the eighteenth century, but not in the sixteenth century.
  10. But we know why women work differently from men, particularly in preindustrial societies – isn’t it because they are less physically strong and all the child-bearing stuff? Physical strength does not explain why women did some physically taxing forms of work and not others (why they walked for miles carrying heavy loads on their heads rather than driving carts). And not all women were married or had children. Neither physical strength nor child-bearing can explain why women were excluded from tailoring between 1500 and 1650, but worked successfully and skilfully in this and other closely related crafts in other periods.

We now have data which allows us to look more carefully at these issues, but there is still much more to uncover.

 

To contact Jane Whittle: j.c.whittle@ex.ac.uk, Twitter: @jcwhittle1

To contact Mark Hailwood: m.hailwood@bristol.ac.uk, Twitter: @mark_hailwood

‘Stop-go’ policy and the restriction of post-war British house-building

by Peter Scott (Henley Business School, University of Reading) and James T. Walker (Henley Business School, University of Reading)

This article is published by The Economic History Review, and it is available on the EHS website.

 

A member of the Pioneer Corps assists a civilian building labourer in tiling a roof. Available at Wikimedia Commons.

Britain’s unusually high house price to income ratio plays an important role in reducing living standards and increasing “housing poverty”. This article shows that Britain’s housing shortage partly stems from deliberate long-term government policies aimed at restricting both public and private sector house-building. From the 1950s to the early 1980s, successive governments reduced housing starts as part of ‘stop-go’ macroeconomic policy, with major cumulative impacts.

This policy had its roots in the Second World War, when an influential coalition of Bank of England and Treasury officials pressed for a post-war policy of savage deflation, to restore sterling’s credibility and re-establish London as a major financial centre. John Maynard Keynes warned that prioritising international ‘obligations’ over the war-time commitment to build a fairer society would be repeating the 1920s gold standard error – though his direct influence ended with his untimely death. Deflationary policy proved politically impracticable in the short-term, as evidenced by Labour’s 1945 landslide election victory, though its supporters bided their time and were able to implement much of their agenda in the changed political climate of the 1950s.

The Conservatives’ 1951 election victory was based on a pledge to build 300,000 new homes per year. This was achieved in 1953 and building peaked at 340,000 completions in 1954. However, officials took advantage of the 1955-57 credit squeeze to press for severe cuts in housing investment. Municipal house-building was cut, while private house-building was depressed largely through restricting the growth of building society funds (by pressurising the building societies’ cartel to keep interest rates at such low levels that they were starved of mortgage funds). While the severity of policy varied over time, these restrictions were maintained almost continually until the early 1980s.

These restrictions were never formally announced and were hidden from Cabinet for much of this period. Meanwhile, given the political importance of housing, the Conservative government simultaneously proposed ever-larger housing targets (culminating in a 1964 election pledge to build 400,000 per annum). This created a perverse situation, whereby the government was spending substantial sums on highly publicised policies to increase demand for private housing (such as the 1959 House Purchase and Housing Act and the 1963 abolition of Schedule A income tax), while covertly reducing housing supply through restricting mortgage funding, limiting building firms’ access to credit, and reducing municipal housing investment. The following Labour government found itself drawn into a similarly restrictive housing policy, as part of its ill-fated commitment to avoid sterling devaluation (arguably based on misleading Treasury advice), while housing restrictions were also used as an instrument of macroeconomic stabilisation in the 1970s.

A 1974 Bank of England analysis found that this policy had created both an exaggerated housing cycle and a structural deficit (with house-building being held below market-clearing levels at all points in the cycle). This had in turn reduced the capacity of the housing market to respond to rising demand, by reducing builders’ land banks, building materials capacity, and building labour, which raised house prices while lowering productivity and technical progress. There is also evidence of “learning effects” by house-builders, who avoided expanding their activities during cyclical upturns, as they correctly perceived that tighter government restrictions might be imposed before their houses were ready to sell. These pressures fuelled house price inflation, both directly, and because housing became increasingly regarded as a hedge against inflation.

 

Figure 1: Capital formation in dwellings, as percentage of total capital formation, and housing completions per thousand families, private houses and all houses, 1924-38 and 1954-79

Housing Graph

 

British house-building during this era compared unfavourably to inter-war levels, as shown in Figure 1. Moreover, private house-building was even more depressed than total housing – as the Treasury found it easier to covertly restrict private housing than to reduce municipal building starts, where policy was more open to Cabinet and public scrutiny. British gross domestic fixed capital investment in housing was also very low relative to other European nations. Our time-series econometric analysis for 1955-1979 corroborates the ‘success’ of the restrictions and also shows the predicted asymmetric impact in ‘stop’ and ‘go’ phases of policy. This is an important finding – as stop-go policy is often examined in terms of the volatility of the variable under examination – based on the unrealistic assumption that industry would fail to realise that demand upturns might be rapidly terminated by the re-imposition of controls.

Housing restriction policy has persisting consequences. Additions to the housing stock were depressed for several decades, while the inflationary-hedge benefits of house-purchase became a self-fulfilling prophecy. Meanwhile restrictive planning policy (which was substantially intensified in the 1950s, as a further measure of housing restriction) has proved difficult to reverse. Average house-price to income ratios have thus continued the upward trend established in this era, currently excluding a substantial and growing proportion of the population from owner-occupation.

 

To contact Peter Scott: p.m.scott@henley.ac.uk

To contact James T. Walker: j.t.walker@henley.ac.uk

 

 

Unions and American Income Inequality at Mid-Century

by William J. Collins (Vanderbilt University) and Gregory T. Niemesh (Miami University)

This article is published by The Economic History Review, and it is available on the EHS website.

 

EHS Great Depression
Crowd of depositors gather in the rain outside the Bank of United States after its failure. Available at Wikimedia Commons

Rising income inequality in the United States has attracted scholars’ attention for decades, resulting in an extensive and detailed literature on the trend’s causes and consequences.  An equally large but much less studied decline in income inequality occurred in the US during the 1940s.  This led to an era of relatively compressed income inequality that lasted into the 1970s. Goldin and Margo (1992) called this ‘The Great Compression.’

Our recent research has explored the role of changing labour market institutions in contributing to the Great Compression, with a focus on the role of labour unions.  In the US, labour unions rose to prominence starting in the late 1930s, following the Wagner Act of 1935 and a Supreme Court decision in 1937 upholding the Act.  This recast the legal framework under which unions formed and collectively bargained by creating the National Labor Relations Board to oversee representation elections and enforce the Act’s provisions, including prohibitions of various ‘unfair practices’ which employers had used to discourage unions.  Unions continued to grow through the 1940s, especially during the Second World War, and they peaked as a share of employment in the early 1950s.

Time series graphs of union density and income inequality over the full twentieth century in the US are nearly mirror images of each other (Figure 1).  But it is difficult to evaluate the role of unions in influencing this period’s inequality due to limitations of standard data sources.  The US census, for instance, has never inquired about union membership, which makes it impossible to link individual-level wages to individual-level union status in nationally representative samples for this period (see Callaway and Collins 2018 and Farber et al. 2018 for efforts to develop data from other sources).  Research on US unions later in the twentieth century, when data are more plentiful, highlights their wage-compressing character, as does some of the historical literature on wage setting during the Second World War, but there is much left to learn.

 

Figure 1: Unions and income inequality trends in the 20th-century United States

Great Depression Table

Sources: See Collins and Niemesh (forthcoming).

 

In a paper titled ‘Unions and the Great Compression of wage inequality in the United States at mid-century: evidence from local labour markets,’ we provide a novel perspective on changes in inequality at the local level during the 1940s (Collins and Niemesh, forthcoming).  The building blocks for the empirical work are as follows: the ‘complete count’ census microdata for 1940 provide information on wages and industry of employment (Ruggles et al. 2015); Troy’s (1957) work on mid-century unionization provides information on changes in unionization at the industry level over the 1940s; and subsequent censuses provide sufficient information to form comparable local-level measures of wage inequality.  We use a combination of local employment data circa 1940 and changes in unionization by industry after 1939 to create a variable for local ‘exposure’ to changes in unionization.

We ask whether places with more exposure to unionization due to their pre-existing industrial structure experienced more compression of wages during the 1940s and beyond, conditional on many other features of the local economy, including wartime production contracts, and allowing for differences in regional trends. The answer is yes: a one standard-deviation increase in the exposure-to-unionization variable is associated with a 0.072 log point decline in inequality between the 90th and 10th wage percentiles in the 1940s (equivalent to 32 percent of the mean decline).  The association between local union exposure and wage compression is concentrated in the lower part of the wage distribution.  That is, the change in inequality between the 50th and 10th percentiles is more strongly associated with exposure to unionization than the change between the 90th and 50th percentiles.  As far as we can tell, this mid-century pattern was not driven by the re-sorting of workers (e.g., high-skilled workers sorting out of unionizing locations) or by firms exiting places that were highly exposed to unionization.
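As a quick consistency check on these magnitudes, the figures quoted above imply a mean local decline in the 90-10 log wage gap of about 0.225 log points (a sketch using only the 0.072 log-point effect and the 32 percent share from the text; nothing here draws on the underlying data):

```python
import math

# Back-of-the-envelope check on the magnitudes quoted in the text:
# a one-standard-deviation increase in exposure to unionization is
# associated with a 0.072 log-point fall in 90-10 wage inequality,
# stated to equal 32 percent of the mean decline in the 1940s.
effect_per_sd = 0.072   # log points (from the text)
share_of_mean = 0.32    # 32 percent of the mean decline (from the text)

implied_mean_decline = effect_per_sd / share_of_mean   # = 0.225 log points
ratio_narrowing = 1 - math.exp(-implied_mean_decline)  # ~20% fall in p90/p10

print(f"Implied mean 90-10 decline: {implied_mean_decline:.3f} log points")
print(f"Equivalent narrowing of the 90/10 wage ratio: {ratio_narrowing:.1%}")
```

A mean decline of 0.225 log points corresponds to roughly a one-fifth narrowing of the 90/10 wage ratio, which conveys the scale of the Great Compression at the local level.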

We also explore whether the impression unions likely made on local wage structures persisted, even as private sector unions declined through the last decades of the twentieth century. In fact, the pattern fades a bit with time, but it remains visible to the end of the twentieth century. We leave for future research important questions about the mechanisms of persistence in local wage structures, non-wage aspects of unionization (e.g., implications for benefits or safety), implications for firm behaviour in the long run, and international comparisons.

 

To contact William J. Collins: william.collins@Vanderbilt.Edu

To contact Gregory T. Niemesh: niemesgt@miamioh.edu

 

Notes

Callaway, B. and W.J. Collins. ‘Unions, workers, and wages at the peak of the American labor movement.’ Explorations in Economic History 68 (2018), pp. 95-118.

Collins, W.J. and G.T. Niemesh. ‘Unions and the Great Compression of wage inequality in the US at mid-century: evidence from local labour markets.’ Economic History Review (forthcoming). https://doi.org/10.1111/ehr.12744

Farber, H.S., Herbst, D., Kuziemko, I., and Naidu, S. ‘Unions and inequality over the twentieth century: new evidence from survey data.’ NBER Working Paper 24587 (Cambridge, MA, 2018).

Goldin, C. and R.A. Margo, ‘The Great Compression: the wage structure in the United States at midcentury.’ Quarterly Journal of Economics 107 (1992), pp. 1-34.

Ruggles, S., K. Genadek, R. Goeken, J. Grover, and M. Sobek. Integrated public use microdata series: version 6.0 [Machine-readable database]. (Minneapolis: University of Minnesota, 2015).

Troy, L., The distribution of union membership among the states, 1939 and 1953. (New York: National Bureau of Economic Research, 1957).

The returns to invention during the British industrial revolution

by Sean Bottomley (Max Planck Institute for European Legal History)

This article is published by The Economic History Review, and it is available on the EHS website

 

british industry
Ancoats, Manchester. McConnel & Company’s mills, about 1820. Available at Wikimedia Commons

Since the Victorian period, it has been commonly assumed that inventors were rarely remunerated for their inventions. To contemporaries they were ‘the miserable victim of [their] own powerful genius’, ‘Martyrs of Science’ who worked ‘alone, unfriended, solitary’, while ‘the recorded instances of the[ir] martyrdom would be a task of enormous magnitude’. Prominent examples of important inventors from the industrial revolution period who had the misfortune to die in penury (the steam engineer Richard Trevithick, for example) have meant that this view has passed into the modern literature almost without scrutiny.

This assumption, though, is significant, as it directly informs how we might explain probably ‘the’ big problem in economic history: what were the origins of the industrial revolution and, concomitantly, of modern economic growth? In particular, if inventors did usually fail to obtain financial rewards, this precludes potential explanations of the industrial revolution that invoke incentives to explain the actions of those who invented and commercialised the new technology industrialisation required. It also precludes the applicability of endogenous growth theory to the industrial revolution (theory which earned two of its progenitors the 2018 Nobel prize), as it assumes that profit incentives determine the amount of inventive activity that occurs.

In an attempt to determine the wealth of inventors, I have collected probate data for over 700 inventors born in Britain between 1660 and 1830, from a list first compiled by Ralf Meisenzahl and Joel Mokyr. This probate data indicates that inventors were in fact extremely wealthy. For instance, in one exercise, I compared the probated wealth of 422 inventors who died between 1800 and 1870 with that of the overall adult male population.

 

Table 1.           Probated wealth of inventors, 1800-1870

Probated wealth       Adult male population (1839-1841)   Adult male population (1858)   Inventors
<£200 or no will      73,302 (88.14%)                     87,043 (87.70%)                124 (29.4%)
<£1,000               5,570 (6.70%)                       6,690 (6.74%)                  39 (9.2%)
(subtotal, <£1,000)   94.84%                              94.44%                         163 (38.6%)
<£10,000              4,296 (5.16%)                       4,554 (4.59%)                  104 (24.6%)
<£50,000              –                                   812 (0.82%)                    95 (22.5%)
£50,000+              –                                   154 (0.16%)                    60 (14.2%)
(subtotal, £1,000+)   5.16%                               5.56%                          259 (61.4%)

Notes: For details on how the distribution of male probated wealth was estimated for 1839-41, and 1858, please refer to the appendix in the original article published in the Economic History Review.

 

The table above shows us that approximately 5 to 6 percent of adult males who died in 1839-41 and 1858 (years for which these figures can be collated) left behind wealth probated in excess of £1,000. The equivalent figure for inventors was over 60 percent. The disparity only increases as we move up through the wealth categories. Whereas only 0.16 percent of adult males left behind wealth probated in excess of £50,000 in 1858 (one in 650), for inventors it was 14.2 percent (one in 7).
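The ‘one in N’ comparisons can be recovered directly from the counts in Table 1 (a minimal arithmetic sketch; all counts are taken from the table above, and the exact quotient for the 1858 population comes out at roughly 1 in 645, so the ‘one in 650’ in the text is presumably a rounded figure):

```python
# Recompute the 'one in 650' vs 'one in 7' comparison from Table 1 (1858).
adult_males_1858 = {
    "<£200 or no will": 87_043,
    "<£1,000": 6_690,
    "<£10,000": 4_554,
    "<£50,000": 812,
    "£50,000+": 154,
}
inventors_over_50k = 60   # of the 422 inventors dying 1800-1870

total = sum(adult_males_1858.values())              # 99,253 adult males
share_pop = adult_males_1858["£50,000+"] / total    # ~0.16 percent
share_inv = inventors_over_50k / 422                # ~14.2 percent

print(f"Adult males probated over £50,000: 1 in {1 / share_pop:.0f}")  # ~1 in 645
print(f"Inventors probated over £50,000: 1 in {1 / share_inv:.0f}")    # ~1 in 7
```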

It does not, however, automatically follow that the wealth of inventors was actually derived from their inventions. These were presumably talented individuals and their income may have been accrued over the course of a ‘normal’ business career and/or inherited. Unfortunately, this is a prohibitively difficult subject to approach directly: accounts rarely survive for these inventors and in any case, it is doubtful whether income from an invention could be neatly distinguished from ‘normal’ business income. As an indirect approach, I have also collected probate information for the brothers of inventors. Brothers are an especially apposite group for comparison: they would have enjoyed a very similar inheritance to the inventors themselves (although inheriting financial capital appears to have mattered less than inheriting social capital) and they tended to enter similar occupations to their (inventive) brothers. Indeed, 24 of the inventors in the entire dataset were related as brothers – the talents and opportunities required to become an inventor were clearly not evenly distributed among the adult male population.

For 143 of the 422 inventors discussed in table 1, it was possible to confirm the existence of at least one brother who reached at least the age of 25 and who died in Britain between 1800 and 1870 (253 brothers in total). In the table below, the top row divides these 143 inventors into the same wealth categories as those used in the table above, with the number in parentheses denoting how many of the 143 inventors are in each category. The columns beneath this then show the distribution of the wealth of their brothers. So, there are 25 inventors in this exercise whose estate was worth less than £200. Of their 45 brothers, 31 also left behind less than £200. Three had probated wealth between £200 and £1,000, nine between £1,000 and £10,000 and two between £10,000 and £50,000. None left behind more than £50,000.

 

Table 2.           Brother’s Probates, 1800-1870

Brother’s probated wealth   Inventor’s probated wealth
                            <£200 (25)   <£1,000 (11)   <£10,000 (35)   <£50,000 (44)   £50,000+ (28)
<£200                       31           12             26              35              23
<£1,000                     3            3              7               7               2
<£10,000                    9            2              14              31              13
<£50,000                    2            2              3               9               8
£50,000+                    0            0              3               2               6

Notes: as Table 1
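The column the text walks through can be checked mechanically (a minimal sketch; the counts 31, 3, 9, 2 and the total of 45 brothers are all taken from the paragraph preceding Table 2):

```python
# Brothers of the 25 inventors probated under £200, grouped by the
# brothers' own probated wealth band (counts from the text above).
bands = ["<£200", "<£1,000", "<£10,000", "<£50,000", "£50,000+"]
brothers = [31, 3, 9, 2, 0]   # 'None left behind more than £50,000'

assert sum(brothers) == 45    # matches 'Of their 45 brothers ...'

share_under_200 = brothers[0] / sum(brothers)
print(f"Brothers also probated under £200: {share_under_200:.0%}")  # ~69%
```

So roughly two-thirds of the brothers of the poorest inventors were themselves in the lowest wealth band, consistent with the concentration described in the text.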

 

Overall, if inventors were wealthier than their brothers, then the latter should be concentrated at the top and to the right of the table, and away from the bottom left corner. Clearly, they are – overwhelmingly so when one considers how important simple happenstance can be in influencing an individual’s financial success over the course of their career.

Previous work has relied on impressionistic evidence to suggest that inventors in this period rarely obtained financial rewards commensurate with their technical achievements. Probate information, though, shows that inventors were extremely wealthy relative to the adult male population. Inventors were also significantly wealthier than another group who would have received a similar inheritance (in terms of both financial and social capital) and entered similar occupations: their brothers. Their additional wealth was derived from inventive activities: invention paid.

 

To contact Sean Bottomley: bottomley@rg.mpg.de

Bees in the Medieval Economy

by Alex Sapoznik (King’s College, London)

This article is published by The Economic History Review, and it is available on the EHS website

In his seventh-century Etymologies Isidore of Seville wrote ‘bees originate from oxen, just as hornets come from horses, drone bees from mules, and wasps from asses’, reflecting the belief that bees were the tiniest of birds, which sprang spontaneously from the putrefying flesh of cows. Such ideas were not new to the Middle Ages, and had been common from Antiquity, when Pliny the Elder commented that dead bees could be brought back to life if covered with mud and a bovine carcass.

Yet despite this peculiar (to modern eyes) belief, medieval people were in fact keen observers of the natural world. They knew that there was a larger bee which was especially important—although they thought this was a king, rather than a queen—which the other bees protected, even to the death. They knew that bees lived in well-ordered communities, where every bee had a particular task which it dutifully carried out. They especially emphasized worker bees, which went out tirelessly collecting dew, from which they thought honey came, and flowers, which they thought turned to wax. But they observed no mating in bee colonies, and the implications of this were profound. Medieval theologians associated the virginity and chastity of bees with the two figures whose virginity and chastity were central to the Christian faith: Christ and Mary. This religious symbolism had a singularly important practical consequence, for it meant that beeswax candles were required for observance of the ritual of the Mass.

Over the high and late middle ages Christian religious practice became increasingly elaborate, with a greater number of services celebrated at an expanding number of cathedrals, churches, chapels, chantries and shrines. All of these required wax candles. Candles also burned on the rood screens and before each image, shrine, and many tombs in every church in Europe. Every stage of a medieval Christian’s life, from the baptismal font to the grave, was accompanied by candles.

The imagery of light and dark, fundamental to Christian devotion, was reliant on the supply of vast quantities of beeswax for candles and torches. The cost of provisioning religious institutions with lights was significant. In England wax accounted for on average half of the total running cost of the main chapel of major religious institutions and, apart from the fabric and bells, was the most expensive single item in parish churches. The need for wax across medieval Europe was continuous and persistent, yet the extent and significance of the production, trade, and consumption of wax has yet to be fully considered.

 

Figure 1. Bees (apes) are so-called because they are born without feet. A medieval bestiary

ehs 1

By permission of the British Library: Bestiary: BL Royal MS 12 C XIX f45r

 

Where did this beeswax come from? Although demand for wax was high across Europe, production itself was unevenly spread. In northern and central Europe high medieval urbanization and settlement expansion came at the expense of favourable bee habitats. This meant that the areas with the greatest need for wax were under intense pressure to meet demand through local production. These regions were therefore especially attractive to merchants bringing wax from the Baltic hinterland, where large-scale sylvan wax production took place in forests which had not been felled to make room for arable fields. This high-quality wax became an important feature of Hanseatic trade, and a brisk westward trade brought this wax ‘de Polane’ to England and Bruges where eager buyers were readily found.

Yet even this thriving international trade was not enough to meet the demand for wax from the c.9,000 parish churches which existed in England by the early fourteenth century.  Comparing the total amount of wax needed for basic religious observance with wax imports suggests that foreign wax accounted for only a fifth of the amount of wax needed in England before 1475. The remaining wax must have been the product of hundreds of thousands of skeps kept by small domestic producers. This local beekeeping is almost invisible in manorial documents, and it is only by considering the total demand for wax that the importance of beekeeping within the peasant economy becomes apparent.

What emerges, then, is a dual economy for wax. Wealthy religious institutions attracted merchants bringing high-quality Baltic wax in great quantities, demonstrating that geographically peripheral areas were not only vital to European trade, but that the cultural practices of high and late medieval society were dependent on these regions. At the same time, small producers found ready markets for the product of their hives in their local parish churches, supplying much-needed injections of income within the household economy.

 

Figure 2 Bees in the Luttrell Psalter

picture 2

By permission of the British Library:  Luttrell Psalter: BL Add MS 42130 f240r

 

Bees and bee products held a uniquely important place in medieval culture, and consequently in the medieval economy. In these tiny golden creatures medieval people saw something flung from Paradise, imbued with mystical qualities and powerfully symbolic. Today, as we face climate change, habitat destruction and the decline of bee colonies, we might do well to look at the natural world with something of the same wonder.

This research is being expanded in the Leverhulme project ‘Bees in the medieval world: Economic, environmental and cultural perspectives’, which will also explore the Mediterranean trade in beeswax and consider encounters between the Christian and Muslim worlds.

To contact Alex Sapoznik: Alexandra.sapoznik@kcl.ac.uk

The Price of the Poor’s Words: Social Relations and the Economics of Deposing for One’s “Betters” in Early Modern England

by Hillary Taylor (Jesus College, Cambridge)

This article is published by The Economic History Review, and it is available on the EHS website

william_powell_frith_-_poverty_and_wealth
Poverty and Wealth. Available at Wikimedia Commons

Late sixteenth- and early seventeenth-century England was one of the most litigious societies on record. If much of this litigation was occasioned by debt disputes, a sizeable proportion involved gentlemen suing each other in an effort to secure claims to landed property. In this genre of suits, gentlemen not infrequently enlisted their social inferiors and subordinates to testify on their behalf.[1] These labouring witnesses were usually qualified to comment on the matter at hand as a result of their employment histories. When they deposed, they might recount their knowledge of the boundaries of some land, of a deed or the like. In the course of doing so, they might also comment on all sorts of quotidian affairs. Because testifying enabled illiterate and otherwise anonymous people to speak on-record about all sorts of issues, historians have rightly regarded depositions as a singularly valuable source: for all their limitations, they offer us access to worlds that would otherwise be lost.

But we don’t know much about what labouring people thought about the prospect of testifying for (and against) their superiors, or how they came to testify in the first place. Did they think that it presented an opportunity to assert themselves? Did it – as some contemporary legal commentators claimed – provide them with an opportunity to make a bit of money on the side by ‘selling’ dubious evidence to their litigious superiors?[2] Or were they reluctant to depose in such circumstances and, if so, why? Where subordinated individuals deposed for their ‘betters’, what was the relationship between the ‘pull’ of economic reward and the ‘push’ of extra-economic coercion?

I wrote an article that considers these questions. It doesn’t have any tables or graphs; the issues with which it’s concerned don’t readily lend themselves to quantification. Rather, this piece tries to think about how members of the labouring population conceived of the possibilities that were afforded to and the constraints that were imposed upon them by dint of their socio-economic position.

In order to reconstruct these areas of popular thought, I read loads of late sixteenth- and early seventeenth-century suits from the court of Star Chamber. In these cases, labouring witnesses who had deposed for one superior against another were subsequently sued for perjury (this was typically done in an effort to have a verdict that they had helped to secure overturned). Allegations against these witnesses got traction because it was widely assumed that people who worked for their livings were poor and, as a result, would lie under oath for anyone who would pay them for doing so. Where these suits advanced to the deposition-taking phase, labouring witnesses who were accused of swearing falsely under oath and witnesses of comparable social position provided accounts of their relationship with the litigious superiors in question, or commentaries on the perceived risks and benefits of giving evidence. They discussed the economic dispensations (or the promise thereof) which they had been given, or the coercion which had been used to extract their testimony.

Taken in aggregate, this evidence suggests that members of the labouring population had a keen sense of the politics of testimony. In a dynamic and exacting economy such as that of late sixteenth- and early seventeenth-century England, where labouring people’s material prospects were irrevocably linked to their reputation and ‘honesty,’ deposing could be risky. Members of the labouring population were aware of this, and many were hesitant to depose at all. Their reluctance may well have been born of an awareness that doubt was likely to be cast upon their testimony as a result of their subordinated and dependent social position, which lent credibility to accusations that they had sworn falsely for gain. More immediately, it reflected concerns about the material repercussions that they feared would follow from commenting on the affairs of their ‘betters.’ Such projections were not merely the stuff of paranoid speculation. In 1601, a carpenter from Buckinghamshire called Christopher Badger had put his mark to a statement defending a gentleman, Arthur Wright, who had frustrated efforts to impose a stinting arrangement on the common to, as many locals claimed, the ‘damadge of the poorer sorte and to the comoditie of the riche.’ Badger recalled that one of Wright’s opponents – also a gentleman – later approached him and said ‘You have had my worke and the woorke of divers’ other pro-stinting individuals. To discourage Badger from further involvement, he added a thinly veiled threat: ‘This might be an occasion that you maie have lesse worke then heretofore you have had.’[3] For members of the labouring population, material circumstance often militated against opening their mouths.

But there was an irony to the politics of testimony, which was not lost on common people. If material conditions made some prospective witnesses reluctant to depose, they all but compelled others to do so (even when they expressed reservations). In some instances, labouring people’s poverty made compelling the rewards they were promised (and less often given) in return for their testimony: a bit of coal, a cow, promises of work that was not dictated by the vagaries of seasonal employment, or nebulous offers of a life freed from want. In others, the dependency, subordination and obligation that characterized their relations with their superiors meant that they had to speak as required, or face the consequences. In the face of such pressures, a given individual’s reservations about testifying were all but irrelevant.

To contact Hillary Taylor: Hat27@cam.ac.uk

Notes

[1] For debt and debt-related litigation, see Craig Muldrew, The Economy of Obligation: The Culture of Credit and Social Relations in Early Modern England (Basingstoke, 1998).

[2] For suspicions surrounding the testimony of poor and/or labouring witnesses, see Alexandra Shepard, Accounting for Oneself: Worth, Status, and the Social Order in Early Modern England (Oxford, 2015).

[3] TNA, STAC 5/W17/32.