by Max Hao (Peking University), Kevin Liu (The Hong Kong University of Science and Technology), and Liu Zhengcheng (Peking University)
In premodern Europe, famine relief was inadequately provided until the late 19th century. In contrast, in late imperial China, preventing starvation helped legitimate the state and played a key role in reducing internal conflicts. The Qing state operated a network of granaries and developed sophisticated procedures to report famines and supply relief. However, as shown in Figure 1, the frequency of famine relief recorded in the Qingshilu (the Veritable Records of the Qing) was much lower than the frequency of disasters under the reigns of Shunzhi (1644-1661) and Kangxi (1661-1722). By contrast, under Yongzheng (1723-1735) and in the early years of Qianlong (1736-1760), famine relief became more responsive to disasters. Why did the Qing state take on the responsibility for “nourishing the people” only after 1723?
To solve this puzzle, we explore the effect of a reform that formalized and centralized part of the fiscal system in premodern China: the ‘huohao turned to public’ reform instituted in Emperor Yongzheng’s reign (1723-1735). ‘Huohao’ denotes all the informal surtaxes collected by county governments. Because the bulk of formal tax revenue was remitted directly to the central government, provincial and county governments retained only a small share of this income, with little discretion over its expenditure. Consequently, they imposed huohao or surtaxes to finance their own expenses. However, because these informal revenues were unsanctioned and unmonitored by the central government, they were largely pocketed by officials.
The ‘huohao turned to public’ reform was an endeavor to formalize huohao in order to achieve two interconnected policy goals: to reduce corruption and to enhance provincial fiscal capacity by centralizing control. First, huohao was collected at rates designated by the provincial governor and delivered to the provincial treasury. Second, 60% of the remitted funds were allocated to county magistrates and provincial governors as ‘anticorruption salaries’ to finance their regular expenses, reducing their incentive to collect informal surtaxes and seize public revenues. More importantly, 40% of the formalized huohao was assigned as public funds to finance irregular expenses at the governors’ discretion. The total annual revenue from the formalized huohao amounted to 4.5 million taels of silver, of which 1.8 million were reserved as public funds. In sum, by formalizing and centralizing informal surtaxes, the reform enhanced provincial fiscal capacity by giving provincial governors more resources and a greater incentive to spend them on public goods. The design of the reform is illustrated in Figure 2.
Because corruption is extremely difficult to explore empirically, we mainly focus on whether the second goal was achieved. The timing of the reform’s initiation and completion differed across provinces, but the reasons behind these variations were largely exogenous to provincial characteristics, enabling us to use them to test the effects of the reform. We test whether the huohao reform raised the frequency of famine relief in periods of disastrous weather by exploiting the different timing of the reform process across provinces. We restrict our dataset to 1710-1760, when the bulk of famine relief was financed by the provinces.
Using a prefecture-level panel dataset, we find that in times of extreme drought and floods, the frequency of famine relief increased after the reform by 1.05 times per prefecture, an effect exceeding 100% of the standard deviation of the dependent variable under non-disastrous weather. By exploring the dynamics of the reform’s impact, we find no pre-trend, which supports the exogeneity of the reform’s timing. Our results are robust to controlling for other initiatives by the central government, such as tax exemptions, the allocation of tribute grain and central fiscal revenues, the enforcement of bureaucratic monitoring, and other concurrent fiscal reforms. We also find that famine relief effectively reduced grain prices when disasters occurred, indicating that public funds were spent on famine relief which had a beneficial impact on the population. Further, we find that the reform’s impact was greater when areas faced exceptional flooding compared to exceptional droughts, and greater in prefectures that had difficulties collecting taxes, suggesting that the reform facilitated the intertemporal and spatial redistribution of financial resources.
For this tax reform to have a sustained effect on provincial government capacity, the central government would need to resist expropriating these new revenues for its own use. However, in premodern China, there were no institutional constraints on such dispossession. After Emperor Qianlong (1736-1796) succeeded to the throne, the central government began to make regular checks on the expenditure of provincial public funds, forced inter-provincial transfers of funds, and expended them on projects previously financed by central revenues. These actions forced provincial governments to reduce their expenditure on famine relief and withhold the anticorruption salaries meant for county administrators. This finding highlights that it was the lack of credible commitment that accounted for the short-lived success of this fiscal reform. Viewed from this perspective, the reform provides a valuable lesson about the role of political institutions in the Great Divergence.
My Tawney lecture reassessed the relationship between slavery and industrial capitalism in both Britain and the United States. The thesis expounded by Eric Williams held that slavery and the slave trade were vital for the expansion of British industry and commerce during the 18th century but were no longer needed by the 19th. My lecture confirmed both parts of the Williams thesis: the 18th-century Atlantic economy was dominated by sugar, which required slave labor; but after 1815, British manufactured goods found diverse new international markets that did not need captive colonial buyers, naval protection, or slavery. Long-distance trade became safer and cheaper, as freight rates fell, and international financial infrastructure developed. Figure 1 (below) shows that the slave economies absorbed the majority of British cotton goods during the 18th century, but lost their centrality during the 19th, supplanted by a diverse array of global destinations.
I argued that this formulation applies with equal force to the upstart economy across the Atlantic. The mainland North American colonies were intimately connected to the larger slave-based imperial economy. The northern colonies, holding relatively few slaves themselves, were nonetheless beneficiaries of the trading regime, protected against outsiders by British naval superiority. Between 1768 and 1772, the British West Indies were the largest single market for commodity exports from New England and the Middle Atlantic, dominating sales of wood products, fish and meat, and accounting for significant shares of whale products, grains and grain products. The prominence of slave-based commerce explains the arresting connections reported by C. S. Wilder, associating early American universities with slavery. Thus, part one of the Williams thesis also holds for 18th-century colonial America.
Insurgent scholars known as New Historians of Capitalism argue that slavery, specifically slave-grown cotton, was critical for the rise of the U.S. economy in the 19th century. In contrast, I argued that although industrial capitalism needed cheap cotton, cheap cotton did not need slavery. Unlike sugar, cotton required no large investments of fixed capital and could be cultivated efficiently at any scale, in locations that would have been settled by free farmers in the absence of slavery. Early mainland cotton growers deployed slave labour not because of its productivity or aptness for the new crop, but because they were already slave owners, searching for profitable alternatives to tobacco, indigo, and other declining crops. Slavery was, in effect, a ‘pre-existing condition’ for the 19th-century American South.
To be sure, U.S. cotton did indeed rise ‘on the backs of slaves’, and no cliometric counterfactual can gainsay this brute fact of history. But it is doubtful that this brutal system served the long-run interests of textile producers in Lancashire and New England, as many of them recognized at the time. As argued here, the slave South underperformed as a world cotton supplier, for three distinct though related reasons: in 1807 the region closed the African slave trade, yet failed to recruit free migrants, making labour supply inelastic; slave owners neglected transportation infrastructure, leaving large sections of potential cotton land on the margins of commercial agriculture; and because of the fixed-cost character of slavery, even large plantations aimed at self-sufficiency in foodstuffs, limiting the region’s overall degree of market specialization. The best evidence that slavery was not essential for cotton supply is what happened when slavery ended. After war and emancipation, merchants and railroads flooded into the southeast, enticing previously isolated farm areas into the cotton economy. Production in plantation areas gradually recovered, but the biggest source of new cotton came from white farmers in the Piedmont. When the dust settled in the 1880s, India, Egypt, and slave-using Brazil had retreated from world markets, and the price of cotton in Liverpool returned to its antebellum level. See Figure 2.
The New Historians of Capitalism also exaggerate the importance of the slave South for accelerated U.S. growth. The Cotton Staple Growth hypothesis advanced by Douglass North was decisively refuted by economic historians a generation ago. The South was not a major market for western foodstuffs and consumed only a small and declining share of northern manufactures. International and interregional financial connections were undeniably important, but thriving capital markets in northeastern cities clearly predated the rise of cotton, and connections to slavery were remote at best. Investments in western canals and railroads were in fact larger, accentuating the expansion of commerce along East-West lines.
It would be excessive to claim that Anglo-American industrial and financial interests recognized the growing dysfunction of the slave South, and in response fostered or encouraged the antislavery campaigns that culminated in the Civil War. A more appropriate conclusion is that because of profound changes in technologies and global economic structures, slavery — though still highly profitable to its practitioners — no longer seemed essential for the capitalist economies of the 19th-century world.
by Daniel Gallardo Albarrán (Wageningen University)
Lack of access to clean water and sanitation facilities is still common across the globe, and infectious, water-transmitted illnesses remain an important cause of death in the affected regions. Industrializing economies during the late 19th century similarly exhibited extraordinarily high death rates from waterborne diseases. However, unlike contemporary developing countries, they experienced a large decrease in mortality in subsequent decades that virtually eliminated deaths from waterborne diseases.
What explains this unprecedented improvement? The provision of safe drinking water is often considered a key factor. However, the prevalence of waterborne ailments transmitted through faecal-oral mechanisms is also determined by water contamination and/or the inadequate storage and disposal of human waste. Consequently, doubts remain about the efficacy of clean water per se in reducing mortality; this necessitates an integrative analysis considering both waterworks and sewerage systems.
My research adopts this approach by considering the case of Germany between 1877 and 1913, when both utilities were adopted nationally and crude death rates (CDR) and infant mortality rates (IMR) declined by almost 50 per cent. A quick glance at trends in mortality and the timing of sanitary infrastructures in Figure 1 suggests that improvements in water supply and sewage disposal are associated with better health outcomes. However, this evidence is only suggestive: Figure 1 presents the experience of only two cities and, importantly, factors outside public health investments — for example, better nutrition and improved infant care — may account for changes in mortality. To study the link between sanitary improvements and mortality more systematically, I examine two new datasets containing information on various measures of mortality at the city level (overall deaths, infant mortality and cause-specific deaths) and on the timing when municipalities began improving water supply and sewage disposal.
The first set of results shows that piped water reduced mortality, although its effects were limited given the absence of efficient systems of waste removal. Both sanitary interventions account for (at least) a fifth of the decrease in crude death rates between 1877 and 1913. Considering the fall in infant deaths instead, I find that sewers were equally important in providing effective protection against waterborne illnesses, since improvements in water supply and sewage disposal explain a quarter of the fall in infant mortality rates.
I interpret these findings causally because both interventions affected mortality immediately following their implementation, and not before. As Figure 2 shows, CDR and IMR decline immediately after the construction of both waterworks and sewerage, and mortality exhibits no statistically significant trends in the years preceding the sanitary interventions (the reference point for these comparisons is one year prior to their construction). Furthermore, using cause-specific deaths I find that sanitary infrastructures are strongly associated with declines in deaths from enteric-related illnesses, whereas deaths from a very different set of causes — homicides, suicides or accidents — are unaffected.
The second set of results relates to the heterogeneous effects of sanitary interventions along different dimensions. I find that their impact on mortality was less universal than hitherto thought, since their effectiveness largely depended on local characteristics such as income inequality or the availability of female employment.
In sum, my research shows that the mere provision of safe water is not sufficient to explain a significant fraction of the mortality decline in Germany at the turn of the 20th century. Investments in proper waste removal were needed to realize the full potential of piped water. Most importantly, the unequal mortality-reducing effect of sanitation calls for a deeper understanding of how local factors interact with public health policies. This is especially relevant today, as international initiatives, for example the Water, Sanitation and Hygiene programmes led by UNICEF, aim to promote universal access to sanitary services in markedly different local contexts.
by Jane Humphries (All Souls College, Oxford, and London School of Economics)
Sexual harassment was probably as common and as debilitating for past generations of women as for us in the world of #MeToo. The threat of sexual predation has long limited women and girls’ capabilities in the sense of what they could do or could be. It has constrained choice of jobs and security at work as well as threatened wellbeing more generally. Here, evidence of sexual harassment and the anxiety it created is extracted from working women’s life accounts and shown to have entrenched economic discrimination and gender subordination.
Mary Saxby’s peripatetic life was punctuated by a series of encounters ranging from harassment to rape. While her vagrancy left her particularly exposed to predation, she was clearly vulnerable even when in prison. Other women were similarly at risk when going about their legitimate business. Christian Watt reported that ‘[F]ishwives were often attacked both for money and carnal knowledge’ and armed herself with a gutting knife for self-defence. Both working and getting to work created anxieties: girls in the Hodgson family faced a long walk to the mill where they worked. ‘It was dark when we went and dark going home … we three girls didn’t like it, and Mother didn’t like us having to do it either’.
Ironically, given its status as a proper employment for women and girls, domestic service entailed particular vulnerability. Christian Watt related a common type of encounter: ‘One morning while giving a hand to make the beds … a Captain Leslie Melville put his arms around me and embraced me. I dug my claws into his face and with all the force I could I tore for all I was worth; his journey into flirtation land cost him the skin of his nose’. For less forceful characters it was better not to run into such dangerous situations.
As today, girls without parental protection were particularly vulnerable. Ellen Johnston, the ‘factory poetess’, who had an illegitimate child while in her teens, hints several times at abuse by her stepfather. Sally Marcroft was impregnated by the son of a weaver with whom she was boarded as an orphaned pauper. Lucy Luck, on graduating from the workhouse, was found a job where she was constantly preyed upon: ‘Well, we reached St Albans at last, and the place of service [the poor law officer] had found for me was a public house … The mistress was very good to me but the master was one of the worst who walked God’s earth. Always fighting with his wife … and he would beat that woman shamefully … But that was not the worst of him. That man … did all he could, time after time, to try and ruin me, a poor orphan only fifteen years old.’ Even more appalling, Emma Smith, a Cornish waif who grew up partly in the workhouse and partly in her maternal grandparents’ home, was given by her mother to a hurdy-gurdy man who abused her continuously for years: ‘This beast — old enough to be my grandfather — grabbed hold of me, a child of about six years of age, if I was that. He undid some of my clothing and behaved in a disgusting way’. Few suffered such horrendous, and in Emma’s case life-impacting, abuse but fear of assault was common and had significant effects on what girls were able to do and to be.
Workplaces where the sexes mixed were widely regarded as promoting immorality and prudent girls shunned such exposure. Similarly, agricultural fieldwork was judged damaging once girls reached puberty, whereupon it became more respectable to withdraw to indoor activities. Thus Jane Bowden was a boarded and then bound-out apprentice aged 9 and ‘…[A]t the beginning part of my time I was employed in out-door work … when I was about 16 I was kept entirely to the house, except at harvest time’. Service in public houses could also bring girls into bad company and threaten reputations. Hannah Cullwick obtained a place at the Lion Hotel but her father ‘thought it was not good for me at a public house and I was to give warning’. Remember, Lucy Luck was consigned to this disreputable work: ‘What did it matter? I was only a drunkard’s child. But if they had found me a good place for a start, things might have been better for me’.
As these cases make clear, the need for circumspection in the face of potential predation and threats to reputation made negotiating the world of work especially difficult. Not surprisingly, girls retreated into the ghetto of jobs where respectability was easier to retain and virtue to defend. Girls found it difficult to support themselves on the incomes they could earn and frequently remained partially dependent on fathers or the state, a foretaste of their situation as married women where the meta division of labour enforced women’s unpaid work in the home and men’s breadwinning. Dependent on men, women and girls lost self-esteem and lacked voice even within the household. A vicious circle eroding female capabilities was completed.
The ‘capabilities approach’ to wellbeing originated in the work of Amartya Sen; see ‘Gender and Cooperative Conflicts’, in Irene Tinker (ed.), Persistent inequalities: Women and world development (New York, Oxford University Press, 1990). For further discussion see B. Agarwal et al., Amartya Sen’s work and ideas: A gender perspective (London, Routledge, 2005).
 M. Saxby, Memoirs of a female vagrant written by herself (London, J. Burditt, 1806).
 C. Watt, The Christian Watt papers (Edinburgh, Birlinn, 1988), 36.
A. Hughes, née Hodgson, ‘Unpublished autobiography’, Brunel, 5.
 E. Johnston, Autobiography of Ellen Johnston, ‘The Factory Girl’, in Four nineteenth-century working-class autobiographies, edited by James R. Simmons Jr., and introduced by Janice Carlisle (Toronto, Broadview Press, 2007).
W. Marcroft, The Marcroft family (London, John Heywood, 1886), 21.
L. Luck, ‘A little of my life’, The London Mercury, 76 (1926), 354-373.
[E. Smith], A Cornish waif’s story (London, Odham’s Press, 1954), 31.
 J. Humphries, ‘Protective Legislation, the Capitalist State and Working-Class Men: The Case of the 1842 Mines Regulation Act’, Feminist Review, No. 7, Spring 1981, 1–35; J. Humphries, ‘“The Most Free from Objection…”, The Sexual Division of Labour and Women’s Work in Nineteenth Century England’, Journal of Economic History, Vol. XLVII, No. 4, December 1987, 929–950.
 J. Bowden, Employment of Women and Children in Agriculture, Parliamentary Papers, Vol. XII, 1843, 113.
 H. Cullwick, The diaries of Hannah Cullwick (London, Virago, 1984), 36.
Despite the political turmoil, the early 20th century witnessed fundamental economic and industrial transformations in China. Our research documents an important but neglected aspect of this development: China remained on the silver standard until 1936, while most other countries adhered to gold. Nonetheless, the Chinese silver regime defies easy classification because its silver basis was traditionally not in coinage but in the form of privately minted ingots called sycee, denominated in a unit of account called the tael. During our study period, sycee circulated alongside standardized silver coins such as Mexican and later Chinese silver dollars. In contrast to the large literature on the gold standard during the same era, we know relatively little about the operation of the silver exchange and monetary regime within China.
We present an in-depth analysis of China’s unique silver regime by offering a systematic econometric assessment of Chinese silver market integration between 1898 and 1933. As a result of this integration, the dollar-tael exchange rate, the yangli, became the most important indicator of the Chinese currency market. We compile a large data set culled from contemporary publications on the yangli across nineteen cities in Northern and Central China, and offer a threshold time series methodology for measuring silver integration comparable to that of gold points.
We find that the silver points between Shanghai and Tianjin, the two most important financial centers in Central and Northern China, declined steadily from the 1910s for the rest of the period (Figure 1). Our estimates of silver points from the daily rates of nineteen cities during the 1920s and 1930s also reveal that there was no substantial difference in the level of monetary integration between the Warlord Era of the 1920s and the Nanjing decade of the 1930s. Figure 2 provides a simple linear plot of the distance between Shanghai and the estimated silver points of those cities paired with Shanghai during the 1920s and 1930s. This Figure shows a positive relationship between silver points and the distance from Shanghai, indicating the rise of a monetary system centered on Shanghai.
Our silver point estimates are closely aligned with the actual costs of the silver trade derived from contemporary accounts. Moreover, the silver points help predict corresponding transaction volumes: the majority of large silver exports from Shanghai occurred when the yangli spread was above the silver export points; only limited flows occurred when it fell within the bounds of the silver points. The econometric results reveal that monetary integration between Shanghai and Tianjin improved in the 1910s—precisely during the Warlord Era of national disintegration and civil strife—and these improvements spread to other cities in Central and Northern China in the 1920s and 1930s.
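The arbitrage logic behind these silver points parallels the familiar gold-points condition. The following is a stylized sketch of that logic; the symbols and the band form are an illustration supplied here, not the paper’s actual estimation equation:

```latex
% Let $y_{i,t}$ and $y_{S,t}$ denote the log yangli (dollar--tael rate) in
% city $i$ and in Shanghai at time $t$, and let $c_i$ denote the proportional
% cost of shipping silver between them (freight, insurance, interest forgone).
% Shipping silver is profitable only when the spread exceeds that cost:
\[
  \lvert\, y_{i,t} - y_{S,t} \,\rvert > c_i
  \quad\Longrightarrow\quad \text{silver flows toward the dearer market.}
\]
% Inside the band $[-c_i,\,c_i]$ no arbitrage occurs and the spread can wander
% freely; the estimated band width is the ``silver point'', and a narrower
% band indicates tighter monetary integration with Shanghai.
```

On this reading, the finding that large silver exports from Shanghai occurred mainly when the yangli spread lay above the export point, and only limited flows occurred within the band, is exactly what the threshold model predicts.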
Our research provides a historical analysis of the causes of monetary integration, attributing a central role to China’s infrastructure and financial improvements during this period. One plausible driving force was the rise of new transport and information infrastructure, for example the completion of the Tianjin-Nanjing Railway and of the Shanghai-Nanjing and Shanghai-Hangzhou Railways, constructed between 1908 and 1916, which linked Northern and Southern China. Compared with road or water transport, railroads offered much faster, cheaper and safer delivery, an advantage far more significant for high-value silver shipments than for low-value, high-bulk commodities.
Another, more important factor was the monetary and financial transformation signalled by the rise of a modern banking system from the end of the 19th century. Although it was the government that issued national dollars, banking communities played a key role in defending their reputation and purity. Over time, the ‘countable’ dollar outperformed the ‘weighable’ sycee as a medium of exchange, gaining an increasing share in China’s monetary system. This eventually paved the way for the currency reform of 1933, which abolished the sycee and the tael, establishing the dollar as the sole standard. A notable monetary transformation was the increasing popularity of banknotes. Chinese banknote issuance was largely run on a model of free banking, with multiple public and private banks, Chinese or foreign, issuing silver-convertible banknotes based on a reputation mechanism. Thus the increasing note issue from the 1910s provided a much more elastic currency to smooth seasonality in the money markets and enhance financial integration.
A podcast of Sevket’s Tawney lecture can be found here.
New Map of Turkey in Europe, Divided into its Provinces, 1801. Available at Wikimedia Commons.
The Tawney lecture, based on my recent book, Uneven Centuries: Economic Development of Turkey since 1820 (Princeton University Press, 2018), examined the economic development of Turkey from a comparative global perspective. Using GDP per capita and other data, the book showed that Turkey’s record in economic growth and human development since 1820 has been close to the world average and a little above the average for developing countries. The early focus of the lecture was on the proximate causes — average rates of investment, below-average rates of schooling, low rates of total productivity growth, and the low technology content of production — which provide important insights into why improvements in GDP per capita were not higher. For more fundamental explanations I emphasized the role of institutions and institutional change. Since the nineteenth century Turkey’s formal economic institutions were influenced by international rules which did not always support economic development. Turkey’s elites also made extensive changes in formal political and economic institutions. However, these institutions provide only part of the story: the direction of institutional change also depended on the political order and the degree of understanding between different groups and their elites. When political institutions could not manage the recurring tensions and cleavages between the different elites, economic outcomes suffered.
There are a number of ways in which my study reflects some of the key trends in the historiography in recent decades. For example, until fairly recently, economic historians focused almost exclusively on the developed economies of western Europe, North America, and Japan. Lately, however, economic historians have been changing their focus to developing economies. Moreover, as part of this reorientation, considerable effort has been expended on constructing long-run economic series, especially GDP and GDP per capita, as well as series on health and education. In this context, I have constructed long-run series for the area within the present-day borders of Turkey. These series rely mostly on official estimates for the period after 1923 and make use of a variety of evidence for the Ottoman era, including wages, tax revenues and foreign trade series. In common with the series for other developing countries, many of my calculations involving Turkey are subject to larger margins of error than similar series for developed countries. Nonetheless, they provide insights into the developmental experience of Turkey and other developing countries that would not have been possible two or three decades ago. Finally, in recent years, economists and economic historians have made an important distinction between the proximate causes and the deeper determinants of economic development. While literature on the proximate causes of development focuses on investment, accumulation of inputs, technology, and productivity, discussions of the deeper causes consider the broader social, political, and institutional environment. Both sets of arguments are utilized in my book.
I argue that an interest-based explanation can address both the causes of long-run economic growth and its limits. Turkey’s formal economic institutions and economic policies underwent extensive change during the last two centuries. In each of the four historical periods I define, Turkey’s economic institutions and policies were influenced by international or global rules which were enforced either by the leading global powers or, more recently, by international agencies. Additionally, since the nineteenth century, elites in Turkey made extensive changes to formal political institutions. In response to European military and economic advances, the Ottoman elites adopted a programme of institutional changes that mirrored European developments; this programme continued during the twentieth century. Such fundamental changes helped foster significant increases in per capita income as well as major improvements in health and education.
But it is also necessary to examine how these new formal institutions interacted with the process of economic change – for example, changing social structure and variations in the distribution of power and expectations — to understand the scale and characteristics of growth that the new institutional configurations generated.
These interactions were complex. It is not easy to ascribe the outcomes created in Turkey during these two centuries to a single cause. Nonetheless, it is safe to state that in each of the four periods, the successful development of new institutions depended on the state making use of the different powers and capacities of the various elites. More generally, economic outcomes depended closely on the nature of the political order and the degree of understanding between different groups in society and the elites that led them. However, one of the more important characteristics of Turkey’s social structure has been the recurrence of tensions and cleavages between its elites. While they often appeared to be based on culture, these tensions overlapped with competing economic interests which were, in turn, shaped by the economic institutions and policies generated by the global economic system. When political institutions could not manage these tensions well, Turkey’s economic outcomes remained close to the world average.
By Paolo Malanima (University «Magna Graecia» in Catanzaro)
The full article from this blog post has now been published in The Economic History Review, and it is freely available on Early View until 15 January 2020 at this link
In 2016 the average human being consumed 57,000 kilocalories (kcal) per day; but in western Europe consumption exceeded 100,000 kcal, in the USA and Canada it reached 200,000, and in Africa only 22,000-23,000 (Figure 1). The remarkable increase in the capacity to do work, due to the rise in energy consumption, marked a discontinuity in the course of the world economy and was a decisive support of Modern Growth.
We see in Figure 2 the change in both aggregate (A) and per capita (B) terms. In 1820-1913 total consumption increased 3.5-4 times, and consumption per head 2-2.5 times. The growth in the availability of energy matched that of total and per capita GDP: in this first phase of growth, 1 percent more energy input resulted in 1 percent more GDP.
This increase would not have been possible without a change in the composition of the energy sources consumed (Figure 3). In 1800-20, when the main sources were still the traditional ones (food, firewood, and fodder), consumption per head on the world scale did not exceed 10,000 kcal. It became possible to exceed this amount only when coal at first, and later oil, natural gas, and hydroelectricity, began to be exploited on a large scale.
When traditional sources dominated, inequality in energy exploitation was very modest and depended primarily on differences in temperature. Western Europe, North America, and Oceania already led in energy consumption in 1820, but their population amounted to only 15 percent of the world total, compared with 65-70 percent for Asia and the Middle East. The introduction of non-renewable energy sources brought about rising inequality, which reached a peak on the eve of the First World War and only began to diminish after the Second World War (Fig. 4).
A consequence of this change was that in countries rich in coal relative to population (in Europe, North America, and Oceania), energy per worker was high, and productivity and real wages were therefore significantly higher than elsewhere. In these macro-areas there was an incentive to replace labour with mechanical devices, and the contribution of technology to this process was remarkable. Consider, for example, the evolution of steam engine technology between the eighteenth and nineteenth centuries, which indicates the impact that technical progress exercised on the ability to exploit mechanical power.
Natural capital is ordinarily excluded from models of economic growth, yet the role of natural resources was certainly decisive at the start of Modern Growth. For some thousands of years in the agricultural civilisations, cultures, sciences, institutions, and political systems changed without any substantial progress in the capacity to do work. Both real wages and incomes per capita, recently reconstructed by historians, trace flat lines until about 1820. Many concurrent factors of change (cultural, political, institutional) contributed to what is today called Modern Growth; yet, without the removal of the energy constraint, steady economic growth would have been unattainable.
By Alejandro Ayuso Díaz and Antonio Tena Junguito (Carlos III University of Madrid)
The history of international trade provides numerous examples of trade in the 'shadow of power' (Findlay and O'Rourke 2007). Here we argue that Japanese imperial power was as important as factor endowments, preferences, and technology to the expansion of trade during the interwar years. Following Garfinkel et al. (2012), the shadow of power that we discuss is based on the use or threat of violence or conquest, which depends on the military capabilities of states.
Japan was a latecomer to 20th-century industrialization, but during the interwar years, and especially in the 1930s, it pursued a complex and aggressive industrialization policy to accelerate the modernization of its industry. This policy combined import substitution with exports of manufactures to its region of influence. The newly created empire proved very efficient at developing a peculiar imperial trade in the shadow of power throughout East and Southeast Asia, in conjunction with a more aggressive imperial regional policy of conquest.
The trade generation capacity of the Japanese empire during the interwar years was much higher than that suggested by Mitchener and Weidenmier (2008) for the preceding period (1870-1913). Some caution is needed in making this comparison, however, because of issues associated with the interpretation of the relevant statistics. The trade-creating effect of Japanese empire membership was more than ten times that associated with the British, German, and French empires during this period, and twice as great as that for the US and Spanish empires. Consequently, it might be argued that our coefficients are more prominent because they capture the stronger intra-bloc bias that emerged after the Great Depression.
Employing a granular database of Japanese exports of 1,135 products to 117 countries at six benchmarks (1912, 1915, 1925, 1929, 1932 and 1938), we are able to demonstrate that the expansion of Japanese exports during the interwar period was facilitated by the exploitation of formal and informal imperial links, which exerted a bigger influence on export determination than productivity increases.
Figure 2: Japanese total manufacturing exports by skills and region. Source: Annual Returns of the Foreign Trade of the Empire of Japan.
The main characteristics of this trade expansion between 1932 and 1938 were high-skill exports directed towards the Japanese colonies. Additional evidence indicates that Japan did not enjoy comparative advantage in products with limited export-market potential. Colonial infrastructure, building, and urbanization served as exclusive markets for high-skill exports and became one of the main drivers of Japanese export expansion and its modern industrialization process.
Trade blocs in the interwar years were used as instruments of imperial power to foster exports and as a substitute for productivity in encouraging industrial production. In that sense, Japan's total exports in 1938 were between 28% and 47% higher than in 1912 thanks to imperial mechanisms. The figure is much higher when we capture the imperial effect on high-skill exports (between 66% and 76% higher thanks to imperial connections). These figures are based on a counterfactual comparing exports without the empire to those obtained via imperial mechanisms.
We believe that our results demonstrate that the colonial trade bias mechanism used by imperialist countries was inversely related to productivity. The implicit counterfactual hypothesis is that, without imperial intervention in the region, Japan would not have expanded its high-skill exports and would not have exported such a variety of new products. In other words, Japan's industrialisation process would have been much less pronounced.
Ayuso-Diaz, A. and Tena-Junguito, A. (2019): “Trade in the Shadow of Power: Japanese Industrial Exports in the Interwar years”. Economic History Review (forthcoming).
Findlay, R. and O’Rourke, K. (2007). Power and Plenty. Princeton, NJ: Princeton University Press.
Garfinkel, M, Skaperdas, S., and Syropoulos, C. (2012). ‘Trade in the Shadow of Power’. In Skaperdas, S., and Syropoulos, C. (eds.), Oxford Handbook on the Economics of Peace and Conflict. Oxford University Press.
Mitchener, K. J. and Weidenmier, M. (2008). 'Trade and empire'. Economic Journal, 118(533), pp. 1805-1834.
Ritschl, A. and Wolf, N. (2003). 'Endogeneity of Currency Areas and Trade Blocs: Evidence from the Inter-war Period'. CEPR Discussion Paper 4112.
by Leonardo Weller (São Paulo School of Economics – FGV)
Read the full article on The Economic History Review – published in August 2018, available here
Mexico borrowed £6 million abroad in 1913, amidst a civil war that destroyed the state and killed over two million people. Civil wars tend to make creditors wary because of their inevitable consequences: if the borrowing government wins, it will need to spend on reconstruction, leaving it short of cash to pay its debt back; in case of defeat, the incoming government is bound to repudiate its enemy's debt. The Mexican loan of 1913 is unusual because the bankers in charge knew that the government was likely to lose or, at best, to fight a long and bloody war. Paribas, the head of the syndicate that underwrote the loan, received first-hand reports from an agent in Mexico City, according to whom:
‘The political situation is (. . .) obscure because the country is still infested by rebellious bands while General Huerta is only president of the republic in a provisory character (…), and no one knows how that will end’.
Victoriano Huerta assassinated Francisco Madero, the leader of the revolution that deposed the long-serving autocrat Porfirio Díaz, who was in office at various times between the 1870s and 1911. The report from the agent in Mexico was accurate: Huerta aimed to re-establish Díaz's stable regime, but his counter-revolution fostered an unlikely but powerful alliance between popular insurgents such as Emiliano Zapata and moderate politicians linked to Madero. Pressured by the financial toll of the war, the Huerta administration defaulted on its entire debt in 1914, including the loan taken out in 1913. The insurgents took Mexico City and deposed the dictator, but the war continued and Mexico became a failed state. Peace only came in 1917, but the government in charge of reconstructing the country did not pay the sovereign debt.
Paribas and its fellow syndicate members underwrote the 1913 loan on rather poor borrowing conditions: 6 per cent interest, an issue price of 90 per cent of par, and 10 years' maturity, which resulted in a 4 per cent risk premium (a measure of credit cost), twice the premium applied to the Mexican debt already floating on the London Stock Exchange at the time. In plain language, the loan was remarkably expensive vis-à-vis market conditions. This discrepancy appears in the graph below: the solid line is the premium at which the secondary market traded old Mexican bonds, and the dot is the premium at which the banks issued the new loan in 1913.
Figure 1. Mexican risk and reports on Mexico in The Times.
The bankers themselves considered the loan 'too severe a burden on the Government', but agreed that the operation had to be arranged 'in our favour' because of the 'terrible political circumstances' in the country. In line with this dire assessment of Mexican political conditions, Paribas sold its entire share of the 1913 loan. In effect, Paribas acted solely as an underwriter floating the bonds on the international market, rather than as both underwriter and final creditor (holding a share of the bonds). The bank also liquidated all the other Mexican assets it held, including a significant share of the Banco Nacional de México. A conflict of interest explains why Paribas underwrote the 1913 loan: the new credit created confidence among the public and sustained the price of all Mexican securities, which enabled Paribas to eliminate its exposure to Mexico without realising losses. The liquidation was profitable overall: in particular, the bank sold the 1913 bonds at a 6 per cent margin.
Undoubtedly, Paribas' gains were at the expense of others. Why, then, did bondholders agree to purchase the debt? Figure 1 suggests (and econometric tests confirm) that positive press reports influenced the market. The Times published negative news on Mexico when the revolutionaries deposed Díaz and the counter-revolutionaries assassinated Madero, in 1911 and 1913 respectively, but it subsequently altered its editorial stance, publishing good news and otherwise remaining quiet on the country. Meanwhile, bondholders continued buying Mexican bonds in spite of the civil war and, as a result, Mexican risk stayed below 2 per cent, a relatively low rate.
The public read over-optimistic news, while the bankers had access to pessimistic but accurate reports from their agents in Mexico. Thus, Paribas benefited from asymmetric information, which explains why it could profit at the expense of the final creditors.
This case study is at odds with the most recent historical literature on sovereign debt, which stresses the role of debt underwriters as gatekeepers responsible for guiding the market. The literature asserts that bankers produced signals that separated trustworthy borrowers from the rest. In contrast, Paribas exploited the market's disinformation to profit from the liquidation of its Mexican businesses.
In early 1998 a nervous options trader was asked to fill an order from one of the world's largest global investors. The fund's manager, believing that Canadian short-term rates would fall in the very near future, wanted to buy the right, but not the obligation, to buy two-year bonds over the next two weeks at a 'strike' price of 103 per cent of par, when they were actually trading at 102. If rates fell enough by the option's expiry in two weeks, the bond would trade above 103, and the hedge fund would pocket the difference between the actual price and the strike of 103.
The average volume in options on these bonds for the week was probably a few hundred million dollars of par value, but this client was looking for options on $2 billion. It would be impossible for the trader to find the exact matching trade in the market from another client, and he would have to ‘manufacture’ the options himself. How, then, to calculate the price?
The framework for this analysis was largely developed by Fischer Black, Myron Scholes and Robert Merton in the early 1970s (Black and Scholes 1972, Merton 1973). But one crucial input into the so-called Black-Scholes-Merton (BSM) model was difficult to estimate: the expected future volatility (as measured by the standard deviation of returns) of the underlying bond over the next two weeks. The trader looked first to the recent past: what had two-week volatility been over the past few months? The trader also knew that unemployment numbers were due out in three days and that some uncertainty always surrounds such a release. In the end, the trader used the BSM model and a volatility input slightly higher than the past two weeks' observations to account for the uncertainty in unemployment and the large size of the trade. Upon execution, the trader then used the BSM model to calculate the amount of the underlying bond to trade as a hedge against some of the risks of the original position. That trader was one of the co-authors of an article, 'Commodities option pricing efficiency before Black, Scholes and Merton', recently published in the Review. In this study, the authors David Chambers and Rasheed Saleuddin examine a commodity futures options market in the interwar period to determine how traders might have made markets in options before the advent of modern models.
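The pricing step described above can be sketched in a few lines of Python. This is a minimal illustration of the BSM call-price formula and its delta; the 5% rate and 6% volatility below are invented for the example, not figures the trader actually used.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price and delta of a European call.

    S: spot price, K: strike, T: time to expiry in years,
    r: continuously compounded interest rate, sigma: annualized volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    return price, norm_cdf(d1)  # delta = N(d1), the hedge ratio

# Numbers loosely echoing the anecdote: spot 102, strike 103, two weeks
# to expiry; the rate and volatility are purely illustrative.
price, delta = bsm_call(S=102.0, K=103.0, T=14 / 365, r=0.05, sigma=0.06)
```

The delta returned alongside the price is the fraction of the underlying bond that offsets small moves in its price, which is the quantity the trader needed for the hedging step.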
It is often thought that market prices conform to newly-implemented models rather than obeying some natural laws of markets before such laws are revealed to observers. It has been suggested that equity options, specifically, were ‘performative’ in that they converged to BSM-efficient levels shortly after the dissemination of this model in the early 1970s (MacKenzie and Millo 2003). On the other hand, some claim that it was the advent of liquid exchange trading around the same time that led to BSM efficiency (Kairys and Valerio 1997).
Evidence of efficient pricing before the 1970s is sparse and mixed. There are very few data sets with which to test efficiency, and the few that have been used are far from ideal. Two papers (Kairys and Valerio 1997, Mixon 2009) use one-sided indicative advertised levels targeted at retail investors, without any indication that these were prices at which investors actually traded. Another paper relies primarily on warrant data, yet the prices of warrants, even in modern times, are often far from BSM-efficient, for well-understood reasons (Veld 2003). In any event, these studies find that, on average, prices were far from BSM-efficient levels. There is little attempt in this early literature to determine whether prices depended on the most important BSM model parameter: observed volatility.
This study uses a new data set: the prices at which the economist John Maynard Keynes traded options on tin and copper futures on the interwar London Metal Exchange. It turns out that Keynes traded at levels that were, on average, as efficient as those in modern markets. Additionally, the traded prices appear to have varied systematically with the key input to the model, observed volatility (Figure 1), with 99% significance and a very high R².
How was it possible that Keynes' traders and brokers were able to match BSM-efficient prices so closely? There is some suggestion that options traders in the 19th and early 20th centuries understood options theory well. Indeed, Anne Murphy (2009) has identified a perhaps surprising degree of sophistication and activity among options traders in 17th-century London. Certainly, by the turn of the previous century, options traders had a strong grasp of many of the fundamentals of options trading and pricing (Higgins 1907). Yet understanding of the influence of the volatility of the underlying asset was still in its infancy, and several contemporary breakthroughs in theory were not widely disseminated. Finance scholarship hints at one possible explanation: for options such as those traded by Keynes, the relationship between the key BSM valuation parameter, volatility, and the option price is quite straightforward to estimate (Brenner and Subrahmanyam 1988). It may have been the case that market participants were intuitively taking BSM into account without an understanding of the model itself. This conclusion is, of course, pure speculation, but perhaps therein lies its fascination?
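The Brenner and Subrahmanyam (1988) point can be made concrete: for an at-the-money option, the BSM formula collapses to roughly C ≈ 0.4·S·σ·√T, a rule simple enough to apply by intuition or back-of-envelope arithmetic. The following Python sketch, with a purely hypothetical spot, maturity, and volatility (and the interest rate set to zero for simplicity), compares the approximation against the full model:

```python
from math import erf, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, T, sigma):
    """BSM call price with the interest rate set to zero for simplicity."""
    d1 = (log(S / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * norm_cdf(d2)

def atm_approx(S, T, sigma):
    """Brenner-Subrahmanyam at-the-money approximation: C ≈ 0.4·S·σ·√T."""
    return 0.3989 * S * sigma * sqrt(T)  # 0.3989 ≈ 1/sqrt(2·pi)

# Hypothetical at-the-money option: spot 100, three months, 25% volatility.
exact = bsm_call(S=100.0, K=100.0, T=0.25, sigma=0.25)
approx = atm_approx(S=100.0, T=0.25, sigma=0.25)
```

With these inputs the two values agree to within about a cent, which suggests how a trader with no formal model could nonetheless quote near-BSM prices for at-the-money options.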