A Silver Transformation: Chinese Monetary Integration in Times of Political Disintegration during 1898–1933

by Debin Ma (London School of Economics and Hitotsubashi University) and Liuyan Zhao (Peking University)

The full article for this blog post will be published in The Economic History Review

 

Two 19th Century Chinese Cash Coins. Available at <https://www.coincommunity.com/forum/topic.asp?TOPIC_ID=70505>

Despite the political turmoil, the early twentieth century witnessed fundamental economic and industrial transformations in China. Our research documents one of the most important but neglected aspects of this development: China remained on the silver standard until 1936, while most major economies were on gold. The Chinese silver regime nonetheless defies easy classification, because its silver basis was traditionally not coinage but privately minted ingots called sycee, denoted by a unit of account called the tael. During our study period, sycee circulated alongside standardized silver coins such as Mexican and, later, Chinese silver dollars. In contrast to the large literature on the gold standard during the same era, we know relatively little about the operation of the silver exchange and monetary regime within China.

We present an in-depth analysis of China's unique silver regime, offering a systematic econometric assessment of Chinese silver-market integration between 1898 and 1933. As a result of this integration, the dollar-tael exchange rate, the yangli, became the most important indicator of the Chinese currency market. We compile a large dataset culled from contemporary publications on the yangli across nineteen cities in Northern and Central China, and develop a threshold time-series methodology for measuring silver integration comparable to that used for gold points.
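
For readers curious about the mechanics: the full model is set out in the Review article, but the core idea of a threshold (or band) model can be sketched in a few lines. Inside the silver points, arbitrage is unprofitable and the yangli spread can wander; outside them, silver shipments pull it back towards the band. A minimal grid-search sketch, with a toy series standing in for the data:

```python
import numpy as np

def estimate_silver_point(spread, candidates):
    """Grid-search a symmetric threshold c. Inside the band (|spread| <= c)
    arbitrage is unprofitable, so the spread is left to follow a random walk;
    outside it, shipments push the spread back towards the band edge.
    Returns the candidate threshold minimising the sum of squared residuals.
    Illustrative only: the published model is richer than this sketch."""
    d_spread = np.diff(spread)
    lag = spread[:-1]
    best_c, best_ssr = None, np.inf
    for c in candidates:
        outside = np.abs(lag) > c
        # signed distance beyond the band edge (zero inside the band)
        excess = np.where(outside, lag - np.sign(lag) * c, 0.0)
        denom = (excess ** 2).sum()
        rho = (excess * d_spread).sum() / denom if denom > 0 else 0.0
        ssr = ((d_spread - rho * excess) ** 2).sum()
        if ssr < best_ssr:
            best_c, best_ssr = c, ssr
    return best_c

# toy stand-in for a daily Shanghai-Tianjin yangli spread (% from parity)
rng = np.random.default_rng(0)
toy_spread = np.clip(np.cumsum(rng.normal(0.0, 0.1, 500)), -1.5, 1.5)
print(estimate_silver_point(toy_spread, np.linspace(0.1, 1.4, 27)))
```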

Figure 1. Silver point estimates between Shanghai and Tianjin in 10-year moving windows, Jan. 1898–March 1933. Source: Ma and Zhao, Economic History Review (2019).

We find that the silver points between Shanghai and Tianjin, the two most important financial centers in Central and Northern China, declined steadily from the 1910s onwards (Figure 1). Our estimates of silver points from the daily rates of nineteen cities during the 1920s and 1930s also reveal no substantial difference in the level of monetary integration between the Warlord Era of the 1920s and the Nanjing decade of the 1930s. Figure 2 plots the estimated silver points of cities paired with Shanghai against their distance from Shanghai during the 1920s and 1930s. The positive relationship between silver points and distance from Shanghai indicates the rise of a monetary system centered on Shanghai.

Our silver point estimates are closely aligned with the actual costs of the silver trade derived from contemporary accounts. Moreover, the silver points help predict corresponding transaction volumes: the majority of large silver exports from Shanghai occurred when the yangli spread was above the silver export points; only limited flows occurred when it fell within the bounds of the silver points. The econometric results reveal that monetary integration between Shanghai and Tianjin improved in the 1910s, precisely during the Warlord Era of national disintegration and civil strife, and these improvements spread to other cities in Central and Northern China in the 1920s and 1930s.

Figure 2. Silver points and distance. Source: Ma and Zhao, Economic History Review (2019).

Our research provides a historical analysis of the causes of monetary integration, attributing a central role to China's infrastructure and financial improvements during this period. One plausible driving force was the rise of new transport and information infrastructure: for example, the completion of the Tianjin-Nanjing Railway and of the Shanghai-Nanjing and Shanghai-Hangzhou Railways, constructed between 1908 and 1916, which linked Northern and Southern China. Compared with road or water transport, railroads offered much faster, cheaper and safer delivery, an advantage far more significant for high-value silver shipments than for low-value, high-bulk commodities.

Another, more important factor was the monetary and financial transformation signalled by the rise of a modern banking system from the end of the 19th century. Although it was the government that issued the national dollars, banking communities played a key role in defending their reputation and purity. Over time, the 'countable' dollar outperformed the 'weighable' sycee as a medium of exchange, gaining an increasing share in China's monetary system. This eventually paved the way for the currency reform of 1933, which abolished the sycee and the tael and established the dollar as the sole standard. A notable monetary transformation was the increasing popularity of banknotes. Chinese banknote issuance was largely run on a model of free banking, with multiple public and private banks, Chinese and foreign, issuing silver-convertible banknotes on the strength of their reputations. The increasing note issue from the 1910s thus provided a much more elastic currency to smooth seasonality in the money markets and enhance financial integration.

 

To contact the authors:

Debin Ma (D.Ma1@lse.ac.uk)

Liuyan Zhao (zhly@pku.edu.cn)

Turkey’s Experience with Economic Development since 1820

by Sevket Pamuk, University of Bogazici (Bosphorus) 

This research is part of a broader article published in the Economic History Review.

A podcast of Sevket’s Tawney lecture can be found here.

 


New Map of Turkey in Europe, Divided into its Provinces, 1801. Available at Wikimedia Commons.

The Tawney lecture, based on my recent book, Uneven Centuries: Economic Development of Turkey since 1820 (Princeton University Press, 2018), examined the economic development of Turkey from a comparative global perspective. Using GDP per capita and other data, the book showed that Turkey's record in economic growth and human development since 1820 has been close to the world average and a little above the average for developing countries. The early focus of the lecture was on the proximate causes (average rates of investment, below-average rates of schooling, low rates of total productivity growth, and the low technology content of production), which provide important insights into why improvements in GDP per capita were not higher. For more fundamental explanations I emphasized the role of institutions and institutional change. Since the nineteenth century, Turkey's formal economic institutions have been influenced by international rules which did not always support economic development. Turkey's elites also made extensive changes to formal political and economic institutions. However, these institutions provide only part of the story: the direction of institutional change also depended on the political order and the degree of understanding between different groups and their elites. When political institutions could not manage the recurring tensions and cleavages between the different elites, economic outcomes suffered.

There are a number of ways in which my study reflects some of the key trends in the historiography in recent decades.  For example, until fairly recently, economic historians focused almost exclusively on the developed economies of western Europe, North America, and Japan. Lately, however, economic historians have been changing their focus to developing economies. Moreover, as part of this reorientation, considerable effort has been expended on constructing long-run economic series, especially GDP and GDP per capita, as well as series on health and education.  In this context, I have constructed long-run series for the area within the present-day borders of Turkey. These series rely mostly on official estimates for the period after 1923 and make use of a variety of evidence for the Ottoman era, including wages, tax revenues and foreign trade series. In common with the series for other developing countries, many of my calculations involving Turkey  are subject to larger margins of error than similar series for developed countries. Nonetheless, they provide insights into the developmental experience of Turkey and other developing countries that would not have been possible two or three decades ago. Finally, in recent years, economists and economic historians have made an important distinction between the proximate causes and the deeper determinants of economic development. While literature on the proximate causes of development focuses on investment, accumulation of inputs, technology, and productivity, discussions of the deeper causes consider the broader social, political, and institutional environment. Both sets of arguments are utilized in my book.

I argue that an interest-based explanation can address both the causes of long-run economic growth and its limits. Turkey’s formal economic institutions and economic policies underwent extensive change during the last two centuries. In each of the four historical periods I define, Turkey’s economic institutions and policies were influenced by international or global rules which were enforced either by the leading global powers or, more recently, by international agencies. Additionally, since the nineteenth century, elites in Turkey made extensive changes to formal political institutions.  In response to European military and economic advances, the Ottoman elites adopted a programme of institutional changes that mirrored European developments; this programme  continued during the twentieth century. Such fundamental  changes helped foster significant increases in per capita income as well as  major improvements in health and education.

But it is also necessary to examine how these new formal institutions interacted with the process of economic change (for example, changing social structure and variations in the distribution of power and expectations) to understand the scale and characteristics of the growth that the new institutional configurations generated.

These interactions were complex. It is not easy to ascribe the outcomes created in Turkey during these two centuries to a single cause. Nonetheless, it is safe to state that in each of the four periods, the successful development of  new institutions depended on the state making use of the different powers and capacities of the various elites. More generally, economic outcomes depended closely on the nature of the political order and the degree of understanding between different groups in society and the elites that led them. However, one of the more important characteristics of Turkey’s social structure has been the recurrence of tensions and cleavages between its elites. While they often appeared to be based on culture, these tensions overlapped with competing economic interests which were, in turn, shaped by the economic institutions and policies generated by the global economic system. When political institutions could not manage these tensions well, Turkey’s economic outcomes remained close to the world average.

A World Energy Revolution: the first phase 1820-1913

By Paolo Malanima (University «Magna Graecia» in Catanzaro)

The full article from this blog post is scheduled to be published in the Economic History Review

 

Thomas Hair, “Percy Main Colliery”, Newcastle University Library. Available on Archives Portal Europe

 

In 2016 the average human being consumed 57,000 kilocalories (kcal) per day; but in western Europe daily consumption exceeded 100,000 kcal, in the USA and Canada it approached 200,000, and in Africa it was only 22,000-23,000 (Figure 1). The remarkable increase in the capacity to do work, due to the rise in energy consumption, marked a discontinuity in the course of the world economy and was a decisive support of Modern Growth.

Figure 1. Per capita energy consumption per macroarea in 1820 and 2016 (kcal). Note: WE is Western Europe, EE is Eastern Europe, NA is North America, LA is Latin America, O is Oceania, As is Asia, ME is Middle East, and Af is Africa.
Source: please refer to the full article as published on the Economic History Review 

 

Figure 2 shows the change in both aggregate (A) and per capita (B) terms. Between 1820 and 1913 total consumption increased 3.5-4 times, and consumption per head 2-2.5 times. The growth in the availability of energy matched that of total and per capita GDP: in this first phase of growth, 1 percent more energy input resulted in 1 percent more GDP.
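
Expressed as compound annual rates (my own back-of-the-envelope arithmetic, not figures from the article), these multiples over the 93 years from 1820 to 1913 imply:

```latex
g_{\text{total}} = \left(\tfrac{E_{1913}}{E_{1820}}\right)^{1/93} - 1
  \approx 3.5^{1/93} - 1 \text{ to } 4^{1/93} - 1 \approx 1.4\text{--}1.5\% \text{ per year},
\qquad
g_{\text{per head}} \approx 2^{1/93} - 1 \text{ to } 2.5^{1/93} - 1 \approx 0.75\text{--}1.0\% \text{ per year},
```

magnitudes of the same order as standard estimates of world GDP growth in this period, which is what the unit elasticity of GDP with respect to energy implies.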

Figure 2. World energy consumption (in A in Millions of Tons of Oil Equivalent) and per capita consumption (in B in kcal/day) 1820-2016. Source: please refer to the full article as published on the Economic History Review 

 

This increase would not have been possible without a change in the composition of the energy sources consumed (Fig. 3). In 1800-20, when the main sources were still the traditional ones (food, firewood and fodder), consumption per head on the world scale did not exceed 10,000 kcal. It became possible to exceed this amount only when coal at first, and later oil, natural gas and hydroelectricity, began to be exploited on a large scale.

Figure 3. Level and structure of per capita energy consumption in 1820 and 1910 (kcal/day). Source: please refer to the full article as published on the Economic History Review

 

When traditional sources dominated, inequality in energy exploitation was very modest and depended primarily on differences in temperature. Western Europe, North America and Oceania already led in energy consumption in 1820, but their population amounted to only 15 percent of the world total, compared with the 65-70 percent living in Asia and the Middle East. The introduction of non-renewable energy sources brought about rising inequality, which reached a peak on the eve of the First World War and only began to diminish after the Second World War (Fig. 4).

Figure 4. Inequality among countries in energy consumption 1820-2016 (Theil index). Source: please refer to the full article as published on the Economic History Review
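
For readers unfamiliar with the measure used in Figure 4: one common, population-weighted form of the Theil index across countries (the article's exact variant may differ) is

```latex
T \;=\; \sum_{i} p_i \, \frac{y_i}{\bar{y}} \, \ln\frac{y_i}{\bar{y}},
\qquad \bar{y} \;=\; \sum_i p_i\, y_i ,
```

where $p_i$ is country $i$'s population share and $y_i$ its per capita energy consumption. $T$ equals zero under perfect equality and rises as consumption concentrates in a few countries.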

 

A consequence of all this change was that in countries rich in coal relative to their population (in Europe, North America and Oceania), energy per worker was abundant, and productivity and real wages were significantly higher than elsewhere. In those macroareas an incentive existed to replace labour with mechanical devices. The contribution of technology to this process was remarkable: consider, for example, the evolution of steam engine technology between the eighteenth and nineteenth centuries, which indicates the impact that technical progress exercised on the ability to exploit mechanical power.

Natural capital is ordinarily excluded from models of economic growth, yet the role of natural resources was certainly decisive at the start of Modern Growth. For some thousands of years in the agricultural civilisations, cultures, sciences, institutions and political systems changed without any substantial progress in the capacity to do work. Both real wages and incomes per capita, as recently reconstructed by historians, trace flat lines until about 1820. Many concurrent factors of change (cultural, political, institutional) contributed to what is today called Modern Growth; yet, without the removal of the energy constraint, steady economic growth would have been unobtainable.

 

To contact the author: malanima@unicz.it

Trade in the Shadow of Power: Japanese Industrial Exports in the Interwar years

By Alejandro Ayuso Díaz and Antonio Tena Junguito (Carlos III University of Madrid)

The history of international trade provides numerous examples of trade in the 'shadow of power' (Findlay and O'Rourke 2007). Here we argue that Japanese imperial power was as important as factor endowments, preferences, and technology to the expansion of trade during the interwar years. Following Garfinkel et al. (2012), the shadow of power that we discuss is based on the use or threat of violence or conquest, which depends on the military capabilities of states.

Figure 1: Japan and World Manufacturing Export Performance. Source: Japanese and world comparative manufacturing exports in volume (1953=100), from UN Historical Trade Statistics.

Japan was a latecomer to 20th-century industrialization, but during the interwar years, and especially in the 1930s, it activated a complex and aggressive industrialization policy to accelerate the modernization of its industry. This policy combined import substitution with exports of manufactures to its region of influence. The newly created empire proved very efficient in developing a peculiar imperial trade in the shadow of power throughout East and Southeast Asia, in conjunction with a more aggressive imperial regional policy of conquest.

The trade-generating capacity of the Japanese empire during the interwar years was much higher than that suggested by Mitchener and Weidenmier (2008) for the preceding period (1870-1913). Some caution is needed in making this comparison, however, because of issues in interpreting the relevant statistics. Membership of the Japanese empire increased trade by more than ten times as much as membership of the British, German and French empires, and by twice as much as membership of the US and Spanish empires. Part of the explanation might be that our coefficients are more prominent because they capture the stronger intra-bloc bias that emerged after the Great Depression.

Employing a granular database of Japanese exports to 117 countries across 1,135 products at six benchmarks (1912, 1915, 1925, 1929, 1932 and 1938), we demonstrate that the expansion of Japanese exports during the interwar period was facilitated by the exploitation of formal and informal imperial links, which exerted a bigger influence on export determination than productivity increases.
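
The article's full specification is in the Review; purely as an illustration of the approach, a gravity-style regression with an empire dummy might look as follows. The variable names and simulated data are hypothetical, and a Poisson pseudo-maximum-likelihood (PPML) estimator is used because trade data contain zeros:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# toy bilateral data: Japanese exports to destination j in year t.
# 'empire' = 1 if the destination is inside Japan's formal or informal
# empire in that year (hypothetical variable names throughout)
rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "exports": rng.gamma(2.0, 50.0, n),             # export values, skewed
    "log_dist": np.log(rng.uniform(500, 15000, n)),
    "log_gdp": rng.normal(8.0, 1.0, n),
    "empire": rng.integers(0, 2, n),
    "year": rng.choice([1912, 1915, 1925, 1929, 1932, 1938], n),
})

# PPML gravity equation with year fixed effects:
# E[exports] = exp(b0 + b1*empire + b2*log_dist + b3*log_gdp + year FE)
model = smf.glm(
    "exports ~ empire + log_dist + log_gdp + C(year)",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")
print(model.params["empire"])   # log-point premium for empire membership
```

The coefficient on the empire dummy is then read as a log-point premium on trade with empire members, holding the gravity controls fixed.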

Figure 2: Japanese total manufacturing exports by skills and region. (a) Manufacturing exports by skills; (b) high-skilled exports by region. Source: Annual Returns of the Foreign Trade of the Empire of Japan.

 

The main characteristic of this trade expansion between 1932 and 1938 was high-skill exports directed towards the Japanese colonies. Additional evidence indicates that Japan did not enjoy comparative advantage in these products, which had limited export-market potential. Colonial infrastructure, building and urbanization were used as exclusive markets for high-skill exports and became one of the main drivers of Japanese export expansion and of its modern industrialization process.

Trade blocs in the interwar years were used as instruments of imperial power to foster exports and as a substitute for productivity in encouraging industrial production. In that sense, Japan's total exports in 1938 were between 28% and 47% higher than in 1912 thanks to imperial mechanisms. The figure is much higher when we capture the imperial effect on high-skill exports (between 66% and 76% higher thanks to imperial connections). The quoted figures are based on a counterfactual comparing exports without the empire to those obtained via imperial mechanisms.
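
As a reading aid (my reconstruction of the arithmetic, not the authors' exact procedure): if empire membership enters a log-linear export equation with coefficient $\beta$, the implied percentage boost is

```latex
\%\Delta X \;=\; e^{\beta} - 1,
\qquad
\beta \approx \ln(1.28) \approx 0.25 \quad\text{to}\quad \beta \approx \ln(1.47) \approx 0.39
```

for the 28-47% range quoted for total exports, and $\beta \approx 0.51$ to $0.57$ for the 66-76% range for high-skill exports.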

We believe that our results demonstrate that the colonial trade bias mechanism used by imperialist countries was inversely related to productivity. The implicit counterfactual hypothesis is that, without imperial intervention in the region, Japan would not have expanded its high-skill exports and would not have exported such a variety of new products. In other words, Japan's industrialisation process would have been much less pronounced.

 

References

Ayuso-Diaz, A. and Tena-Junguito, A. (2019): “Trade in the Shadow of Power: Japanese Industrial Exports in the Interwar years”. Economic History Review (forthcoming).

Findlay, R. and O’Rourke, K. (2007). Power and Plenty. Princeton, NJ: Princeton University Press.

Garfinkel, M., Skaperdas, S., and Syropoulos, C. (2012). 'Trade in the Shadow of Power'. In Garfinkel, M., and Skaperdas, S. (eds.), Oxford Handbook of the Economics of Peace and Conflict. Oxford University Press.

Mitchener, K. J., & Weidenmier, M. (2008). Trade and empire. The Economic Journal, 118(533), 1805-1834.

Ritschl, A. & Wolf, N. (2003). “Endogeneity of Currency Areas and Trade Blocs: Evidence from the Inter-war Period,” CEPR Discussion Papers 4112.

 

To contact the authors:

Alejandro Ayuso Díaz (aayuso@clio.uc3m.es)

Antonio Tena Junguito (antonio.tena@uc3m.es)

Loans of the Revolution: How Mexico Borrowed as the State Collapsed in 1912–13

by Leonardo Weller (São Paulo School of Economics – FGV)

Read the full article in The Economic History Review – published in August 2018, available here

 

Mexico borrowed £6 million abroad in 1913, amidst a civil war that destroyed the state and killed over two million people. Civil wars tend to make creditors wary because of their inevitable consequences: if the borrowing government wins, it will need to spend on reconstruction, leaving it short of cash to repay its debt; if it is defeated, the new incoming government is bound to repudiate its enemy's debt. The Mexican loan of 1913 is unusual because the bankers in charge knew that the government was likely to lose or, at best, fight a long and bloody war. Paribas, the head of the syndicate that underwrote the loan, received first-hand reports from an agent in Mexico City, according to whom:

 ‘The political situation is (…) obscure because the country is still infested by rebellious bands while General Huerta is only president of the republic in a provisory character (…), and no one knows how that will end’.

Victoriano Huerta assassinated Francisco Madero, the leader of the revolution that had deposed the long-serving autocrat Porfirio Díaz, in office at various times between the 1870s and 1911. The report from the Mexican agent was accurate: Huerta aimed to re-establish Díaz's stable regime, but his counter-revolution fostered an unlikely yet powerful alliance between popular insurgents such as Emiliano Zapata and moderate politicians linked to Madero. Pressured by the financial toll of the war, the Huerta administration defaulted on its entire debt in 1914, including the loan taken out in 1913. The insurgents took Mexico City and deposed the dictator, but the war continued and Mexico became a failed state. Peace only came in 1917, but the government in charge of reconstructing the country did not pay the sovereign debt.

Figure 1. Lucha Revolucionaria. Source: Diego Rivera, Palacio Nacional, Mexico City.

Paribas and its fellow syndicate members underwrote the 1913 loan on rather poor borrowing conditions: 6 per cent interest, an issue price of 90 per cent of par, and 10 years' maturity, which together imply a 4 per cent risk premium (a measure of credit cost), twice as high as the premium on the Mexican debt already floating on the London Stock Exchange at the time. In plain language, the loan was remarkably expensive vis-à-vis market conditions. This discrepancy appears in the graph below: the solid line is the premium at which the secondary market traded old Mexican bonds, and the dot is the premium at which the banks issued the new loan in 1913.
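
A back-of-the-envelope check of these terms (my reconstruction; the 3.5 per cent benchmark yield below is an assumption for illustration, not a figure from the article):

```python
def ytm(price, coupon, years, face=100.0, tol=1e-8):
    """Yield to maturity of an annual-coupon bond, found by bisection."""
    def pv(y):  # present value of coupons plus redemption at yield y
        return sum(coupon / (1 + y) ** t for t in range(1, years + 1)) \
               + face / (1 + y) ** years
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # pv falls as the yield rises, so pv(mid) > price means yield > mid
        lo, hi = (mid, hi) if pv(mid) > price else (lo, mid)
    return lo

y = ytm(price=90.0, coupon=6.0, years=10)  # the 1913 terms: ~7.5 per cent
benchmark = 0.035                          # assumed 'safe' yield, c. 1913
print(f"yield {y:.2%}, spread over benchmark {y - benchmark:.2%}")
```

On these terms the implied yield is roughly 7.5 per cent, consistent with the 4 per cent risk premium quoted above for a benchmark yield in the region of 3.5 per cent.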

 

Figure 2. Mexican risk and reports on Mexico in The Times.

Sources: Calculated from Investor’s Monthly Manual, Bulletin de la Côte, The Times, 1911–14.

 

The bankers themselves considered the loan 'too severe a burden on the Government', but agreed that the operation had to be arranged 'in our favour' because of the 'terrible political circumstances' in the country. In line with this dire assessment of Mexican political conditions, Paribas sold its entire share of the 1913 loan. In fact, Paribas acted solely as an underwriter floating the bonds on the international market, as opposed to an underwriter and final creditor (holding a share of the bonds). The bank also liquidated all the other Mexican assets it held, including a significant share of the Banco Nacional de México. A conflict of interest explains why Paribas underwrote the 1913 loan: the new credit created confidence among the public and sustained the price of all Mexican securities, which enabled Paribas to eliminate its exposure to Mexico without realising losses. The liquidation was profitable overall; in particular, the bank sold the 1913 bonds at a 6 per cent margin.

Undoubtedly, Paribas' gains were at the expense of others. Why, then, did bondholders agree to purchase the debt? Figure 2 suggests (and econometric tests confirm) that positive press reports influenced the market. The Times published negative news on Mexico when the revolutionaries deposed Díaz and when the counter-revolutionaries assassinated Madero, in 1911 and 1913 respectively, but it subsequently altered its editorial stance, publishing good news and otherwise going quiet on the country. Meanwhile, bondholders continued buying Mexican bonds in spite of the civil war and, as a result, Mexican risk stayed below 2 per cent, a relatively low rate.

The public read over-optimistic news, while the bankers had access to pessimistic but accurate reports from their agents in Mexico. Thus, Paribas benefited from asymmetric information, which explains why it could profit at the expense of the final creditors.

This case study is at odds with the most recent historical literature on sovereign debt, which stresses the role of debt underwriters as gatekeepers responsible for guiding the market. That literature asserts that bankers produced signals that separated trustworthy borrowers from the rest. In contrast, Paribas exploited the market's disinformation to profit from the liquidation of its Mexican businesses.

 

To contact the author:

leonardo.weller@fgv.br

@leoweller

 

Did efficient options pricing lead or follow the development of the Black-Scholes-Merton model? Evidence from the interwar London Metal Exchange

by David Chambers and Rasheed Saleuddin (Judge Business School, University of Cambridge)

This research is due to be published in the Economic History Review and is currently available on Early View

 

In early 1998 a nervous options trader was asked to fill an order from one of the world's largest global investors. The fund's manager, believing that Canadian short-term rates would fall in the very near future, wanted to buy the option – but not the obligation – to buy two-year bonds over the next two weeks at a 'strike' price of 103 per cent of par when they were actually trading at 102. If rates fell enough by the option's expiry in two weeks, the bond would trade above 103, and the hedge fund would pocket the difference between the actual price and the strike of 103.

The average volume in options on these bonds for the week was probably a few hundred million dollars of par value, but this client was looking for options on $2 billion. It would be impossible for the trader to find the exact matching trade in the market from another client, and he would have to ‘manufacture’ the options himself. How, then, to calculate the price?

The framework for this analysis was largely developed by Fischer Black, Myron Scholes and Robert Merton, and published in the early 1970s (Black and Scholes 1972, Merton 1973). But one crucial input into the so-called Black-Scholes-Merton (BSM) model was difficult to estimate: the expected future volatility (as measured by the standard deviation of returns) of the underlying bond over the next two weeks. The trader looked first to the recent past: what had the two-week volatility been over the past few months? The trader also knew that unemployment numbers were due out in three days and that some uncertainty always surrounds such a release. In the end, the trader used the BSM model with a volatility input slightly higher than the past two weeks' observations, to account for the uncertainty around the unemployment release and the large size of the trade. Upon execution, the trader then used the BSM model to calculate the amount of the underlying bond to sell short to hedge some of the risks of the original trade. That trader was one of the co-authors of an article — 'Commodities option pricing efficiency before Black, Scholes and Merton' — recently published in the Review. In this study, the authors David Chambers and Rasheed Saleuddin examine a commodity futures options market in the interwar period to determine how traders might have made markets in options before the advent of modern models.
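
For readers who want the mechanics: the BSM price of a European call and its delta (the hedge ratio the trader used to decide how much of the bond to sell short) take only a few lines. A minimal sketch; the numerical inputs below are illustrative guesses loosely matching the 1998 anecdote, not figures from the article:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(spot, strike, vol, t, r=0.0):
    """Black-Scholes-Merton price and delta of a European call.
    vol is annualised volatility; t is time to expiry in years."""
    d1 = (log(spot / strike) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    price = spot * norm_cdf(d1) - strike * exp(-r * t) * norm_cdf(d2)
    return price, norm_cdf(d1)   # the call delta is N(d1)

# illustrative inputs only: bond at 102, strike 103, two weeks to run,
# and an annualised price volatility of 2.5% (the trader's judgement)
price, delta = bsm_call(spot=102.0, strike=103.0, vol=0.025, t=14 / 365)
print(f"option value {price:.3f} per 100 of par, hedge ratio {delta:.2f}")
```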

It is often thought that market prices conform to newly implemented models rather than obeying some natural laws of markets before such laws are revealed to observers. It has been suggested that equity options, specifically, were 'performative', in that prices converged to BSM-efficient levels shortly after the dissemination of the model in the early 1970s (MacKenzie and Millo 2003). On the other hand, some claim that it was the advent of liquid exchange trading around the same time that led to BSM efficiency (Kairys and Valerio 1997).

Evidence of efficient pricing before the 1970s is sparse and mixed. There are very few datasets with which to test efficiency, and the few that have been used are far from ideal. Two papers (Kairys and Valerio 1997; Mixon 2009) use one-sided indicative advertised levels targeted at retail investors, with no indication that investors actually traded at these prices. Another paper relies primarily on warrant data, yet warrant prices, even in modern times, are often far from BSM-efficient, for well-understood reasons (Veld 2003). In any event, these studies find that, on average, prices were far from BSM-efficient levels. There is little attempt in this early literature to determine whether prices depended on the most important BSM model input: observed volatility.

This study uses a new dataset: the prices at which the economist John Maynard Keynes traded options on tin and copper futures on the interwar London Metal Exchange. It turns out that Keynes traded at levels that were, on average, as efficient as those in modern markets. Additionally, the traded prices appear to have varied systematically with the key input to the model, observed volatility (Figure 1), with 99% significance and a very high R².

Figure 1. Scatter Diagram of Implied Volatility vs. Historic Volatility of 3-month options on Tin and Copper futures, 1921-31. Source: Chambers and Saleuddin (2019)

How was it possible that Keynes's traders and brokers were able to match BSM-efficient prices so closely? There is some evidence that options traders in the 19th and early 20th centuries understood options theory well. Indeed, Anne Murphy (2009) identified a perhaps surprising degree of sophistication and activity among options traders in 17th-century London. Certainly, by the turn of the twentieth century, options traders had a strong grasp of many of the fundamentals of options trading and pricing (Higgins 1907). Yet understanding of the influence of the volatility of the underlying asset was still in its infancy, and several contemporary breakthroughs in theory were not widely disseminated. Finance scholarship hints at one possible explanation: for options such as those traded by Keynes, the relationship between the key BSM valuation parameter, volatility, and the option price is quite straightforward to estimate (Brenner and Subrahmanyam 1988). Market participants may have been intuitively taking BSM-like considerations into account without an understanding of the model itself. This conclusion is, of course, pure speculation – but perhaps therein lies its fascination?
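
The Brenner and Subrahmanyam (1988) observation is that for an at-the-money option (and abstracting from interest rates) the BSM formula collapses to a near-linear rule of thumb:

```latex
C \;\approx\; S\,\sigma\,\sqrt{\tfrac{T}{2\pi}} \;\approx\; 0.4\,S\,\sigma\sqrt{T},
\qquad\text{equivalently}\qquad
\sigma \;\approx\; \frac{2.5\,C}{S\sqrt{T}} .
```

A broker with a feel for volatility could therefore quote near-BSM prices for at-the-money options without knowing the model, which is one way to read the pattern in Figure 1.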

To contact the authors:

David Chambers:
d.chambers@jbs.cam.ac.uk

Rasheed Saleuddin,
Rks66@me.com
@r_sale.

It is only cheating if you get caught – Creative accounting at the Bank of England in the 1960s

by Alain Naef (Postdoctoral fellow at the University of California, Berkeley)

This research was presented at the EHS conference in Keele in 2018 and is available as a working paper here. It is also available as an updated 2019 version here.

 

The Bank of England. Available at Wikimedia Commons.

The 1960s were a period of crisis for the pound. Britain was on a fixed exchange rate system and needed to defend its currency with intervention on the foreign exchange market. To avoid a crisis, the Bank of England resorted to ‘window dressing’ the published reserve figures.

In the 1960s, the Bank came under pressure from two sides: first, publication of the Radcliffe report (https://en.wikipedia.org/wiki/Radcliffe_report) forced the Bank to publish more transparent accounts; second, with the removal of capital controls in 1958, the Bank came under attack from international speculators (Schenk 2010). These contradictory pressures put the Bank in an awkward position: it needed to publish its reserve position (holdings of dollars and gold), but it recognised that doing so could trigger a run on sterling, thereby creating a self-fulfilling currency crisis (see Krugman: http://www.nber.org/chapters/c11032.pdf).

For a long time, the Bank had a reputation for the obscurity of its accounts and its lack of transparency. Andy Haldane (Chief Economist at the Bank) recognised that, for 'most of [its] history, opacity has been deeply ingrained in central banks' psyche'.

(https://www.bankofengland.co.uk/speech/2017/a-little-more-conversation-a-little-less-action). One Federal Reserve (Fed) memo noted that the Bank of England took ‘a certain pride in pointing out that hardly anything can be inferred by outsiders from their balance sheet’, another that ‘it seems clear that the Bank of England is being pushed – by much public criticism – into giving out more information.’ However, the Bank did eventually publish reserve figures at a quarterly, and then monthly, frequency (Figure 1).

Transparency about the reserves created a risk of a currency crisis, so in late 1966 the Bank developed a strategy for reporting levels that would not cause one (Capie 2010). Figure 1 illustrates how 'window dressing' worked. The solid line reports the convertible reserves as published in the Quarterly Bulletin of the Bank of England. This information was available to market participants. The stacked columns show the actual daily dollar reserves. Spikes appear at monthly intervals, indicating the short-term borrowing that was used to ensure the reserves level was high enough on reporting days.

 

Figure 1. Published EEA convertible currency reserves vs. actual dollar reserves held at the EEA, 1962-1971.


 

The Bank borrowed dollars shortly before the reserve reporting day by drawing on swap lines (similar to the Fed in 2007 https://voxeu.org/article/central-bank-swap-lines). Swap drawings could be used overnight. Table 1 illustrates how window dressing worked using data from the EEA ledgers available at the archives of the Bank. As an example, on Friday, 31 May 1968, the Bank borrowed over £450 million – an increase in reserves of 171%. The swap operation was reversed the next working day, and on Tuesday the reserves level was back to where it was before reporting. The details of these operations emphasise how swap networks were short-term instruments to manipulate published figures.
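
The ledger example reduces to simple arithmetic. In the sketch below the pre-swap reserve level is backed out from the two figures quoted in the text rather than taken from the ledger itself:

```python
borrowed = 450.0                          # swap drawing, Friday 31 May 1968 (GBP m)
jump_pct = 171.0                          # reported rise in reserves (per cent)
pre_swap = borrowed / (jump_pct / 100.0)  # implied pre-swap level: ~GBP 263m
reported = pre_swap + borrowed            # level shown on the reporting day
print(f"pre-swap {pre_swap:.0f}m, reported {reported:.0f}m; "
      f"back to {pre_swap:.0f}m once the swap is unwound on Tuesday")
```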

 

Table 1. Daily entry in the EEA ledger showing how window dressing worked


 

The Bank of England's window dressing was done in collaboration with the Fed. The two institutions discussed reserve figures before the Bank published them, and during most of the 1960s they were in daily contact about exchange rate matters. Records of these phone conversations are sparse at the Bank, but the Fed kept daily records (archives of the Fed in New York, references 617031 and 617015).

During the 1960s, collaboration between the two central banks intensified. The Bank consulted the Fed on the exact wording of the reserve publication (Naef, 2019), and the Fed coordinated its communication on the swap position with the Bank, to ensure that the two institutions' public statements were consistent. Indeed, the Fed sent excerpts of minutes to the Bank to allow the excision of anything mentioning window dressing (archives of the Fed in New York, reference 107320). Thus, in December 1971, before publishing the minutes of the Federal Open Market Committee (FOMC) for 1966, Charles Coombs (a leading figure at the Fed) consulted Richard Hallet (Chief Cashier at the Bank):

‘You will recall that when you visited us in December 1969, we invited you to look over selected excerpts from the 1966 FOMC minutes involving certain delicate points that we thought you might wish to have deleted from the published version. We have subsequently deleted all of the passages which you found troublesome. Recently, we have made a final review of the minutes and have turned up one other passage that I am not certain you had an opportunity to go over. I am enclosing a copy of the excerpt, with possible deletions bracketed in red ink.’

Source: Letter from Coombs to Hallet, New York Federal Reserve Bank archives, 1 December 1971, Box 107320.

 

Coombs suggested deleting passages in which some FOMC members criticised window dressing, while other members suggested that the Bank would get better results 'if they reported their reserve position accurately than if they attempted to conceal their true reserve position' (https://fraser.stlouisfed.org/scribd/?item_id=22913&filepath=/docs/historical/FOMC/meetingdocuments/19660628Minutesv.pdf). MacLaury (FOMC), however, stressed that there was a risk of 'setting off a cycle of speculation against sterling' if the Bank published a loss of $200 million, which was 'large for a single month' in comparison with what had been published the previous month.

The history of the Bank's window dressing is a reminder of the difficulties central banks face in managing and reporting reserves; the issue remains live today, as investors closely monitor the reserves of the People's Bank of China.

 

 

To contact the author: alain.naef@berkeley.edu

 

References:

Capie, Forrest. 2010. The Bank of England: 1950s to 1979. Cambridge: Cambridge University Press.

Naef, Alain. 2019. “Dirty Float or Clean Intervention?  The Bank of England in the Foreign Exchange Market.” Lund Papers in Economic History. General Issues, no. 2019:199. http://lup.lub.lu.se/record/dfe46e60-6dfb-4380-8354-e7b699ed8ef9.

Schenk, Catherine. 2010. The Decline of Sterling: Managing the Retreat of an International Currency, 1945–1992. Cambridge University Press.

All quiet before the take-off? Pre-industrial regional inequality in Sweden (1571-1850)

by Anna Missiaia and Kersten Enflo (Lund University)

This research is due to be published in the Economic History Review and is currently available on Early View.

 

Södra Bancohuset (The Southern National Bank Building), Stockholm. Available here at Wikimedia Commons.

For a long time, scholars have thought about regional inequality merely as a by-product of modern economic growth: following a Kuznets-style interpretation, the front-running regions pull ahead during industrialization, raising income levels and regional inequality; only when the other regions catch up does overall regional inequality decrease, completing the inverted-U-shaped pattern. But early empirical research on this theme was largely focused on the 20th century, ignoring the industrial take-off of many countries (Williamson, 1965). More recent empirical studies have pushed the temporal boundary back to the mid-19th century, finding that inequality in regional GDP was already high at the outset of modern industrialization (see for instance Rosés et al., 2010 on Spain and Felice, 2018 on Italy).

The main constraint on taking the estimates well into the pre-industrial period is the availability of suitable regional sources. The exceptional quality of Swedish sources allowed us, for the first time, to estimate a dataset of regional GDP for a European economy going back to the 16th century (Enflo and Missiaia, 2018). The estimates used here for 1571 are largely based on a one-off tax proportional to yearly production: the Swedish Crown imposed this tax on all Swedish citizens in order to pay the ransom for the strategic Älvsborg castle, which had just been conquered by Denmark. For the period 1750-1850, the estimates rely on standard population censuses. By connecting the new series to the existing ones from 1860 onwards (Enflo et al., 2014), we obtain the longest regional GDP series for any country.

We find that inequality increased dramatically between 1571 and 1750 and remained high until the mid-19th century; thereafter, it declined during the country's modern industrialization (Figure 1). Our results challenge the traditional view that regional divergence can only originate in an industrial take-off.

 

Figure 1. Coefficient of variation of GDP per capita across Swedish counties, 1571-2010.

Sources: 1571-1850: Enflo and Missiaia, 'Regional GDP estimates for Sweden, 1571-1850'; 1860-2010: Enflo et al., 'Swedish regional GDP 1855-2000' and Rosés and Wolf, 'The Economic Development of Europe's Regions'.

 

Figure 2 shows the relative disparities in four benchmark years. While the country appeared relatively equal in 1571, between 1750 and 1850 both the mining districts in central and northern Sweden and the port cities of Stockholm and Gothenburg pulled ahead.

 

Figure 2. The relative evolution of GDP per capita, 1571-1850 (Sweden=100).

Sources: 1571-1850: Enflo and Missiaia, 'Regional GDP estimates for Sweden, 1571-1850'; 2010: Rosés and Wolf, 'The Economic Development of Europe's Regions'.

The second part of the paper studies the drivers of pre-industrial regional inequality. Decomposing the Theil index for GDP per worker, we show that regional inequality was driven by structural change: regions diverged because they specialized in different sectors. A handful of regions specialized in either early manufacturing or mining, both of which had much higher productivity per worker than agriculture.
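
A between/within decomposition of this kind can be sketched in a few lines. This is an illustration only, with hypothetical numbers; the article's exact weighting and sector detail may differ. The between-sector term captures the 'structural change' channel emphasised here:

```python
import numpy as np

def theil_decomposition(gdp, workers):
    """gdp and workers are 2-D arrays (regions x sectors).
    Returns the GDP-weighted Theil index of GDP per worker across
    region-sector cells, split into a between-sector component
    (structural change) and a within-sector component."""
    Y, L = gdp.sum(), workers.sum()
    productivity = gdp / workers              # GDP per worker in each cell
    weights = gdp / Y                         # GDP share of each cell
    total = (weights * np.log(productivity / (Y / L))).sum()

    Y_j, L_j = gdp.sum(axis=0), workers.sum(axis=0)   # sector totals
    between = ((Y_j / Y) * np.log((Y_j / L_j) / (Y / L))).sum()
    return total, between, total - between

# hypothetical numbers: 3 regions x 2 sectors (agriculture, industry/mining)
gdp = np.array([[8.0, 1.0], [6.0, 4.0], [2.0, 9.0]])
workers = np.array([[9.0, 0.5], [7.0, 1.5], [3.0, 2.0]])
total, between, within = theil_decomposition(gdp, workers)
print(f"Theil {total:.3f} = between-sector {between:.3f} + within {within:.3f}")
```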

To explain these different trajectories, we use a theoretical framework introduced by Strulik and Weisdorf (2008) in the context of the British Industrial Revolution: in regions with a higher share of GDP in agriculture, technological advances led to productivity improvements but also to a proportional increase in population, impeding growth in GDP per capita, as in a classic Malthusian framework. Regions with a higher share of GDP in industry, by contrast, experienced limited population growth owing to the increasing relative price of children, leading to a higher level of GDP per capita. Regional inequality in this framework arises from the differing force of the Malthusian mechanism in the two sectors.

Our work speaks to a growing literature on the origin of regional divergence and represents the first effort to perform this type of analysis before the 19th century.

 

To contact the authors:

anna.missiaia@ekh.lu.se

kerstin.enflo@ekh.lu.se

 

References

Enflo, K. and Missiaia, A., ‘Regional GDP estimates for Sweden, 1571-1850’, Historical Methods, 51(2018), 115-137.

Enflo, K., Henning, M. and Schön, L., 'Swedish regional GDP 1855-2000: estimations and general trends in the Swedish regional system', Research in Economic History, 30(2014), pp. 47-89.

Felice, E., ‘The roots of a dual equilibrium: GDP, productivity, and structural change in the Italian regions in the long run (1871-2011)’, European Review of Economic History, (2018), forthcoming.

Rosés, J., Martínez-Galarraga, J. and Tirado, D., ‘The upswing of regional income inequality in Spain (1860–1930)’,  Explorations in Economic History, 47(2010), pp. 244-257.

Strulik, H. and Weisdorf, J., 'Population, food, and knowledge: a simple unified growth theory', Journal of Economic Growth, 13(2008), pp. 195-216.

Williamson, J., ‘Regional Inequality and the Process of National Development: A Description of the Patterns’, Economic Development and Cultural Change 13(1965), pp. 1-84.

 

How many days a year did people work in England before the Industrial Revolution?

By Judy Stephenson (University College London)

The full paper that inspired this blog post will be published in The Economic History Review and is currently available on early view here

St Paul’s Cathedral – the construction of the Dome. Available at <https://www.explore-stpauls.net/oct03/textMM/DomeConstructionN.htm>

How many days a year did people work in England before the Industrial Revolution? For those who don’t spend their waking hours desperate for sources to inform wages and GDP per capita over seven centuries, this question provokes an agreeable discussion about artisans, agriculture and tradition. Someone will mention EP Thompson and clocks or Saint Mondays. ‘Really that few?’ It’s quaint.

But for those of us who do spend our waking hours desperate for sources to inform wages and GDP per capita over seven centuries, the question has evolved in the last few years into a debate about productivity and about when modern economic growth began in an 'industrious revolution'. A serious body of research in economic history has recently estimated rising numbers of days worked from the late seventeenth century: current estimates are that people worked about 270 days a year by 1700, rising to about 300 after 1750.

The uninitiated might think that estimates of something as important as the working year would be based on substantive evidence, but in fact most estimates of the working year that economic historians have used for the last two decades don't come from working records at all. They come from court depositions, in which witnesses told the courts when they went to and left work, or from working out how many days a worker had to toil to afford a basket of consumption goods. The latter approach, pioneered by Jacob Weisdorf and Bob Allen in 2011, essentially holds welfare constant throughout history, and it is the key assumption in a new paper on wages forthcoming from Jane Humphries and Jacob Weisdorf. Unsurprisingly for historians familiar with the miserable conditions under which the poor toiled in eighteenth-century Britain, this calculation frequently yields a high number of days worked. It also implies that Londoners, thanks to their higher day wages, may have had slightly more leisure than rural workers. Both implications might appear counterintuitive.
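
In its simplest form the welfare-constant calculation inverts a budget identity (notation mine):

```latex
d_t \;=\; \frac{p_t \cdot B}{w_t},
```

where $B$ is the fixed consumption basket, $p_t$ its price, $w_t$ the daily wage, and $d_t$ the implied number of days worked per year. Holding $B$ fixed means that any rise in prices relative to day wages is read as a longer working year, and higher-wage Londoners mechanically come out with fewer implied days.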

Knowledgeable historians, such as John Hatcher, have pointed out that the idea that anyone had 270 days of paid work a year before the industrial revolution is fanciful. But unless there was an industrious revolution, and people did begin to work more days per year in market work, as Jan de Vries posited, the established evidence firmly implies that workers became worse off throughout the eighteenth century, because wage rates as measured by builders' wages didn't increase in line with inflation, and in fact builders earned even less than we thought.

My article, 'Working days in a London construction team in the eighteenth century: evidence from St Paul's Cathedral', forthcoming in the Review, takes a different approach: it uses the actual working records of a team of masons working under William Kempster, who constructed the south-west tower of St Paul's Cathedral. For five years in the 1700s these archives are exceptionally detailed. They show that building was seasonal (it's not that we didn't know; it's just that we had sort of forgotten) and stage-dependent, so not all men could have worked all year. In fact, they didn't. Surprisingly for a stable firm at an established and large site, very few men worked for Kempster for more than about 27 weeks. Work was temporary and insecure, and working and employment relationships were casual.

A crude average of the days each man worked in any year would come to fewer than 150 days. Taking such an average is obviously misleading, and it is not what the paper claims, because men evidently worked for other employers too. But what the working patterns reveal is that unless men moved seamlessly from one employer to another, with no search costs or time in between, it would have been impossible for them to work 250 days a year. It is more plausible that they were able to work between 200 and 220 days.

Moreover, the data show that men did not work the full six days per week on offer: the average number of days worked per week was only 5.2. This was not because men took Saint Mondays off (Saint Mondays are almost indiscernible in the records) but because they took idiosyncratic breaks. Only the foremen seem to have been able to sustain six days a week.
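
The arithmetic makes the constraint clear (my calculation from the figures above):

```latex
5.2 \times 52 \;\approx\; 270 \ \text{days per year},
\qquad
5.2 \times 27 \;\approx\; 140 \ \text{days with a single employer},
```

so a 270-day year would have required work in every week of the year at the observed intensity, with no gaps at all between engagements; allowing for realistic search time between jobs points to the 200-220 day range suggested above.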

However, men who had a longer relationship with Kempster worked more days per year than the rest. This implies that stronger working relationships, or a consolidation of relationships between employers and workers, might have led to an increase in the average number of days worked. However, architectural and construction historians generally think that consolidation in the industry did not occur until the 1820s. If there was an industrious revolution in the eighteenth century, it might not have happened for builders. And if builders' wages are representative – an old assumption that seems increasingly stretched these days – then the story for wages in the eighteenth century is even more pessimistic than before.

The evidence from working records presented in this article is still relatively fragmentary, but it clearly shows that holding welfare stable by calculating the number of days worked from consumption goods – as the Weisdorf/Humphries/Allen approach does – does not give us the whole story.

But then again, is it really plausible to hold welfare stable? The debate, and the scholarship, will no doubt continue.

 

To contact the author:

J.Stephenson@ucl.ac.uk

@judyzara

A Policy of Credit Disruption: The Punjab Land Alienation Act of 1900

by Latika Chaudhary (Naval Postgraduate School) and Anand V. Swamy (Williams College)

This research is due to be published in the Economic History Review and is currently available on Early View.

 

Farming, farms, crops, agriculture in North India. Available at Wikimedia Commons.

In the late 19th century the British-Indian government (the Raj) became preoccupied with default on debt and the consequent transfer of land in rural India. In many regions Raj officials made the following argument: British rule had created or clarified individual property rights in land, which for the first time made land available as collateral for debt. Consequently, peasants could now borrow up to the full value of their land. The Raj had also replaced informal village-based forms of dispute resolution with a formal legal system operating outside the village, which favored the lender over the borrower. Peasants were spendthrift and naïve, and unable to navigate the new formal courts created by British rule, whereas lenders were predatory and legally savvy. Borrowers were frequently defaulting, and land was rapidly passing from long-standing resident peasants to professional moneylenders who were often immigrant, of another religion, or both. This, officials feared, would lead to social unrest and threaten British rule. To preserve British rule it was essential that one of the links in this chain be broken, even if that meant abandoning cherished notions of the sanctity of property and contract.

The Punjab Land Alienation Act (PLAA) of 1900 was the most ambitious policy intervention motivated by this thinking. It sought to prevent professional moneylenders from acquiring the property of traditional landowners. To this end it banned, except under some conditions, the permanent transfer of land from an owner belonging to an ‘agricultural tribe’ to a buyer or creditor who was not from this tribe. Moreover, a lender who was not from an agricultural tribe could no longer seize the land of a defaulting debtor who was from an agricultural tribe.

The PLAA made direct restrictions on the transfer of land a respectable part of the policy apparatus of the Raj, and its influence persists to the present day. There is a substantial literature on the emergence of the PLAA, yet there is no econometric work on two basic questions regarding its impact. First, did the PLAA reduce the availability of mortgage-backed credit, or were borrowers and lenders able to use various devices to evade the Act, thereby neutralizing it? Second, if less credit was available, what were the effects on agricultural outcomes and productivity? We use panel data methods to address these questions for the first time, so far as we know.

Our work provides evidence on an unusual policy experiment that is relevant to a hypothesis of broad interest. It is often argued that 'clean titling' of assets can facilitate their use as collateral, increasing access to credit and leading to more investment and faster growth. Informed by this hypothesis, many studies estimate the effects of titling on credit and other outcomes, but they usually pertain to reforms that made assets more usable as collateral. The PLAA went in the opposite direction: it reduced the 'collateralizability' of land, which, on the argument we have described, should have reduced investment and growth. We investigate whether it did.

To identify the effects of the PLAA, we assembled a panel dataset on 25 districts in Punjab from 1890 to 1910. Our dataset contains information on mortgages and sales of land; economic outcomes, such as acreage and ownership of cattle; and control variables like rainfall and population. Because the PLAA targeted professional moneylenders, it should have reduced mortgage-backed credit more in places where they were bigger players in the credit market. Hence, we interact a measure of the importance of professional (that is, non-agricultural) moneylenders in the mortgage market with an indicator variable for the introduction of the PLAA, which takes the value 1 from 1900 onward. As expected, we find that the PLAA contracted credit more in places where professional moneylenders played a larger role, compared with districts with no professional moneylenders. The PLAA reduced mortgage-backed credit by 48 percentage points more at the 25th percentile of our measure of moneylender importance and by 61 percentage points more at the 75th percentile.

However, this decrease of mortgage-backed credit in professional moneylender-dominated areas did not lead to lower acreage or less ownership of cattle. In short, the PLAA affected credit markets as we might expect without undermining agricultural productivity. Because we have panel data, we are able to account for potential confounding factors such as time-invariant unobserved differences across districts (using district fixed effects), shocks common to all districts (using year effects), and the possibility that districts were trending differently independent of the PLAA (using district-specific time trends).
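
In regression form, the specification described above amounts to a continuous-treatment difference-in-differences (notation mine, reconstructed from the description rather than copied from the article):

```latex
\text{Credit}_{dt} \;=\; \beta\,\big(\text{ML}_d \times \text{Post1900}_t\big)
\;+\; X_{dt}'\theta \;+\; \gamma_d \;+\; \delta_t \;+\; \lambda_d\, t \;+\; \varepsilon_{dt},
```

where $\text{ML}_d$ measures the pre-PLAA importance of professional moneylenders in district $d$'s mortgage market, $\text{Post1900}_t$ switches to 1 from 1900 onward, $\gamma_d$ and $\delta_t$ are district and year fixed effects, $\lambda_d t$ are district-specific trends, and $X_{dt}$ holds controls such as rainfall and population. The credit contraction corresponds to $\beta < 0$.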

British officials provided a plausible explanation for the non-impact of the PLAA on agricultural production: lenders had merely become more judicious. They were still willing to lend for productive activity, but not for 'extravagant' expenditures such as social ceremonies. There may be a general lesson here: policies that make it harder for lenders to recover money may have the beneficial effect of encouraging due diligence.

 

 

To contact the authors:

lhartman@nps.edu

aswamy@williams.edu