The Economic History Society is saddened to learn of the recent death of Professor Robert (Bob) Millward. Professor Millward was professor of economics at Salford University before taking the chair in economic history at Manchester University in 1989. Bob was a highly regarded scholar with diverse interests in economic history and he will be sorely missed. Read an academic appreciation of Robert Millward here.
The 2020 Annual Conference will be held at St Catherine’s College, Oxford, Friday 17 – Sunday 19 April. Registration, sessions, delegate accommodation, dining, and meetings will all be located in the College.
- A link to the call for papers can be found here. Deadline: 2 September 2019
- A link to the call for posters can be found here. Deadline: 18 November 2019
More information here
by Leonardo Weller (São Paulo School of Economics – FGV)
Read the full article in The Economic History Review – published in August 2018, available here
Mexico borrowed £6 million abroad in 1913, amidst a civil war that destroyed the state and killed over two million people. Civil wars tend to make creditors wary because of their inevitable consequences: if the borrowing government wins, it will need to spend on reconstruction, leaving it short of cash to repay its debt; if it is defeated, the incoming government is bound to repudiate its enemy’s debt. The Mexican loan of 1913 is unusual because the bankers in charge knew that the government was likely to lose or, at best, fight a long and bloody war. Paribas, the head of the syndicate that underwrote the loan, received first-hand reports from an agent in Mexico City, according to whom:
‘The political situation is (. . .) obscure because the country is still infested by rebellious bands while General Huerta is only president of the republic in a provisory character (…), and no one knows how that will end’.
Victoriano Huerta assassinated Francisco Madero, leader of the revolution that deposed the long-serving autocrat Porfirio Díaz, who was in office at various times between the 1870s and 1911. The report from the Mexican agent was accurate: Huerta aimed to re-establish Díaz’s stable regime, but his counter-revolution fostered an unlikely but powerful alliance between popular insurgents such as Emiliano Zapata and moderate politicians linked to Madero. Pressured by the financial toll of the war, the Huerta administration defaulted on its entire debt – including the loan taken out in 1913 – in 1914. The insurgents took Mexico City and deposed the dictator, but the war continued and Mexico became a failed state. Peace only came in 1917, but the government in charge of reconstructing the country did not pay the sovereign debt.
Paribas and its fellow syndicate members underwrote the 1913 loan on rather poor borrowing terms: 6 per cent interest, 90 per cent discount, and 10 years maturity, which resulted in a 4 per cent risk premium (a measure of credit cost), twice the premium applied to the Mexican debt already floating on the London Stock Exchange at the time. In plain language, the loan was remarkably expensive vis-à-vis market conditions. This discrepancy appears in the graph below: The solid line is the premium at which the secondary market traded old Mexican bonds, and the dot is the premium at which the banks issued the new loan in 1913.
Figure 1. Mexican risk and reports on Mexico in The Times.
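The arithmetic behind such a premium can be sketched roughly as follows, assuming that ‘90 per cent discount’ means an issue price of 90 per cent of par and that the premium is measured against a risk-free benchmark yield of about 3.5 per cent; both readings are assumptions on my part, and the article’s own calculation may differ.

```python
# Rough yield-to-maturity calculation for a 10-year bond with a 6% coupon
# issued at 90% of par. All figures are illustrative; the benchmark yield
# of 3.5% is an assumption, not a number taken from the article.
def ytm(price: float, coupon: float, years: int) -> float:
    """Yield to maturity of a bullet bond (par = 100), found by bisection."""
    def pv(y: float) -> float:
        return sum(coupon / (1 + y) ** t for t in range(1, years + 1)) + 100 / (1 + y) ** years
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if pv(mid) > price:   # price too high at this yield -> true yield is higher
            lo = mid
        else:
            hi = mid
    return mid

y = ytm(price=90.0, coupon=6.0, years=10)
print(f"issue yield ~ {y:.2%}; spread over a 3.5% benchmark ~ {y - 0.035:.2%}")
```

On these illustrative assumptions the issue yield comes out at roughly 7.5 per cent, about 4 percentage points above the benchmark.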
The bankers themselves considered the loan ‘too severe a burden on the Government’, but agreed that the operation had to be arranged ‘in our favour’ because of the ‘terrible political circumstances’ in the country. In line with this dire assessment of Mexican political conditions, Paribas sold its entire share of the 1913 loan. In fact, Paribas acted solely as an underwriter to float the bonds on the international market, as opposed to underwriter and final creditor (holding a share of the bonds). The bank also liquidated all the other Mexican assets it held, including a significant share of the Banco Nacional de México. A conflict of interest explains why Paribas underwrote the 1913 loan: The new credit created confidence among the public and sustained the price of all Mexican securities, which enabled Paribas to eliminate its exposure to Mexico without realising losses. The liquidation was profitable overall: in particular, the bank sold the 1913 bonds at a 6 per cent margin.
Undoubtedly, Paribas’ gains were at the expense of others. Why, then, did bondholders agree to purchase the debt? Figure 1 suggests (and econometric tests confirm) that positive press reports influenced the market. The Times published negative news on Mexico when the revolutionaries deposed Díaz and the counter-revolutionaries assassinated Madero, in 1911 and 1913, respectively, but it subsequently altered its editorial stance, publishing good news and generally going quiet on the country. Meanwhile, bondholders continued buying Mexican bonds in spite of the civil war and, as a result, Mexican risk stayed below 2 per cent, a relatively low rate.
The public read over-optimistic news, while the bankers had access to pessimistic but accurate reports from their agents in Mexico. Thus, Paribas benefited from asymmetric information, which explains why it could profit at the expense of the final creditors.
This case study is at odds with the most recent historical literature on sovereign debt, which stresses the role of debt underwriters as gatekeepers, responsible for guiding the market. The literature asserts that bankers produced signals that separated trustworthy borrowers from the rest. In contrast, Paribas exploited the market’s disinformation to profit from the liquidation of its Mexican businesses.
To contact the author:
by David Chambers and Rasheed Saleuddin (Judge Business School, University of Cambridge)
In early 1998 a nervous options trader was asked to fill an order from one of the world’s largest global investors. The fund’s manager, believing that Canadian short term rates would fall in the very near future, wanted to buy the option – but not the obligation – to buy two-year bonds for the next two weeks at a ‘strike’ price of 103 per cent of par when they were actually trading at 102. If rates fell enough upon the option’s expiry in two weeks, the bond would trade above 103, and the hedge fund would pocket the difference between the actual price and the strike of 103.
The average volume in options on these bonds for the week was probably a few hundred million dollars of par value, but this client was looking for options on $2 billion. It would be impossible for the trader to find the exact matching trade in the market from another client, and he would have to ‘manufacture’ the options himself. How, then, to calculate the price?
The framework for this analysis was largely developed by Fischer Black, Myron Scholes and Robert Merton in the early 1970s (Black and Scholes 1972, Merton 1973). But one crucial input into the so-called Black-Scholes-Merton (BSM) model was difficult to estimate: the expected future volatility (as measured by the standard deviation of returns) of the underlying bond over the next two weeks. The trader looked first to the recent past: What had the two-week volatility been over the past few months? The trader also knew that unemployment numbers were due out in three days and that some uncertainty always surrounds such a release. In the end, the trader used the BSM model with a volatility input slightly higher than the past two weeks’ observations, to account for the uncertainty around the unemployment release and the large size of the trade. Upon execution, the trader then used the BSM model to calculate the amount of the underlying bond to sell short to hedge some of the risks of the original trade. That trader was one of the co-authors of an article — ‘Commodities option pricing efficiency before Black, Scholes and Merton’ — recently published in the Review. In this study, the authors David Chambers and Rasheed Saleuddin examine a commodity futures options market in the interwar period to determine how traders might have made markets in options before the advent of modern models.
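For readers who want to see the mechanics, here is a minimal sketch of the kind of calculation described above: a European call priced with the BSM formula, plus the delta used to size the offsetting short position. Treating the bond price as a lognormal underlying is a simplification, and the spot, strike, expiry, volatility and interest rate below are illustrative assumptions rather than figures from the article.

```python
# Minimal Black-Scholes-Merton call price and delta. Prices are in per cent of
# par; the volatility and rate inputs are assumed for illustration only.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(spot: float, strike: float, t_years: float, vol: float, rate: float):
    """Return (price, delta) of a European call under BSM."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    price = spot * norm_cdf(d1) - strike * exp(-rate * t_years) * norm_cdf(d2)
    return price, norm_cdf(d1)

# Spot 102, strike 103, two weeks to expiry, an assumed 4% annualised price
# volatility and a 5% short rate.
price, delta = bsm_call(spot=102.0, strike=103.0, t_years=14 / 365, vol=0.04, rate=0.05)
print(f"option price ~ {price:.3f} per cent of par, hedge ratio (delta) ~ {delta:.2f}")
```

Raising the volatility input immediately raises the option price, which is precisely the lever the trader adjusted to allow for the unemployment release and the size of the order; the delta output is what determines how much of the underlying bond to sell short.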
It is often thought that market prices conform to newly-implemented models rather than obeying some natural laws of markets before such laws are revealed to observers. It has been suggested that equity options, specifically, were ‘performative’ in that they converged to BSM-efficient levels shortly after the dissemination of this model in the early 1970s (MacKenzie and Millo 2003). On the other hand, some claim that it was the advent of liquid exchange trading around the same time that led to BSM efficiency (Kairys and Valerio 1997).
Evidence of efficient pricing before the 1970s is sparse and mixed. There are very few data sets with which to test efficiency, and the few that have been used are far from ideal. Two papers (Kairys and Valerio 1997, Mixon 2009) use one-sided indicative advertised levels targeted to retail investors, without any indication that these were prices upon which investors traded. Another paper uses primarily warrant data, yet the prices of warrants, even in modern times, are often far from BSM efficient, for well-understood reasons (Veld 2003). In any event, these studies find that, on average, prices were far from BSM-efficient levels. There is little attempt in this early literature to determine if prices were dependent on the most important BSM model parameter – observed volatility.
This study uses a new data set: prices at which the economist John Maynard Keynes traded options on tin and copper futures on the interwar London Metal Exchange. It turns out that Keynes traded at levels that were – on average – as efficient as modern markets. Additionally, the traded prices appear to have varied systematically with the key input to the model, observed volatility (Figure 1), with 99% significance and a very high R-squared.
How was it possible that Keynes’ traders and brokers were able to match BSM-efficient prices so closely? There is some suggestion that options traders in the 19th and early 20th centuries understood options theory well. Indeed, Anne Murphy (2009) has identified a perhaps surprising degree of sophistication and activity among options traders in 17th century London. Certainly, by the turn of the previous century, options traders had a strong grasp of many of the fundamentals of options trading and pricing (Higgins 1907). Yet contemporary understanding of the influence of the volatility of the underlying asset was still in its infancy, and several theoretical breakthroughs of the period were not disseminated widely. Finance scholarship hints at one possible explanation: For options such as those traded by Keynes, the relationship between the key BSM valuation parameter, volatility, and the option price is quite straightforward to estimate (Brenner and Subrahmanyam 1988). It may have been the case that market participants were intuitively taking BSM into account without an understanding of the model itself. This conclusion is, of course, pure speculation – but perhaps therein lies its fascination?
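To illustrate why that relationship is so easy to work with, the sketch below uses the Brenner-Subrahmanyam approximation for a near-the-money call, under which the option price is roughly 0.4 × spot × volatility × √time and therefore roughly linear in volatility. The numbers are invented for illustration.

```python
# Brenner-Subrahmanyam style approximation for an at-the-money call.
# Spot price, volatilities and maturity below are invented for illustration.
from math import sqrt

def atm_call_approx(spot: float, vol: float, t_years: float) -> float:
    """Approximate at-the-money call price: 0.4 * spot * vol * sqrt(time)."""
    return 0.4 * spot * vol * sqrt(t_years)

for vol in (0.10, 0.20, 0.30):
    price = atm_call_approx(spot=100.0, vol=vol, t_years=0.25)
    print(f"volatility {vol:.0%} -> approximate call price {price:.2f}")
# Because price scales linearly with volatility here, quoting an option price
# is, in effect, quoting a view on volatility - even without the full model.
```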
To contact the authors:
by Alain Naef (Postdoctoral fellow at the University of California, Berkeley)
The 1960s were a period of crisis for the pound. Britain was on a fixed exchange rate system and needed to defend its currency with intervention on the foreign exchange market. To avoid a crisis, the Bank of England resorted to ‘window dressing’ the published reserve figures.
In the 1960s, the Bank came under pressure from two sides: first, publication of the Radcliffe report (https://en.wikipedia.org/wiki/Radcliffe_report) forced the publication of more transparent accounts. Second, with the removal of capital controls in 1958, the Bank came under attack from international speculators (Schenk 2010). These contradictory pressures put the Bank in an awkward position. It needed to publish its reserve position (holdings of dollars and gold), but it recognised that doing so could trigger a run on sterling, thereby creating a self-fulfilling currency crisis (see Krugman: http://www.nber.org/chapters/c11032.pdf).
For a long time, the Bank had a reputation for the obscurity of its accounts and its lack of transparency. Andy Haldane (Chief Economist at the Bank) recognised that, for ‘most of [its] history, opacity has been deeply ingrained in central banks’ psyche’.
(https://www.bankofengland.co.uk/speech/2017/a-little-more-conversation-a-little-less-action). One Federal Reserve (Fed) memo noted that the Bank of England took ‘a certain pride in pointing out that hardly anything can be inferred by outsiders from their balance sheet’, another that ‘it seems clear that the Bank of England is being pushed – by much public criticism – into giving out more information.’ However, the Bank did eventually publish reserve figures at a quarterly, and then monthly, frequency (Figure 1).
Transparency about the reserves created a risk of a currency crisis, so in late 1966 the Bank developed a strategy for reporting levels that would not cause a crisis (Capie 2010). Figure 1 illustrates how ‘window dressing’ worked. The solid line reports the convertible reserves as published in the Quarterly Bulletin of the Bank of England. This information was available to market participants. The stacked columns show the actual daily dollar reserves. Spikes appear at monthly intervals, indicating the short-term borrowing that was used to ensure the reserves level was high enough on reporting days.
Figure 1. Published EEA convertible currency reserves vs. actual dollar reserves held at the EEA, 1962-1971.
The Bank borrowed dollars shortly before the reserve reporting day by drawing on swap lines (similar to the Fed in 2007 https://voxeu.org/article/central-bank-swap-lines). Swap drawings could be used overnight. Table 1 illustrates how window dressing worked using data from the EEA ledgers available at the archives of the Bank. As an example, on Friday, 31 May 1968, the Bank borrowed over £450 million – an increase in reserves of 171%. The swap operation was reversed the next working day, and on Tuesday the reserves level was back to where it was before reporting. The details of these operations emphasise how swap networks were short-term instruments to manipulate published figures.
Table 1. Daily entry in the EEA ledger showing how window dressing worked
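A quick back-of-the-envelope check on the scale of the 31 May 1968 example, assuming the 171 per cent figure is measured relative to the pre-borrowing reserve level (an assumption about how the ratio is defined):

```python
# Implied reserve levels around the 31 May 1968 reporting day, assuming the
# reported 171% rise is measured against the pre-borrowing level.
borrowed = 450.0      # £ million drawn on the swap lines (approximate)
increase = 1.71       # 171% rise in reserves
underlying = borrowed / increase
print(f"implied reserves before the drawing: ~£{underlying:.0f} million")
print(f"reported on window-dressing day:     ~£{underlying + borrowed:.0f} million")
# On this reading, well over half of the figure shown on the reporting day was
# overnight borrowing, unwound on the next working day.
```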
The Bank of England’s window dressing was done in collaboration with the Fed. Both discussed reserve figures before the Bank published them. During most of the 1960s, the Bank and the Fed were in daily contact about exchange rate matters. Records of these phone conversations are sparse at the Bank, but the Fed kept daily records (Archives of the Fed in New York, references 617031 and 617015).
During the 1960s, collaboration between the two central banks intensified. The Bank consulted the Fed on the exact wording of the reserve publication (Naef, 2019) and the Fed communicated with the Bank about the swap position, to ensure consistency between their public statements. Indeed, the Fed sent excerpts of minutes to the Bank to allow excision of anything mentioning window dressing (Archives of the Fed in New York, reference 107320). Thus, in December 1971, before publishing the minutes of the Federal Open Market Committee (FOMC) for 1966, Charles Coombs (a leading figure at the Fed) consulted Richard Hallet (Chief Cashier at the Bank):
‘You will recall that when you visited us in December 1969, we invited you to look over selected excerpts from the 1966 FOMC minutes involving certain delicate points that we thought you might wish to have deleted from the published version. We have subsequently deleted all of the passages which you found troublesome. Recently, we have made a final review of the minutes and have turned up one other passage that I am not certain you had an opportunity to go over. I am enclosing a copy of the excerpt, with possible deletions bracketed in red ink.’
Source: Letter from Coombs to Hallet, New York Federal Reserve Bank archives, 1 December 1971, Box 107320.
Coombs suggested deleting passages in which some FOMC members criticised window dressing, while other members suggested the Bank would get better results ‘if they reported their reserve position accurately than if they attempted to conceal their true reserve position’ (https://fraser.stlouisfed.org/scribd/?item_id=22913&filepath=/docs/historical/FOMC/meetingdocuments/19660628Minutesv.pdf). However, MacLaury (FOMC) stressed that there was a risk of ‘setting off a cycle of speculation against sterling’ if the Bank published a loss of $200 million, which was ‘large for a single month’ in comparison with what was published the previous month.
The history of the Bank’s window dressing is a reminder of the difficulties central banks face in managing reserves – a challenge that persists today, as investors closely monitor the reserves of the People’s Bank of China.
To contact the author: firstname.lastname@example.org
Capie, Forrest. 2010. The Bank of England: 1950s to 1979. Cambridge: Cambridge University Press.
Naef, Alain. 2019. “Dirty Float or Clean Intervention? The Bank of England in the Foreign Exchange Market.” Lund Papers in Economic History. General Issues, no. 2019:199. http://lup.lub.lu.se/record/dfe46e60-6dfb-4380-8354-e7b699ed8ef9.
Schenk, Catherine. 2010. The Decline of Sterling: Managing the Retreat of an International Currency, 1945–1992. Cambridge University Press.
by Anna Missiaia and Kersten Enflo (Lund University)
This research is due to be published in the Economic History Review and is currently available on Early View.
For a long time, scholars have thought about regional inequality merely as a by-product of modern economic growth: following a Kuznets-style interpretation, the front-running regions increase their income levels and regional inequality during industrialization; only when the other regions catch up does overall regional inequality decrease, completing the inverted-U shaped pattern. But early empirical research on this theme was largely focused on the 20th century, ignoring the industrial take-off of many countries (Williamson, 1965). More recent empirical studies have pushed the temporal boundary back to the mid-19th century, finding that inequality in regional GDP was already high at the outset of modern industrialization (see for instance Rosés et al., 2010 on Spain and Felice, 2018 on Italy).
The main constraint for taking the estimations well into the pre-industrial period is the availability of suitable regional sources. The exceptional quality of Swedish sources allowed us for the first time to estimate a dataset of regional GDP for a European economy going back to the 16th century (Enflo and Missiaia, 2018). The estimates used here for 1571 are largely based on a one-off tax proportional to the yearly production: the Swedish Crown imposed this tax on all Swedish citizens in order to pay a ransom for the strategic Älvsborg castle that had just been conquered by Denmark. For the period 1750-1850, the estimates rely on standard population censuses. By connecting the new series to the existing ones from 1860 onwards by Enflo et al. (2014), we obtain the longest regional GDP series for any given country.
We find that inequality increased dramatically between 1571 and 1750 and remained high until the mid-19th century. Thereafter, it declined during the modern industrialization of the country (Figure 1). Our results reject the traditional view that regional divergence can only originate during an industrial take-off.
Figure 1. Coefficient of variation of GDP per capita across Swedish counties, 1571-2010.
Figure 2 shows the relative disparities in four benchmark years. While the country appeared relatively equal in 1571, between 1750 and 1850 both the mining districts of central and northern Sweden and the port cities of Stockholm and Gothenburg pulled ahead.
Figure 2. The relative evolution of GDP per capita, 1571-1850 (Sweden=100).
The second part of the paper is devoted to the study of the drivers of pre-industrial regional inequality. Decomposing the Theil index for GDP per worker, we show that regional inequality was driven by structural change, meaning that regions diverged because they specialized in different sectors. A handful of regions specialized in either early manufacturing or mining, both with much higher productivity per worker than agriculture.
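As a minimal sketch of what such a decomposition involves, the snippet below splits a Theil index of GDP per worker into a ‘between-sector’ part (regions specialise in sectors with different productivity) and a ‘within-sector’ part, using invented numbers; the article’s own decomposition may differ in detail.

```python
# Theil decomposition of inequality in GDP per worker into between-sector and
# within-sector components. All numbers are invented for illustration.
import numpy as np

# rows = regions, columns = sectors (say, agriculture and early industry/mining)
gdp    = np.array([[8.0, 1.0],    # mostly agricultural region
                   [5.0, 6.0],    # mixed region
                   [2.0, 9.0]])   # specialised mining/manufacturing region
labour = np.array([[9.0, 1.0],
                   [6.0, 4.0],
                   [3.0, 5.0]])

Y, L = gdp.sum(), labour.sum()
y_sec, l_sec = gdp.sum(axis=0), labour.sum(axis=0)

# Between-sector part: sectors' GDP shares versus their labour shares.
between = np.sum((y_sec / Y) * np.log((y_sec / Y) / (l_sec / L)))

# Within-sector part: regional inequality inside each sector, GDP-weighted.
within = 0.0
for j in range(gdp.shape[1]):
    sy, sl = gdp[:, j] / y_sec[j], labour[:, j] / l_sec[j]
    within += (y_sec[j] / Y) * np.sum(sy * np.log(sy / sl))

print(f"Theil total {between + within:.3f} = between {between:.3f} + within {within:.3f}")
```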
To explain this different trajectory, we use a theoretical framework introduced by Strulik and Weisdorf (2008) in the context of the British Industrial Revolution: in regions with a higher share of GDP in agriculture, technological advancements lead to productivity improvements but also to a proportional increase in population, impeding the growth in GDP per capita as in a classic Malthusian framework. Regions with a higher share of GDP in industry, on the other hand, experienced limited population growth due to the increasing relative price of children, leading to a higher level of GDP per capita. Regional inequality in this framework arises from a different role of the Malthusian mechanism in the two sectors.
Our work speaks to a growing literature on the origin of regional divergence and represents the first effort to perform this type of analysis before the 19th century.
To contact the authors:
Enflo, K. and Missiaia, A., ‘Regional GDP estimates for Sweden, 1571-1850’, Historical Methods, 51(2018), 115-137.
Enflo, K., Henning, M. and Schön, L., ‘Swedish regional GDP 1855-2000: estimations and general trends in the Swedish regional system’, Research in Economic History, 30 (2014), pp. 47-89.
Felice, E., ‘The roots of a dual equilibrium: GDP, productivity, and structural change in the Italian regions in the long run (1871-2011)’, European Review of Economic History, (2018), forthcoming.
Rosés, J., Martínez-Galarraga, J. and Tirado, D., ‘The upswing of regional income inequality in Spain (1860–1930)’, Explorations in Economic History, 47(2010), pp. 244-257.
Strulik, H. and Weisdorf, J., ‘Population, food, and knowledge: a simple unified growth theory’, Journal of Economic Growth, 13 (2008), 195.
Williamson, J., ‘Regional Inequality and the Process of National Development: A Description of the Patterns’, Economic Development and Cultural Change 13(1965), pp. 1-84.
By Judy Stephenson (University College London)
How many days a year did people work in England before the Industrial Revolution? For those who don’t spend their waking hours desperate for sources to inform wages and GDP per capita over seven centuries, this question provokes an agreeable discussion about artisans, agriculture and tradition. Someone will mention EP Thompson and clocks or Saint Mondays. ‘Really that few?’ It’s quaint.
But, for those of us who do spend our waking hours desperate for sources to inform wages and GDP per capita over seven centuries, the question has evolved in the last few years into a debate about productivity and when modern economic growth began in an ‘industrious revolution’. A serious body of research in economic history has recently estimated increasing numbers of days worked from the late seventeenth century. Current estimates are that people worked about 270 days a year by 1700, rising to about 300 after 1750.
The uninitiated might think that estimates of something as important as the working year would be based on substantive evidence, but in fact most estimates of the working year that economic historians have been using for the last two decades don’t come from working records at all. They come from court depositions in which witnesses told the courts when they went to and left work, or from working out how many days a worker had to toil to afford a basket of consumption goods. This approach, pioneered by Jacob Weisdorf and Bob Allen in 2011, essentially holds welfare constant throughout history, and it is the key assumption made in a new paper on wages forthcoming from Jane Humphries and Jacob Weisdorf. Unsurprisingly for historians familiar with material showing the miserable conditions under which the poor toiled in eighteenth-century Britain, this calculation frequently leads to a high number of days worked. It also implies that Londoners, due to higher day wages, may have had slightly more leisure than rural workers. Both implications might appear counterintuitive.
Knowledgeable historians, such as John Hatcher, have pointed out that the idea that anyone had 270 days of paid work a year before the industrial revolution is fanciful. But unless there was an industrious revolution, and people did begin to work more days per year in market work – as Jan de Vries posited – the established evidence firmly implies that workers became worse off throughout the eighteenth century, because wage rates as measured by builders’ wages didn’t increase in line with inflation, and in fact builders earned even less than we thought.
My article, ‘Working days in a London construction team in the eighteenth century: evidence from St Paul’s Cathedral’, forthcoming in the Review, takes a different approach: it uses the actual working records of a team of masons working under William Kempster, who constructed the south-west tower of St Paul’s Cathedral. For five years in the 1700s, these archives are exceptionally detailed. They show that building was seasonal (it’s not like we didn’t know – it’s just that we had sort of forgotten), and stage dependent, so not all men could have worked all year. In fact, they didn’t. Surprisingly, for a stable firm at an established and large site, very few men worked for Kempster for more than about 27 weeks. Work was temporary and insecure, and working and employment relationships were casual.
If one were to take a crude average of the days each man worked in any year, it would be less than 150 days. To do so would be misleading, and that’s not what the paper claims, because men obviously worked for other employers too. But what the working patterns reveal is that unless men seamlessly moved from one employer to another, with no search costs or time in between, it would have been impossible for them to have worked 250 days a year. It’s more plausible that they were able to work between 200 and 220 days.
Moreover, the data show that men did not work the full six days per week on offer. The average number of days worked per week was only 5.2. This wasn’t because men did not work Saint Mondays (which are almost indiscernible in the records) but because they took idiosyncratic breaks. Only the foremen seem to have been able to sustain six days a week.
However, men who had a longer relationship with Kempster worked more days per year than the rest. This implies that stronger working relationships, or a consolidation of employer-worker relationships, might have led to an increase in the average number of days worked. However, architectural and construction historians generally think that consolidation in the industry did not occur until the 1820s. If there was an industrious revolution in the eighteenth century, it might not have happened for builders. If builders’ wages are representative – and that old assumption seems increasingly stretched these days – then the story for wages in the eighteenth century is even more pessimistic than before.
The evidence from working records presented in this article is still relatively fragmentary, but it clearly shows that holding welfare stable by calculating the number of days worked from consumption goods – as the Weisdorf/Humphries/Allen approach does – does not give us the whole story.
But then again, is it really plausible to hold welfare stable? The debate, and the scholarship, will no doubt continue.
To contact the author:
by Latika Chaudhary (Naval Postgraduate School) and Anand V. Swamy (Williams College)
This research is due to be published in the Economic History Review and is currently available on Early View.
In the late 19th century the British-Indian government (the Raj) became preoccupied with default on debt and the consequent transfer of land in rural India. In many regions Raj officials made the following argument: British rule had created or clarified individual property rights in land, which had for the first time made land available as collateral for debt. Consequently, peasants could now borrow up to the full value of their land. The Raj had also replaced informal village-based forms of dispute resolution with a formal legal system operating outside the village, which favored the lender over the borrower. Peasants were spendthrift and naïve, and unable to negotiate the new formal courts created by British rule, whereas lenders were predatory and legally savvy. Borrowers were frequently defaulting, and land was rapidly passing from long-standing resident peasants to professional moneylenders who were often either immigrant, of another religion, or sometimes both. This would lead to social unrest and threaten British rule. To preserve British rule it was essential that one of the links in the chain be broken, even if this meant abandoning cherished notions of sanctity of property and contract.
The Punjab Land Alienation Act (PLAA) of 1900 was the most ambitious policy intervention motivated by this thinking. It sought to prevent professional moneylenders from acquiring the property of traditional landowners. To this end it banned, except under some conditions, the permanent transfer of land from an owner belonging to an ‘agricultural tribe’ to a buyer or creditor who was not from this tribe. Moreover, a lender who was not from an agricultural tribe could no longer seize the land of a defaulting debtor who was from an agricultural tribe.
The PLAA made direct restrictions on the transfer of land a respectable part of the policy apparatus of the Raj, and its influence persists to the present day. There is a substantial literature on the emergence of the PLAA, yet there is no econometric work on two basic questions regarding its impact. First, did the PLAA reduce the availability of mortgage-backed credit? Or were borrowers and lenders able to use various devices to evade the Act, thereby neutralizing it? Second, if less credit was available, what were the effects on agricultural outcomes and productivity? We use panel data methods to address these questions, for the first time, so far as we know.
Our work provides evidence regarding an unusual policy experiment that is relevant to a hypothesis of broad interest. It is often argued that ‘clean titling’ of assets can facilitate their use as collateral, increasing access to credit and leading to more investment and faster growth. Informed by this hypothesis, many studies estimate the effects of titling on credit and other outcomes, but they usually pertain to reforms that made assets more usable as collateral. The PLAA went in the opposite direction – it reduced the “collateralizability” of land, which should have reduced investment and growth, based on the argument we have described. We investigate whether it did.
To identify the effects of the PLAA, we assembled a panel dataset on 25 districts in Punjab from 1890 to 1910. Our dataset contains information on mortgages and sales of land, economic outcomes such as acreage and ownership of cattle, and control variables like rainfall and population. Because the PLAA targeted professional moneylenders, it should have reduced mortgage-backed credit more in places where they were bigger players in the credit market. Hence, we interact a measure of the importance of professional (that is, non-agricultural) moneylenders in the mortgage market with an indicator variable for the introduction of the PLAA, which takes the value of 1 from 1900 onward. As expected, we find that the PLAA contracted credit more in places where professional moneylenders played a larger role – compared to districts with no professional moneylenders. The PLAA reduced mortgage-backed credit by 48 percentage points more at the 25th percentile of our measure of moneylender importance and by 61 percentage points more at the 75th percentile.
However, this decrease of mortgage-backed credit in professional moneylender-dominated areas did not lead to lower acreage or less ownership of cattle. In short, the PLAA affected credit markets as we might expect without undermining agricultural productivity. Because we have panel data, we are able to account for potential confounding factors such as time-invariant unobserved differences across districts (using district fixed effects), shocks common to all districts in a given year (using year effects), and the possibility that districts were trending differently independently of the PLAA (using district-specific time trends).
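A stylised version of this specification is sketched below; the file name, variable names and functional form are assumptions for illustration, not the authors’ actual estimation code.

```python
# Difference-in-differences style panel specification: the interaction of
# pre-PLAA moneylender importance with a post-1900 indicator, plus district
# fixed effects, year effects and district-specific linear trends.
# File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("punjab_districts_1890_1910.csv")   # district x year panel
df["year_trend"] = df["year"] - 1890                  # linear trend variable

model = smf.ols(
    "log_mortgage_credit ~ moneylender_share:post_plaa"   # effect of interest
    " + rainfall + log_population"                         # time-varying controls
    " + C(district) + C(year)"                             # district and year effects
    " + C(district):year_trend",                           # district-specific trends
    data=df,
)
# The main effects of moneylender_share and post_plaa are absorbed by the
# district and year effects respectively, so only the interaction remains.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["district"]})
print(result.summary())
```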
British officials provided a plausible explanation for the non-impact of PLAA on agricultural production: lenders had merely become more judicious – they were still willing to lend for productive activity, but not for ‘extravagant’ expenditures, such as social ceremonies. There may be a general lesson here: policies that make it harder for lenders to recover money may have the beneficial effect of encouraging due diligence.
To contact the authors:
by Bishnupriya Gupta (University of Warwick and CAGE)
There has been much discussion in recent years about India’s growth failure in the first 30 years after independence in 1947. India became a highly regulated economy and withdrew from the global market. This led to inefficiency and low growth. The architect of Indian planning, Jawaharlal Nehru, the first prime minister, did not put India on an East Asian path. By contrast, the last decade of the 20th century saw a reintegration into the global economy, and today India is one of the fastest growing economies.
Any analysis of Indian growth and development that starts in 1947 is deeply flawed. It ignores the history of development and the impact of colonization. This paper takes a long-run view of India’s economic development and argues that the Indian economy stagnated under colonial rule and that a reversal came with independence. Although growth was slow in comparison to East Asia, the Nehruvian legacy put India on a growth path.
Tharoor (2017), in his book Inglorious Empire, argues that Britain’s industrial revolution was built on the destruction of Indian textile industries and that British rule turned India into an exporter of agricultural goods. A different view of colonial rule comes from Niall Ferguson in his book Empire: How Britain Made the Modern World. Ferguson claimed that even if British rule did not increase Indian incomes, things might have been much worse under a restored Mughal regime in 1857. The British built the railways and connected India to the rest of the world.
Neither of these views is based on statistical evidence. Data on GDP per capita (Figure 1) show that there was slow decline and stagnation over a long period. Evidence on wages and per capita GDP shows a prosperous economy in 1600 under the Mughal Emperor Akbar. Living standards began to decline from the middle of the 17th century, before colonization, and the decline continued as the East India Company gained territorial control in 1757. It is important to note that the decline coincided with increased integration with international markets and rising trade in textiles to Europe. In 1857, India became a part of the global economy of the British Empire. Indian trade volume increased, but from an exporter of industrial products India became an exporter of food and raw materials. Per capita income stagnated even as trade increased, the colonial government built a railway network, and British entrepreneurs owned large parts of the industrial sector. In 1947, the country was one of the poorest in the world. Figure 1 below also tells us that growth picked up after independence as India moved towards regulation and restrictions on trade and private investment.
What explains the stagnation in income prior to independence? The colonial government invested very little in the main sector, agriculture. The bulk of British investment went to the railways, not to irrigation. The railways initially connected the hinterland with the ports, but over time they integrated markets, reducing price variability across markets. However, they did not contribute to increasing agricultural productivity. Without large investment in irrigation, output per acre declined in areas that did not get canals. Industry, on the other hand, was the fastest growing sector, but it employed only 10 per cent of the work force. Stagnation of the economy under colonial rule had little to do with trade.
The Indian growth reversal began in independent India with the regulation of trade and industry and a break with the global economy. For the first time in the 20th century, the Indian economy began to grow, as the graph shows, with investment in capital goods industries and agricultural infrastructure. Industrial growth and the green revolution in agriculture moved the economy from stagnation to growth. This growth slowed down, but the economy did not stagnate as in the colonial period. Following economic reforms after the 1980s, India has entered a high growth regime. The initial increase in growth was a response to the removal of restrictions on domestic private investment, well before reintegration into the global economy in the 1990s. The foundations for growth were laid in the first three decades after independence.
The institutional legacy of British rule had long-run consequences. One example is an education policy that prioritized investment in secondary and tertiary education, creating a small group with higher education but few with basic primary schooling. In 1947, less than one-fifth of the population had basic education. The bias towards higher education continued after independence and has created an advantage for the service sector. There are lessons from history for understanding Indian growth after independence.
To contact the author: B.Gupta@warwick.ac.uk
by Fernando Collantes (University of Zaragoza and Instituto Agroalimentario de Aragón)
This blog is part of a larger research paper published in the Economic History Review.
Consumers in the Northern hemisphere are feeling increasingly uneasy about their industrial diet. Few question that during the twentieth century the industrial diet helped us solve the nutritional problems related to scarcity. But there is now growing recognition that the triumph of the industrial diet triggered new problems related to abundance, among them obesity, excessive consumerism and environmental degradation. Currently, alternatives ranging from organic food to products bearing geographical-‘quality’ labels struggle to transcend the industrial diet. Frequently, these alternatives face a major obstacle: their relatively high price compared to mass-produced and mass-retailed food.
The research that I have conducted examines the literature on nutritional transitions, food regimes and food history, and positions it within present-day debates on diet change in affluent societies. I employ a case study of the growth in mass consumption of dairy products in Spain between 1965 and 1990. In the mid-1960s, dairy consumption was very low in Spain and many suffered from calcium deficiency. Subsequently, there was rapid growth in consumption. Milk, especially, became an integral part of the population’s diet. Alongside mass consumption there was also mass production and complementary technical change. In the early 1960s, most consumers only drank raw milk, but by the 1990s milk was being sterilised and pasteurised to standard specifications by an emergent national dairy industry.
In the early 1960s, the regular purchase of milk was too expensive for most households. By the early 1990s, an increase in household incomes, complemented by (alleged) price reductions generated by dairy industrialization, facilitated the rapid growth in milk consumption. A further factor aiding consumption was changing consumer preferences. Previously, consumers’ perceptions of milk were affected by recurrent episodes of poisoning and fraud. The process of dairy industrialization ensured a greater supply of ‘safe’ milk, and this encouraged consumers to use their increased real incomes to buy more milk. ‘Quality’ milk, meaning milk that was safe to consume, became the main theme in the advertising campaigns employed by milk processors (Figure 1).
Figure 1. Advertisement by La Lactaria Española in the early 1970s.
What are the implications of my research for contemporary debates on food quality? First, the transition towards a diet richer in organic foods and in foods characterised by short supply chains, artisan-like production and geographical-quality labels has more than niche relevance. There are historical precedents (such as the one studied in this article) of large sections of the populace willing to pay premium prices for food products that are in some sense perceived as qualitatively superior to other, more ‘conventional’ alternatives. If it happened in the past, it can happen again. Indeed, new qualitative substitutions are already taking place. The key issue is the direction of this substitution. Will consumers use their affluence to ‘green’ their diet? Or will they use higher incomes to purchase more highly processed foods — with possibly negative implications for public health and environmental sustainability? This juncture between food-system dynamics and public policy is crucial. As Fernand Braudel argued, it is the extraordinary capacity for adaptation that defines capitalism. My research suggests that we need public policies that reorient food capitalism towards socially progressive ends.
To contact the author: email@example.com
by Jean-Pascal Bassino (ENS Lyon) and Pierre van der Eng (Australian National University)
This blog is part of a larger research paper published in the Economic History Review.
In the ‘great divergence’ debate, China, India, and Japan have been used to represent the Asian continent. However, their development experience is not likely to be representative of the whole of Asia. The countries of Southeast Asia were relatively underpopulated for a considerable period. Very different endowments of natural resources (particularly land) and labour were key parameters that determined economic development options.
Maddison’s series of per-capita GDP in purchasing power parity (PPP) adjusted international dollars, based on a single 1990 benchmark and backward extrapolation, indicate that a divergence took place in 19th century Asia: Japan was well above other Asian countries in 1913. In 2018 the Maddison Project Database released a new international series of GDP per capita that accommodate the available historical PPP-based converters. Due to the very limited availability of historical PPP-based converters for Asian countries, the 2018 database retains many of the shortcomings of the single-year extrapolation.
Maddison’s estimates indicate that Japan’s GDP per capita in 1913 was much higher than in other Asian countries, and that Asian countries started their development experiences from broadly comparable levels of GDP per capita in the early nineteenth century. This implies that an Asian divergence took place in the 19th century as a consequence of Japan’s economic transformation during the Meiji era (1868-1912). There is now growing recognition that the use of a single benchmark year, and the choice of that particular year, may influence estimated historical levels of GDP per capita across countries. The relative levels of Asian countries based on Maddison’s estimates of per capita GDP are not confirmed by other indicators such as real unskilled wages or the average height of adults.
Our study uses available estimates of GDP per capita in current prices from historical national accounting projects, and estimates PPP-based converters and PPP-adjusted GDP with multiple benchmark years (1913, 1922, 1938, 1952, 1958, and 1969) for India, Indonesia, Korea, Malaya, Myanmar (then Burma), the Philippines, Sri Lanka (then Ceylon), Taiwan, Thailand and Vietnam, relative to Japan. China is added on the basis of other studies. PPP-based converters are used to calculate GDP per capita in constant PPP yen. The indices of GDP per capita in Japan and the other countries were expressed as a proportion of Japan’s GDP per capita during the years 1910–70 in 1934–6 yen, and then converted to 1990 international dollars by relying on a PPP-adjusted Japanese series comparable to the US GDP series. Figure 1 presents the resulting series for the Asian countries.
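The arithmetic of the conversion chain can be illustrated with a stylised, single-benchmark example; all of the numbers below are invented, and the actual estimates involve several benchmark years and interpolation between them.

```python
# Stylised single-benchmark PPP conversion, with invented numbers.
# Step 1: a PPP converter turns GDP per capita in local currency into 1934-36 yen.
gdp_pc_local = 120.0           # hypothetical GDP per capita in local currency
ppp_local_per_yen = 1.5        # hypothetical PPP converter: local units per yen
gdp_pc_yen = gdp_pc_local / ppp_local_per_yen

# Step 2: express the result as a proportion of Japan's GDP per capita in the same prices.
japan_gdp_pc_yen = 250.0       # hypothetical Japanese GDP per capita, 1934-36 yen
relative_to_japan = gdp_pc_yen / japan_gdp_pc_yen

# Step 3: scale by Japan's level in 1990 international dollars to get comparable units.
japan_gdp_pc_1990usd = 2450.0  # hypothetical
gdp_pc_1990usd = relative_to_japan * japan_gdp_pc_1990usd
print(f"GDP per capita ~ {gdp_pc_1990usd:.0f} 1990 international dollars")
```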
Figure 1. GDP per capita in selected Asian countries, 1910–1970 (1934–6 Japanese yen)
The conventional view dates the start of the divergence to the nineteenth century. Our study identifies the First World War and the 1920s as the era during which the little divergence in Asia occurred. During the 1920s, most countries in Asia — except Japan — depended significantly on exports of primary commodities. The growth experience of Southeast Asia seems to have been largely characterised by market integration within national economies and by the mobilisation of hitherto underutilised resources (labour and land) for export production. Particularly in the land-abundant parts of Asia, the opening-up of land for agricultural production led to economic growth.
Commodity price changes may have become debilitating when their volatility increased after 1913. This was followed by episodes of import-substituting industrialisation, particularly after 1945. While Japan rapidly developed its export-oriented manufacturing industries from the First World War onwards, other Asian countries increasingly had inward-looking economies. This pattern lasted until the 1970s, when some Asian countries followed Japan on a path of export-oriented industrialisation and economic growth. For some countries this was a staggered process that lasted well into the 1990s, when the World Bank labelled this development the ‘East Asian miracle’.
To contact the authors:
Bassino, J-P. and Van der Eng, P., ‘Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69’, Economic History Review (forthcoming).
Fouquet, R. and Broadberry, S., ‘Seven centuries of European economic growth and decline’, Journal of Economic Perspectives, 29 (2015), pp. 227–44.
Fukao, K., Ma, D., and Yuan, T., ‘Real GDP in pre-war Asia: a 1934–36 benchmark purchasing power parity comparison with the US’, Review of Income and Wealth, 53 (2007), pp. 503–37.
Inklaar, R., de Jong, H., Bolt, J., and van Zanden, J. L., ‘Rebasing “Maddison”: new income comparisons and the shape of long-run economic development’, Groningen Growth and Development Centre Research Memorandum no. 174 (2018).
Link to the website of the Southeast Asian Development in the Long Term (SEA-DELT) project: https://seadelt.net
by Laura Maravall Buckwalter (University of Tübingen)
This research is due to be published in the Economic History Review and is currently available on Early View.
It is often claimed that access to land and labour during the colonial years determined land redistribution policies and labour regimes that had persistent, long-run effects. For this reason, the amounts of land and labour available in a colonized country at a fixed point in time are being included more frequently in regression frameworks as proxies for the types of colonial modes of production and institutions. However, despite the relevance of these variables within the scholarly literature on settlement economies, little is known about the way in which they changed during the process of settlement. This is because most studies focus on long-term effects and tend to exclude relevant inter-country heterogeneities that should be included in any assessment of the impact of colonization on economic development.
In my article, I show how colonial land policy and settler modes of production responded differently within a colony. I examine rural settlement in French Algeria at the start of the 1900s and focus on cereal cultivation, the crop that allowed the arable frontier to expand. I rely upon the literature that reintroduces the notion of ‘land frontier expansion’ into the understanding of settler economies. By including the frontier in my analysis, it is possible to assess how colonial land policy and settler farming adapted to very different local conditions. For example, as settlers moved into the interior regions, they encountered growing land aridity. I argue that the expansion of rural settlement into the frontier was strongly dependent upon the adoption of modern ploughs, intensive labour (modern ploughs were not labour saving) and larger cultivated fields (because they removed fallow areas), which, in turn, had a direct impact on colonial land policy and settler farming.
Figure 1. Threshing wheat in French Algeria (Zibans)
My research takes advantage of annual agricultural statistics reported by the French administration at the municipal level in Constantine for the years 1904/05 and 1913/14. The data are analysed in a cross-section and panel regression framework and, although the dataset provides a snapshot at only two points in time, the ability to identify the timing of settlement after the 1840s for each municipality provides a broader temporal framework.
Figure 2. Constantine at the beginning of the 1900s
The results illustrate how the limited amount of arable land on the Algerian frontier forced colonial policymakers to relax restrictions on the amount of land owned by settlers. This change in policy occurred because expanding the frontier into less fertile regions and consolidating settlement required agricultural intensification – changes in the frequency of crop rotation and more intensive ploughing. These techniques required larger fields and were therefore incompatible with the French colonial ideal of establishing a small-scale, family farm type of settler economy.
My results also indicate that settler farmers were able to adopt more intensive techniques mainly by relying on the abundant indigenous labour force. The man-to-cultivable land ratio, which increased after the 1870s due to continuous indigenous population growth and colonial land expropriation measures, eased settler cultivation, particularly on the frontier. This confirms that the availability of labour relative to land is an important variable that should be taken into consideration to assess the impact of settlement on economic development. My findings are in accord with Lloyd and Metzer (2013, p. 20), who argue that, in Africa, where the indigenous peasantry was significant, the labour surplus allowed low wages and ‘verged on servility’, leading to a ‘segmented labour and agricultural production system’. Moreover, it is precisely the presence of a large indigenous population relative to that of the settlers, and the reliance of settlers upon the indigenous labour and the state (to access land and labour), that has allowed Lloyd and Metzer to describe Algeria (together with Southern Rhodesia, Kenya and South Africa) as having a “somewhat different type of settler colonialism that emerged in Africa over the 19th and early 20th Centuries” (2013, p.2).
In conclusion, it is reasonable to assume that, as rural settlement gains ground within a colony, local endowments and cultivation requirements change. The case of rural settlement in Constantine reveals how settler farmers and colonial restrictions on ownership size adapted to the varying amounts of land and labour.
Ageron, C. R. (1991). Modern Algeria: a history from 1830 to the present (9th ed). Africa World Press.
Frankema, E. (2010). The colonial roots of land inequality: geography, factor endowments, or institutions? The Economic History Review, 63(2):418–451.
Frankema, E., Green, E., and Hillbom, E. (2016). Endogenous processes of colonial settlement: the success and failure of European settler farming in Sub-Saharan Africa. Revista de Historia Económica-Journal of Iberian and Latin American Economic History, 34(2), 237-265.
Easterly, W., & Levine, R. (2003). Tropics, germs, and crops: how endowments influence economic development. Journal of monetary economics, 50(1), 3-39.
Engerman, S. L., and Sokoloff, K. L. (2012). Economic development in the Americas since 1500: endowments and institutions. Cambridge University Press.
Lloyd, C. and Metzer, J. (2013). Settler colonization and societies in world history: patterns and concepts. In Settler Economies in World History, Global Economic History Series 9:1.
Lützelschwab, C. (2007). Populations and Economies of European Settlement Colonies in Africa (South Africa, Algeria, Kenya, and Southern Rhodesia). In Annales de démographie historique (No. 1, pp. 33-58). Belin.
Lützelschwab, C. (2013). Settler colonialism in Africa. In Lloyd, C., Metzer, J., and Sutch, R. (eds.), Settler economies in world history. Brill.
Willebald, H., and Juambeltz, J. (2018). Land Frontier Expansion in Settler Economies, 1830–1950: Was It a Ricardian Process? In Agricultural Development in the World Periphery (pp. 439-466). Palgrave Macmillan, Cham.