Turkey’s Experience with Economic Development since 1820

by Şevket Pamuk, Boğaziçi (Bosphorus) University

This research is part of a broader article published in the Economic History Review.

A podcast of Sevket’s Tawney lecture can be found here.

 


New Map of Turkey in Europe, Divided into its Provinces, 1801. Available at Wikimedia Commons.

The Tawney lecture, based on my recent book, Uneven centuries: economic development of Turkey since 1820 (Princeton University Press, 2018), examined the economic development of Turkey from a comparative global perspective. Using GDP per capita and other data, the book showed that Turkey’s record in economic growth and human development since 1820 has been close to the world average and a little above the average for developing countries. The early focus of the lecture was on the proximate causes — average rates of investment, below-average rates of schooling, low rates of total productivity growth, and the low technology content of production — which provide important insights into why improvements in GDP per capita were not higher. For more fundamental explanations I emphasized the role of institutions and institutional change. Since the nineteenth century, Turkey’s formal economic institutions have been influenced by international rules which did not always support economic development. Turkey’s elites also made extensive changes in formal political and economic institutions. However, these institutions provide only part of the story: the direction of institutional change also depended on the political order and the degree of understanding between different groups and their elites. When political institutions could not manage the recurring tensions and cleavages between the different elites, economic outcomes suffered.

There are a number of ways in which my study reflects some of the key trends in the historiography of recent decades. For example, until fairly recently, economic historians focused almost exclusively on the developed economies of western Europe, North America, and Japan. Lately, however, economic historians have been shifting their focus to developing economies. Moreover, as part of this reorientation, considerable effort has been expended on constructing long-run economic series, especially GDP and GDP per capita, as well as series on health and education. In this context, I have constructed long-run series for the area within the present-day borders of Turkey. These series rely mostly on official estimates for the period after 1923 and make use of a variety of evidence for the Ottoman era, including wages, tax revenues, and foreign trade series. In common with the series for other developing countries, many of my calculations involving Turkey are subject to larger margins of error than similar series for developed countries. Nonetheless, they provide insights into the developmental experience of Turkey and other developing countries that would not have been possible two or three decades ago. Finally, in recent years, economists and economic historians have made an important distinction between the proximate causes and the deeper determinants of economic development. While literature on the proximate causes of development focuses on investment, accumulation of inputs, technology, and productivity, discussions of the deeper causes consider the broader social, political, and institutional environment. Both sets of arguments are utilized in my book.

I argue that an interest-based explanation can address both the causes of long-run economic growth and its limits. Turkey’s formal economic institutions and economic policies underwent extensive change during the last two centuries. In each of the four historical periods I define, Turkey’s economic institutions and policies were influenced by international or global rules which were enforced either by the leading global powers or, more recently, by international agencies. Additionally, since the nineteenth century, elites in Turkey made extensive changes to formal political institutions. In response to European military and economic advances, the Ottoman elites adopted a programme of institutional changes that mirrored European developments; this programme continued during the twentieth century. Such fundamental changes helped foster significant increases in per capita income as well as major improvements in health and education.

But it is also necessary to examine how these new formal institutions interacted with the process of economic change – for example, changing social structure and variations in the distribution of power and expectations — to understand the scale and characteristics of growth that the new institutional configurations generated.

These interactions were complex. It is not easy to ascribe the outcomes created in Turkey during these two centuries to a single cause. Nonetheless, it is safe to state that in each of the four periods, the successful development of new institutions depended on the state making use of the different powers and capacities of the various elites. More generally, economic outcomes depended closely on the nature of the political order and the degree of understanding between different groups in society and the elites that led them. However, one of the more important characteristics of Turkey’s social structure has been the recurrence of tensions and cleavages between its elites. While they often appeared to be based on culture, these tensions overlapped with competing economic interests which were, in turn, shaped by the economic institutions and policies generated by the global economic system. When political institutions could not manage these tensions well, Turkey’s economic outcomes remained close to the world average.

Trade in the Shadow of Power: Japanese Industrial Exports in the Interwar years

By Alejandro Ayuso Díaz and Antonio Tena Junguito (Carlos III University of Madrid)

The history of international trade provides numerous examples of trade in the ‘shadow of power’ (Findlay and O’Rourke 2007). Here we argue that the power of the Japanese empire was as important as factor endowments, preferences, and technology to the expansion of trade during the interwar years. Following Garfinkel et al. (2012), the shadow of power that we discuss is based on the use or threat of violence or conquest, which depends on the military capabilities of states.

Figure 1: Japan and World Manufacturing Export Performance. Source: Japan and World comparative manufacture exports in volume (1953=100) from UN Historical Trade Statistics.

Japan was a latecomer to 20th-century industrialization, but during the interwar years, and especially in the 1930s, it was able to activate a complex and aggressive industrialization policy to accelerate the modernization of its industry. This policy consisted of import substitution and exports of manufactures to its region of influence. This newly created empire was very efficient in developing a peculiar imperial trade in the shadow of power throughout East and Southeast Asia in conjunction with a more aggressive imperial regional policy through conquest.

The trade generation capacity of the Japanese empire during the interwar years was much higher than that suggested by Mitchener and Weidenmier (2008) for the preceding period (1870-1913). However, some caution is needed in making this comparison, because the two sets of estimates rest on different underlying statistics. Membership of the Japanese empire increased trade by more than ten times the increase associated with the British, German, and French empires during this period, and by twice that for the US and Spanish empires. Consequently, it might be argued that our coefficients are more prominent because they capture the stronger intra-bloc bias that emerged after the Great Depression.

Employing a granular database consisting of Japanese exports to 117 countries over 1,135 products at six different benchmarks (1912, 1915, 1925, 1929, 1932 and 1938), we are able to demonstrate that the expansion of Japanese exports during the interwar period was facilitated by the exploitation of formal and informal imperial links, which exerted a bigger influence on exports than productivity increases.

Figure 2: Japanese total manufacturing exports by skills and region. Source: Annual Returns of the Foreign Trade of the Empire of Japan.

a) Manufacturing exports by skills

b) High-skilled exports by region

 

The main characteristic of this trade expansion between 1932 and 1938 was high-skill exports directed towards Japanese colonies. Additional evidence indicates that Japan did not enjoy comparative advantage in products with limited export-market potential. Colonial infrastructure, building, and urbanization were used as exclusive markets for high-skill exports and became one of the main drivers of Japanese export expansion and its modern industrialization process.

Trade blocs in the interwar years were used as instruments of imperial power to foster exports and as a substitute for productivity in encouraging industrial production. In that sense, Japan’s total exports in 1938 were between 28% and 47% higher than in 1912 thanks to imperial mechanisms. The figure is much higher when we capture the imperial effect on high-skill exports (between 66% and 76% higher thanks to imperial connections). The quoted figures are based on a counterfactual comparing exports without the empire to those obtained via imperial mechanisms.
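Percentage figures of this kind are typically recovered by exponentiating a dummy coefficient in a log-linear, gravity-style trade model. A minimal sketch of that conversion, with made-up coefficients (chosen only because they imply roughly the quoted 28% and 47%; they are not taken from the paper):

```python
import math

def dummy_pct_effect(beta: float) -> float:
    """Percentage trade effect implied by a dummy coefficient in a log-linear model."""
    return (math.exp(beta) - 1) * 100

# Hypothetical empire-membership coefficients (illustration only)
for beta in (0.25, 0.385):
    print(f"beta = {beta:.3f} -> {dummy_pct_effect(beta):.0f}% more exports")
```

Because the model is estimated in logs, a coefficient of 0.25 does not mean a 25% effect; the exponential transformation is what turns it into roughly 28%.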

We believe that our results demonstrate that the colonial trade bias mechanism used by imperialist countries was inversely related to productivity. The implicit counterfactual hypothesis is that, without imperial intervention in the region, Japan would not have expanded its high-skill exports and would not have exported such a variety of new products. In other words, Japan’s industrialisation process would have been much less pronounced.

 

References

Ayuso-Diaz, A. and Tena-Junguito, A. (2019): “Trade in the Shadow of Power: Japanese Industrial Exports in the Interwar years”. Economic History Review (forthcoming).

Findlay, R. and O’Rourke, K. (2007). Power and Plenty. Princeton, NJ: Princeton University Press.

Garfinkel, M., Skaperdas, S., and Syropoulos, C. (2012). ‘Trade in the Shadow of Power’. In Garfinkel, M. and Skaperdas, S. (eds.), Oxford Handbook of the Economics of Peace and Conflict. Oxford: Oxford University Press.

Mitchener, K. J., & Weidenmier, M. (2008). Trade and empire. The Economic Journal, 118(533), 1805-1834.

Ritschl, A. & Wolf, N. (2003). “Endogeneity of Currency Areas and Trade Blocs: Evidence from the Inter-war Period,” CEPR Discussion Papers 4112.

 

To contact the authors:

Alejandro Ayuso Díaz (aayuso@clio.uc3m.es)

Antonio Tena Junguito (antonio.tena@uc3m.es)

It is only cheating if you get caught – Creative accounting at the Bank of England in the 1960s

by Alain Naef (Postdoctoral fellow at the University of California, Berkeley)

This research was presented at the EHS conference in Keele in 2018 and is available as a working paper here. It is also available as an updated 2019 version here.

 

The Bank of England. Available at Wikimedia Commons.

The 1960s were a period of crisis for the pound. Britain was on a fixed exchange rate system and needed to defend its currency with intervention on the foreign exchange market. To avoid a crisis, the Bank of England resorted to ‘window dressing’ the published reserve figures.

In the 1960s, the Bank came under pressure from two sides: first, publication of the Radcliffe report (https://en.wikipedia.org/wiki/Radcliffe_report) forced publication of more transparent accounts. Second, with the removal of capital controls in 1958, the Bank came under attack from international speculators (Schenk 2010). These contradictory pressures put the Bank in an awkward position. It needed to publish its reserve position (holdings of dollars and gold), but it recognised that doing so could trigger a run on sterling, thereby creating a self-fulfilling currency crisis (see Krugman: http://www.nber.org/chapters/c11032.pdf).

For a long time, the Bank had a reputation for the obscurity of its accounts and its lack of transparency. Andy Haldane (Chief Economist at the Bank) recognised that, for ‘most of [its] history, opacity has been deeply ingrained in central banks’ psyche’ (https://www.bankofengland.co.uk/speech/2017/a-little-more-conversation-a-little-less-action). One Federal Reserve (Fed) memo noted that the Bank of England took ‘a certain pride in pointing out that hardly anything can be inferred by outsiders from their balance sheet’, and another that ‘it seems clear that the Bank of England is being pushed – by much public criticism – into giving out more information.’ However, the Bank did eventually publish reserve figures at a quarterly, and then monthly, frequency (Figure 1).

Transparency about the reserves created the risk of a currency crisis, so in late 1966 the Bank developed a strategy for reporting levels that would not cause a crisis (Capie 2010). Figure 1 illustrates how ‘window dressing’ worked. The solid line reports the convertible reserves as published in the Quarterly Bulletin of the Bank of England. This information was available to market participants. The stacked columns show the actual daily dollar reserves. Spikes appear at monthly intervals, indicating the short-term borrowing that was used to ensure the reserves level was high enough on reporting days.

 

Figure 1. Published EEA convertible currency reserves vs. actual dollar reserves held at the EEA, 1962-1971.


 

The Bank borrowed dollars shortly before the reserve reporting day by drawing on swap lines (similar to the Fed in 2007 https://voxeu.org/article/central-bank-swap-lines). Swap drawings could be used overnight. Table 1 illustrates how window dressing worked using data from the EEA ledgers available at the archives of the Bank. As an example, on Friday, 31 May 1968, the Bank borrowed over £450 million – an increase in reserves of 171%. The swap operation was reversed the next working day, and on Tuesday the reserves level was back to where it was before reporting. The details of these operations emphasise how swap networks were short-term instruments to manipulate published figures.
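The scale of the operation can be checked with back-of-the-envelope arithmetic from the two figures quoted above (roughly £450 million drawn, a 171 per cent jump); the implied pre-swap level below is an illustration, not a ledger figure:

```python
# Back-of-the-envelope check on the window-dressing arithmetic above.
swap_drawing = 450.0   # GBP million drawn on the swap line, Friday 31 May 1968
pct_increase = 1.71    # reserves rose by 171% on the reporting day

# If borrowing X raised reserves by 171%, the pre-swap base was X / 1.71
pre_swap = swap_drawing / pct_increase
reported = pre_swap + swap_drawing   # level shown to the market on reporting day

print(f"implied pre-swap reserves: ~GBP {pre_swap:.0f}m")
print(f"reported reserves:         ~GBP {reported:.0f}m")
```

On these numbers the published figure was nearly three times the true underlying reserve position, which is what made the overnight reversal worthwhile.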

 

Table 1. Daily entry in the EEA ledger showing how window dressing worked


 

The Bank of England’s window dressing was done in collaboration with the Fed. Both discussed reserve figures before the Bank published them. During most of the 1960s, the Bank and the Fed were in contact daily about exchange rate matters. Few records of these phone conversations survive at the Bank, but the Fed kept daily records (Archives of the Fed in New York, references 617031 and 617015).

During the 1960s, collaboration between the two central banks intensified. The Bank consulted the Fed on the exact wording of the reserve publication (Naef, 2019), and the Fed coordinated with the Bank on the swap position to ensure consistency between their public statements. Indeed, the Fed sent excerpts of minutes to the Bank to allow excision of anything mentioning window dressing (Archives of the Fed in New York, reference 107320). Thus, in December 1971, before publishing the minutes of the Federal Open Market Committee (FOMC) for 1966, Charles Coombs (a leading figure at the Fed) consulted Richard Hallet (Chief Cashier at the Bank):

‘You will recall that when you visited us in December 1969, we invited you to look over selected excerpts from the 1966 FOMC minutes involving certain delicate points that we thought you might wish to have deleted from the published version. We have subsequently deleted all of the passages which you found troublesome. Recently, we have made a final review of the minutes and have turned up one other passage that I am not certain you had an opportunity to go over. I am enclosing a copy of the excerpt, with possible deletions bracketed in red ink.’

Source: Letter from Coombs to Hallet, New York Federal Reserve Bank archives, 1 December 1971, Box 107320.

 

Coombs suggested deleting passages where some FOMC members criticised window dressing, while other members suggested the Bank would get better results ‘if they reported their reserve position accurately than if they attempted to conceal their true reserve position’ (https://fraser.stlouisfed.org/scribd/?item_id=22913&filepath=/docs/historical/FOMC/meetingdocuments/19660628Minutesv.pdf). However, MacLaury of the FOMC stressed that there was a risk of ‘setting off a cycle of speculation against sterling’ if the Bank published a loss of $200 million, which was ‘large for a single month’ in comparison with the previous month’s published figure.

The history of the Bank’s window dressing is a reminder of the difficulties central banks face in managing reserves; today, investors similarly scrutinise the reserves of the People’s Bank of China.

 

 

To contact the author: alain.naef@berkeley.edu

 

References:

Capie, Forrest. 2010. The Bank of England: 1950s to 1979. Cambridge: Cambridge University Press.

Naef, Alain. 2019. “Dirty Float or Clean Intervention?  The Bank of England in the Foreign Exchange Market.” Lund Papers in Economic History. General Issues, no. 2019:199. http://lup.lub.lu.se/record/dfe46e60-6dfb-4380-8354-e7b699ed8ef9.

Schenk, Catherine. 2010. The Decline of Sterling: Managing the Retreat of an International Currency, 1945–1992. Cambridge University Press.

All quiet before the take-off? Pre-industrial regional inequality in Sweden (1571-1850)

by Anna Missiaia and Kerstin Enflo (Lund University)

This research is due to be published in the Economic History Review and is currently available on Early View.

 

Södra Bancohuset (The Southern National Bank Building), Stockholm. Available here at Wikimedia Commons.

For a long time, scholars have thought about regional inequality merely as a by-product of modern economic growth: following a Kuznets-style interpretation, the front-running regions increase their income levels and regional inequality during industrialization; and it is only when the other regions catch up that overall regional inequality decreases and completes the inverted-U shaped pattern. But early empirical research on this theme was largely focused on the 20th century, ignoring the industrial take-off of many countries (Williamson, 1965). More recent empirical studies have pushed the temporal boundary back to the mid-19th century, finding that inequality in regional GDP was already high at the outset of modern industrialization (see for instance Rosés et al., 2010 on Spain and Felice, 2018 on Italy).

The main constraint on taking the estimates well into the pre-industrial period is the availability of suitable regional sources. The exceptional quality of Swedish sources allowed us, for the first time, to estimate a dataset of regional GDP for a European economy going back to the 16th century (Enflo and Missiaia, 2018). The estimates used here for 1571 are largely based on a one-off tax proportional to yearly production: the Swedish Crown imposed this tax on all Swedish citizens in order to pay a ransom for the strategic Älvsborg castle, which had just been conquered by Denmark. For the period 1750-1850, the estimates rely on standard population censuses. By connecting the new series to the existing ones from 1860 onwards by Enflo et al. (2014), we obtain the longest regional GDP series for any country.

We find that inequality increased dramatically between 1571 and 1750 and remained high until the mid-19th century. Thereafter, it declined during the modern industrialization of the country (Figure 1). Our results challenge the traditional view that regional divergence can only originate during an industrial take-off.

 

Figure 1. Coefficient of variation of GDP per capita across Swedish counties, 1571-2010.

Sources: 1571-1850: Enflo and Missiaia, ‘Regional GDP estimates for Sweden, 1571-1850’; 1860-2010: Enflo et al., ‘Swedish regional GDP 1855-2000’; and Rosés and Wolf, ‘The Economic Development of Europe’s Regions’.
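The dispersion measure plotted in Figure 1, the coefficient of variation, is simply the standard deviation of county GDP per capita divided by its mean. A minimal sketch with invented county figures (not the paper’s data):

```python
import statistics

def coefficient_of_variation(values):
    """Population standard deviation divided by the mean."""
    return statistics.pstdev(values) / statistics.fmean(values)

# Hypothetical county-level GDP per capita, as index numbers (invented)
counties = [2, 4, 4, 4, 5, 5, 7, 9]
print(f"CV = {coefficient_of_variation(counties):.2f}")  # prints CV = 0.40
```

Because the CV is scale-free, it can be compared across benchmark years even though the 1571 tax-based estimates and the census-based estimates are built from very different sources.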

 

Figure 2 shows the relative disparities in four benchmark years. While the country appeared relatively equal in 1571, between 1750 and 1850 both the mining districts of central and northern Sweden and the port cities of Stockholm and Gothenburg pulled ahead.

 

Figure 2. The relative evolution of GDP per capita, 1571-1850 (Sweden=100).

Sources: 1571-1850: Enflo and Missiaia, ‘Regional GDP estimates for Sweden, 1571-1850’; 2010: Rosés and Wolf, ‘The Economic Development of Europe’s Regions’.

The second part of the paper is devoted to the study of the drivers of pre-industrial regional inequality. Decomposing the Theil index for GDP per worker, we show that regional inequality was driven by structural change, meaning that regions diverged because they specialized in different sectors. A handful of regions specialized in either early manufacturing or mining, both with much higher productivity per worker compared to agriculture.
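The decomposition rests on the standard additive property of the Theil T index: inequality across region-sector cells splits exactly into a between-sector term (structural change) and a within-sector term. A sketch with made-up productivity and employment figures, not the paper’s data:

```python
import math

def theil_T(y, w):
    """Theil T index over cells with income per worker y and employment w."""
    Y = sum(yi * wi for yi, wi in zip(y, w))
    mean = Y / sum(w)
    return sum((yi * wi / Y) * math.log(yi / mean) for yi, wi in zip(y, w))

# Two regions x two sectors; productivity per worker and employment (invented)
agriculture = {"y": [1.0, 1.2], "w": [50, 50]}
industry    = {"y": [3.0, 3.6], "w": [10, 10]}
sectors = (agriculture, industry)

y_all = [yi for s in sectors for yi in s["y"]]
w_all = [wi for s in sectors for wi in s["w"]]
Y_tot = sum(yi * wi for yi, wi in zip(y_all, w_all))
overall = theil_T(y_all, w_all)

# Between-sector term: each sector collapsed to its mean productivity
means = [sum(yi * wi for yi, wi in zip(s["y"], s["w"])) / sum(s["w"]) for s in sectors]
emp = [sum(s["w"]) for s in sectors]
between = theil_T(means, emp)

# Within-sector term: income-share-weighted within-sector Theil indices
within = sum(
    (sum(yi * wi for yi, wi in zip(s["y"], s["w"])) / Y_tot) * theil_T(s["y"], s["w"])
    for s in sectors
)

print(f"overall = {overall:.4f} = between {between:.4f} + within {within:.4f}")
```

When the between term dominates the within term, as it does in these invented numbers, inequality is driven by sectoral specialization rather than by gaps inside each sector, which is the paper’s finding for pre-industrial Sweden.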

To explain this different trajectory, we use a theoretical framework introduced by Strulik and Weisdorf (2008) in the context of the British Industrial Revolution: in regions with a higher share of GDP in agriculture, technological advancements lead to productivity improvements but also to a proportional increase in population, impeding the growth in GDP per capita as in a classic Malthusian framework. Regions with a higher share of GDP in industry, on the other hand, experienced limited population growth due to the increasing relative price of children, leading to a higher level of GDP per capita. Regional inequality in this framework arises from a different role of the Malthusian mechanism in the two sectors.

Our work speaks to a growing literature on the origin of regional divergence and represents the first effort to perform this type of analysis before the 19th century.

 

To contact the authors:

anna.missiaia@ekh.lu.se

kerstin.enflo@ekh.lu.se

 

References

Enflo, K. and Missiaia, A., ‘Regional GDP estimates for Sweden, 1571-1850’, Historical Methods, 51(2018), 115-137.

Enflo, K., Henning, M. and Schön, L., ‘Swedish regional GDP 1855-2000 Estimations and general trends in the Swedish regional system’, Research in Economic History, 30(2014), pp. 47-89.

Felice, E., ‘The roots of a dual equilibrium: GDP, productivity, and structural change in the Italian regions in the long run (1871-2011)’, European Review of Economic History, (2018), forthcoming.

Rosés, J., Martínez-Galarraga, J. and Tirado, D., ‘The upswing of regional income inequality in Spain (1860–1930)’,  Explorations in Economic History, 47(2010), pp. 244-257.

Strulik, H. and Weisdorf, J., ‘Population, food, and knowledge: a simple unified growth theory’, Journal of Economic Growth, 13 (2008), pp. 195-216.

Williamson, J., ‘Regional Inequality and the Process of National Development: A Description of the Patterns’, Economic Development and Cultural Change 13(1965), pp. 1-84.

 

A Policy of Credit Disruption: The Punjab Land Alienation Act of 1900

by Latika Chaudhary (Naval Postgraduate School) and Anand V. Swamy (Williams College)

This research is due to be published in the Economic History Review and is currently available on Early View.

 

Agriculture_in_Punjab_India
Farming, farms, crops, agriculture in North India. Available at Wikimedia Commons.

In the late 19th century the British-Indian government (the Raj) became preoccupied with default on debt and the consequent transfer of land in rural India. In many regions Raj officials made the following argument: British rule had created or clarified individual property rights in land, which had for the first time made land available as collateral for debt. Consequently, peasants could now borrow up to the full value of their land. The Raj had also replaced informal village-based forms of dispute resolution with a formal legal system operating outside the village, which favored the lender over the borrower. Peasants were spendthrift and naïve, and unable to negotiate the new formal courts created by British rule, whereas lenders were predatory and legally savvy. Borrowers were frequently defaulting, and land was rapidly passing from long-standing resident peasants to professional moneylenders who were often either immigrant, of another religion, or sometimes both.  This would lead to social unrest and threaten British rule. To preserve British rule it was essential that one of the links in the chain be broken, even if this meant abandoning cherished notions of sanctity of property and contract.

The Punjab Land Alienation Act (PLAA) of 1900 was the most ambitious policy intervention motivated by this thinking. It sought to prevent professional moneylenders from acquiring the property of traditional landowners. To this end it banned, except under some conditions, the permanent transfer of land from an owner belonging to an ‘agricultural tribe’ to a buyer or creditor who was not from this tribe. Moreover, a lender who was not from an agricultural tribe could no longer seize the land of a defaulting debtor who was from an agricultural tribe.

The PLAA made direct restrictions on the transfer of land a respectable part of the policy apparatus of the Raj, and its influence persists to the present day. There is a substantial literature on the emergence of the PLAA, yet there is no econometric work on two basic questions regarding its impact. First, did the PLAA reduce the availability of mortgage-backed credit? Or were borrowers and lenders able to use various devices to evade the Act, thereby neutralizing it? Second, if less credit was available, what were the effects on agricultural outcomes and productivity? We use panel data methods to address these questions, for the first time so far as we know.

Our work provides evidence regarding an unusual policy experiment that is relevant to a hypothesis of broad interest. It is often argued that ‘clean titling’ of assets can facilitate their use as collateral, increasing access to credit, leading to more investment and faster growth. Informed by this hypothesis, many studies estimate the effects of titling on credit and other outcomes, but they usually pertain to making assets more usable as collateral. The PLAA went in the opposite direction: it reduced the ‘collateralizability’ of land, which, on the argument we have described, should have reduced investment and growth. We investigate whether it did.

To identify the effects of the PLAA, we assembled a panel dataset on 25 districts in Punjab from 1890 to 1910. Our dataset contains information on mortgages and sales of land; economic outcomes, such as acreage and ownership of cattle; and control variables like rainfall and population. Because the PLAA targeted professional moneylenders, it should have reduced mortgage-backed credit more in places where they were bigger players in the credit market. Hence, we interact a measure of the importance of professional (that is, non-agricultural) moneylenders in the mortgage market with an indicator variable for the introduction of the PLAA, which takes the value of 1 from 1900 onward. As expected, we find that the PLAA contracted credit more in places where professional moneylenders played a larger role, compared to districts with no professional moneylenders. The PLAA reduced mortgage-backed credit by 48 percentage points more at the 25th percentile of our measure of moneylender importance and by 61 percentage points more at the 75th percentile.
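The design just described amounts to a difference-in-differences specification with a continuous treatment intensity. In our own notation (not the paper’s), it can be written as:

```latex
\mathit{Mortgage}_{dt} = \beta\,\bigl(\mathit{MLShare}_{d} \times \mathbf{1}[t \ge 1900]\bigr)
  + \alpha_{d} + \gamma_{t} + \delta_{d}\,t + \varepsilon_{dt}
```

where $\mathit{MLShare}_{d}$ is the pre-Act importance of professional moneylenders in district $d$’s mortgage market, $\alpha_{d}$ and $\gamma_{t}$ are district and year effects, $\delta_{d}\,t$ are district-specific trends, and $\beta < 0$ indicates a larger post-1900 credit contraction where professional moneylenders mattered more.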

However, this decrease of mortgage-backed credit in professional moneylender-dominated areas did not lead to lower acreage or less ownership of cattle. In short, the PLAA affected credit markets as we might expect, without undermining agricultural productivity. Because we have panel data, we are able to account for potential confounding factors such as time-invariant unobserved differences across districts (using district fixed effects), shocks common to all districts in a given year (using year effects), and the possibility that districts were trending differently independent of the PLAA (using district-specific time trends).

British officials provided a plausible explanation for the non-impact of PLAA on agricultural production: lenders had merely become more judicious – they were still willing to lend for productive activity, but not for ‘extravagant’ expenditures, such as social ceremonies.  There may be a general lesson here:  policies that make it harder for lenders to recover money may have the beneficial effect of encouraging due diligence.

 

 

To contact the authors:

lhartman@nps.edu

aswamy@williams.edu

Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69

by Jean-Pascal Bassino (ENS Lyon) and Pierre van der Eng (Australian National University)

This blog is part of a larger research paper published in the Economic History Review.

 

Bassino1
Vietnam, rice paddy. Available at Pixabay.

In the ‘great divergence’ debate, China, India, and Japan have been used to represent the Asian continent. However, their development experience is not likely to be representative of the whole of Asia. The countries of Southeast Asia were relatively underpopulated for a considerable period, and their very different endowments of natural resources (particularly land) and labour were key parameters that determined economic development options.

Maddison’s series of per-capita GDP in purchasing power parity (PPP) adjusted international dollars, based on a single 1990 benchmark and backward extrapolation, indicate that a divergence took place in 19th century Asia: Japan was well above other Asian countries in 1913. In 2018 the Maddison Project Database released a new international series of GDP per capita that accommodate the available historical PPP-based converters. Due to the very limited availability of historical PPP-based converters for Asian countries, the 2018 database retains many of the shortcomings of the single-year extrapolation.

Maddison’s estimates indicate that Japan’s GDP per capita in 1913 was much higher than in other Asian countries, and that Asian countries started their development experiences from broadly comparable levels of GDP per capita in the early nineteenth century. This implies that an Asian divergence took place in the 19th century as a consequence of Japan’s economic transformation during the Meiji era (1868-1912). There is now growing recognition that the use of a single benchmark year, and the choice of that year, may influence estimated historical levels of GDP per capita across countries. Relative levels for Asian countries based on Maddison’s estimates of per capita GDP are not confirmed by other indicators, such as real unskilled wages or the average height of adults.

Our study uses available estimates of GDP per capita in current prices from historical national accounting projects, and estimates PPP-based converters and PPP-adjusted GDP with multiple benchmark years (1913, 1922, 1938, 1952, 1958, and 1969) for India, Indonesia, Korea, Malaya, Myanmar (then Burma), the Philippines, Sri Lanka (then Ceylon), Taiwan, Thailand and Vietnam, relative to Japan. China is added on the basis of other studies. PPP-based converters are used to calculate GDP per capita in constant PPP yen. The indices of GDP per capita in Japan and the other countries were expressed as a proportion of Japan’s GDP per capita during the years 1910–70 in 1934–6 yen, and then converted to 1990 international dollars by relying on PPP-adjusted Japanese series comparable to US GDP series. Figure 1 presents the resulting series for the Asian countries.
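The conversion chain described above (national currency to 1934–6 yen via a PPP converter, then an index relative to Japan) can be sketched with invented numbers; none of these values come from the paper:

```python
# Illustrative conversion chain; all numbers are invented.
gdp_pc_local = 120.0       # GDP per capita, national currency, current prices
ppp_yen_per_unit = 0.85    # PPP converter: 1934-6 yen per unit of local currency
japan_gdp_pc_yen = 180.0   # Japan's GDP per capita in the same year, 1934-6 yen

gdp_pc_yen = gdp_pc_local * ppp_yen_per_unit          # comparable yen level
index_vs_japan = 100 * gdp_pc_yen / japan_gdp_pc_yen  # index, Japan = 100

print(f"GDP per capita: {gdp_pc_yen:.0f} yen; index vs Japan: {index_vs_japan:.0f}")
```

Anchoring every country to the same yen benchmark is what lets the resulting indices be compared across countries and, via the Japanese series, converted into 1990 international dollars.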

 

Figure 1. GDP per capita in selected Asian countries, 1910–1970 (1934–6 Japanese yen)

Bassino2
Sources: see original article.

 

The conventional view dates the start of the divergence to the nineteenth century. Our study identifies the First World War and the 1920s as the era during which the little divergence in Asia occurred. During the 1920s, most countries in Asia, except Japan, depended significantly on exports of primary commodities. The growth experience of Southeast Asia seems to have been largely characterised by market integration of national economies and by the mobilisation of hitherto underutilised resources (labour and land) for export production. Particularly in the land-abundant parts of Asia, the opening-up of land for agricultural production led to economic growth.

Commodity price changes may have become debilitating when their volatility increased after 1913. This was followed by episodes of import-substituting industrialisation, particularly after 1945. While Japan rapidly developed its export-oriented manufacturing industries from the First World War onwards, other Asian countries turned increasingly inward. This pattern lasted until the 1970s, when some Asian countries followed Japan on a path of export-oriented industrialisation and economic growth. For some countries this was a staggered process that lasted well into the 1990s, when the World Bank labelled this development the ‘East Asian miracle’.

 

To contact the authors:

jean-pascal.bassino@ens-lyon.fr

pierre.vandereng@anu.edu.au

 

References

Bassino, J-P. and Van der Eng, P., ‘Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69’, Economic History Review (forthcoming).

Fouquet, R. and Broadberry, S., ‘Seven centuries of European economic growth and decline’, Journal of Economic Perspectives, 29 (2015), pp. 227–44.

Fukao, K., Ma, D., and Yuan, T., ‘Real GDP in pre-war Asia: a 1934–36 benchmark purchasing power parity comparison with the US’, Review of Income and Wealth, 53 (2007), pp. 503–37.

Inklaar, R., de Jong, H., Bolt, J., and van Zanden, J. L., ‘Rebasing “Maddison”: new income comparisons and the shape of long-run economic development’, Groningen Growth and Development Centre Research Memorandum no. 174 (2018).

Link to the website of the Southeast Asian Development in the Long Term (SEA-DELT) project:  https://seadelt.net

Factor Endowments on the “Frontier”: Algerian Settler Agriculture at the Beginning of the 1900s

by Laura Maravall Buckwalter (University of Tübingen)

This research is due to be published in the Economic History Review and is currently available on Early View.

 

It is often claimed that access to land and labour during the colonial years determined land redistribution policies and labour regimes that had persistent, long-run effects.  For this reason, the amount of land and labour available in a colonized country at a fixed point in time are being included more frequently in regression frameworks as proxies for the types of colonial modes of production and institutions. However, despite the relevance of these variables within the scholarly literature on settlement economies, little is known about the way in which they changed during the process of settlement. This is because most studies focus on long-term effects and tend to exclude relevant inter-country heterogeneities that should be included in the assessment of the impact of colonization on economic development.

In my article, I show how colonial land policy and settler modes of production responded differently within a colony. I examine rural settlement in French Algeria at the start of the 1900s and focus on cereal cultivation, the crop that allowed the arable frontier to expand. I rely upon the literature that reintroduces the notion of ‘land frontier expansion’ into the understanding of settler economies. By including the frontier in my analysis, it is possible to assess how colonial land policy and settler farming adapted to very different local conditions. For example, as settlers moved into the interior regions they encountered growing land aridity. I argue that the expansion of rural settlement into the frontier was strongly dependent upon the adoption of modern ploughs, intensive labour (modern ploughs were not labour saving) and larger cultivated fields (because they removed fallow areas), which, in turn, had a direct impact on colonial land policy and settler farming.

Figure 1. Threshing wheat in French Algeria (Zibans)

Buckwalter 1
Source: Retrieved from https://www.flickr.com/photos/internetarchivebookimages/14764127875/in/photostream/, last accessed 31st of May, 2019.

 

My research takes advantage of annual agricultural statistics reported by the French administration at the municipal level in Constantine for the years 1904/05 and 1913/14. The data are analysed in a cross-section and panel regression framework and, although the dataset provides a snapshot at only two points in time, the ability to identify the timing of settlement after the 1840s for each municipality provides a broader temporal framework.

Figure 2. Constantine at the beginning of the 1900s

Buckwalter 2
Source: Original outline of the map derives mainly from Carte de la Colonisation Officielle, Algérie (1902), available online at the digital library of the Bibliothèque Nationale de France, retrieved from http://catalogue.bnf.fr/ark:/12148/cb40710721s (accessed on 28 Apr. 2019) and ANOM-iREL, http://anom.archivesnationales.culture.gouv.fr/ (accessed on 28 Apr. 2019).

 

The results illustrate how the limited amount of arable land on the Algerian frontier forced colonial policymakers to relax restrictions on the amount of land owned by settlers. This change in policy occurred because expanding the frontier into less fertile regions and consolidating settlement required agricultural intensification: changes in the frequency of crop rotation and more intensive ploughing. These techniques required larger fields and were therefore incompatible with the French colonial ideal of establishing a small-scale, family-farm type of settler economy.

My results also indicate that settler farmers were able to adopt more intensive techniques mainly by relying on the abundant indigenous labour force. The man-to-cultivable land ratio, which increased after the 1870s due to continuous indigenous population growth and colonial land expropriation measures, eased settler cultivation, particularly on the frontier. This confirms that the availability of labour relative to land is an important variable that should be taken into consideration to assess the impact of settlement on economic development. My findings are in accord with Lloyd and Metzer (2013, p. 20), who argue that, in Africa, where the indigenous peasantry was significant, the labour surplus allowed low wages and ‘verged on servility’, leading to a ‘segmented labour and agricultural production system’. Moreover, it is precisely the presence of a large indigenous population relative to that of the settlers, and the reliance of settlers upon the indigenous labour and the state (to access land and labour), that has allowed Lloyd and Metzer to describe Algeria (together with Southern Rhodesia, Kenya and South Africa) as having a “somewhat different type of settler colonialism that emerged in Africa over the 19th and early 20th Centuries” (2013, p.2).

In conclusion, it is reasonable to assume that, as rural settlement gains ground within a colony, local endowments and cultivation requirements change. The case of rural settlement in Constantine reveals how settler farmers and colonial restrictions on ownership size adapted to the varying amounts of land and labour.

 

To contact: 

laura.maravall@uni-tuebingen.de

Twitter: @lmaravall

 

References

Ageron, C. R. (1991). Modern Algeria: a history from 1830 to the present (9th ed). Africa World Press.

Frankema, E. (2010). The colonial roots of land inequality: geography, factor endowments, or institutions? The Economic History Review, 63(2):418–451.

Frankema, E., Green, E., and Hillbom, E. (2016). Endogenous processes of colonial settlement. the success and failure of European settler farming in Sub-Saharan Africa. Revista de Historia Económica-Journal of Iberian and Latin American Economic History, 34(2), 237-265.

Easterly, W., & Levine, R. (2003). Tropics, germs, and crops: how endowments influence economic development. Journal of monetary economics, 50(1), 3-39.

Engerman, S. L., and Sokoloff, K. L. (2012). Economic development in the Americas since 1500: endowments and institutions. Cambridge University Press.

Lloyd, C. and Metzer, J. (2013). Settler colonization and societies in world history: patterns and concepts. In Settler Economies in World History, Global Economic History Series 9:1.

Lützelschwab, C. (2007). Populations and Economies of European Settlement Colonies in Africa (South Africa, Algeria, Kenya, and Southern Rhodesia). In Annales de démographie historique (No. 1, pp. 33-58). Belin.

Lützelschwab, C. (2013). Settler colonialism in Africa. In Lloyd, C., Metzer, J., and Sutch, R. (eds.), Settler economies in world history. Brill.

Willebald, H., and Juambeltz, J. (2018). Land Frontier Expansion in Settler Economies, 1830–1950: Was It a Ricardian Process? In Agricultural Development in the World Periphery (pp. 439-466). Palgrave Macmillan, Cham.

The spread of Hindu-Arabic numerals in the tradition of European practical mathematics

by Raffaele Danna (University of Cambridge)

 

Arabic_numerals-en.svg
Comparison between five different styles of writing Arabic numerals. Available at Wikimedia Commons.

0, 1, 2, 3, 4, 5, 6, 7, 8, 9

The ten digits we use to represent numbers are everywhere in our modern world. But they reached widespread diffusion in the ‘west’ only at a relatively late stage. The positional numeral system was central to the development of the scientific revolution but, contrary to what one might expect, its spread in Europe was driven not just by scientists but also by practitioners.

How did these numbers reach the almost universal diffusion we see today? What were the causes and broad consequences of their introduction?

As a matter of fact, for a very long time the ‘west’ did not know the numbers we now use every day. People had to rely on Roman numerals and the corresponding reckoning tools (such as counting boards).

Arabic numbers, or more precisely Hindu-Arabic numbers, were invented sometime in fifth-century India. From India they spread westwards with the expansion of Islam, reaching the Mediterranean around the eighth century.

Europe picked up these numbers from the Arabic civilisation, which is why we call them ‘Arabic’. But it took a long time before Europeans widely adopted Arabic numbers in their practice. This was due to difficult relationships with Islam, but also to the low levels of literacy and numeracy in Europe at the time, together with a more general cultural backwardness in comparison with the Arabic civilisation.

Starting from the eleventh century, Europe experienced an economic renaissance that reached its peak in the thirteenth century. With the development of international trade, several key financial and organisational innovations were introduced. This is the moment when the first international companies appear, together with the earliest examples of banking and international finance.

This new economic complexity raised the need for a higher level of computing power, especially to solve calculations of interest and exchange rates. It is at this stage that merchant-bankers, who were already literate and numerate, realised that Hindu-Arabic numerals suited their needs better than Roman ones. Arithmetic with Hindu-Arabic numerals became part of the required training for merchant-bankers.

By the late thirteenth century, we see the first examples of practical arithmetic texts published in central Italy, the cradle of early finance and banking. From here, the publication of these manuals slowly spread to the rest of Europe, with a dramatic acceleration in the sixteenth century driven by the introduction of the printing press.

A detailed reconstruction of these traditions, comprising more than 1,280 manuals, makes it possible to study the main characteristics of this spread. It was a movement from the south to the north of Europe, with late adopters – such as the north of Germany and England – taking up such texts only in the second half of the sixteenth century.

The spread of these texts allows us to reconstruct a slow process of transmission of practical mathematics throughout Europe. The use of such knowledge transformed economic practices, together with several other fields, such as visual arts, architecture, shipbuilding, surveying and engineering.

During the seventeenth century, this practical mathematics combined with the academic understanding of astronomy, reaching a new synthesis in the scientific revolution. Following the story of the adoption of Hindu-Arabic numerals allows us to appreciate that the scientific revolution was also indebted to more than three centuries of mathematical experimentation carried out by European practitioners.

Global trade imbalances in the classical and post-classical world

by Jamus Jerome Lim (ESSEC Business School and Center for Analytical Finance)

 

Global_trade_visualization_map,_2014
A Global trade visualization map, with data is derived from Trade Map database of International Trade Center. Available on Wikipedia.

In 2017, the bilateral trade deficit between China and the United States amounted to $375 billion, a staggering amount just shy of what the latter incurred against the rest of the world combined. And not only is this deficit large, it has been remarkably persistent: the chronic imbalance emerged in earnest in 1989, and has persisted for the better part of three decades. Some have even pointed to such imbalances as a contributing factor to the global financial crisis of 2008.

While such massive, chronic imbalances may strike one as artefacts of a modern, hyperglobalised world economy, nothing could be further from the truth. For example, recent economic history records large, persistent imbalances between the United States and Britain during the former’s earlier stages of development. Such imbalances also characterised the rise of Japan following the Second World War.

In recent research, we show that external imbalances between two major economic powers – an established leader, and a rising follower – were also observed over three earlier periods in economic history. These were the deficits borne by the Roman empire vis-à-vis pre-Gupta India circa 1 CE; the borrowing by the Abbasid caliphate from Carolingian Frankia in the early ninth century; and the imbalances between West European kingdoms and the Byzantine empire that emerged around the 1300s.

Although data paucity implies that definitive claims on current account deficits are all but impossible, it is possible to rely on indirect sources of evidence to infer the likely presence of imbalances. One such source consists of trade-related documents from the time as well as pottery finds, which ascertain not just the existence but also the size of exchange relationships.

For example, using such records, we demonstrate that Baghdad – the capital of the Abbasid Caliphate – received furs and slaves from the comparative economic backwater that was the Carolingian empire, in exchange for goods such as spices, dates and olive oil. This imbalance may have lasted as long as several centuries.

A second source of evidence comes from numismatic records, especially coin hoards. Hoards of Roman gold aurei and silver denarii have been discovered, for example, in India, with coinage dating from as early as the reign of Augustus until at least that of Marcus Aurelius, a span of well over a century. Rome relied on such specie exports to fund, among other expenditures, continued military adventurism during the second century.

Our final source of evidence relies on fiscal records. Given the close relationship between external and fiscal balances – all else equal, greater government borrowing gives rise to a larger external deficit – chronic budgetary shortfalls generally give rise to rising imbalances.
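The link between budgetary and external balances invoked here is the standard national-accounts identity. A minimal sketch in textbook notation (not drawn from the original article):

```latex
% National income identity: Y = C + I + G + (X - M).
% Rearranging, the external balance equals the saving-investment gap:
\[
  X - M \;=\; (S_p - I) \;+\; (T - G)
\]
% where S_p is private saving, I investment, and T - G the government
% balance. Holding S_p - I fixed, a larger budget deficit (T - G < 0)
% implies a larger external deficit (X - M < 0).
```

This is why chronic fiscal shortfalls in the historical record can serve as indirect evidence of chronic external deficits.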

This was very much the case in Byzantium prior to its decline: around the turn of the previous millennium, the Empire’s savings and reserves were in significant surplus, lending credence to the notion that the flow of products went from East to West. The recipients of such goods? The kingdoms of Western Europe, which paid for them with silver.

Squeezing blood from a stone: eighteenth century debtors’ prisons worked

by Alex Wakelam (University of Cambridge)

 

Woodstreet Compter.jpg
Wood Street Compter, 1793. Image extracted from page 384 of volume 1 of Old and New London, Illustrated, by Walter Thornbury. Available at Wikimedia Commons. 

While it is often assumed that debtors’ prisons were illogical and ineffective, my research demonstrates that they were extremely economically effective for creditors though they could ruin the lives of debtors.

The debtors’ prison is a frequent historical bogeyman, a Dickensian symptom of the illogical cruelty of the past that disappeared with enlightened capitalism. Because imprisoning someone who could not afford to pay their debts, keeping them away from work and family, seems futile, it is assumed that creditors did so to satisfy petty revenge.

But debtors’ prisons were a feature of most of English history from 1283, and though their power was curbed in 1869, debtors were still being imprisoned in the 1920s. The reason they persisted, as my research shows, is that, for creditors, they worked well.

The majority of imprisoned debtors in the eighteenth century were released relatively quickly having paid their creditors. This revelation is timely when events in America demonstrate how easily these prisons can return.

As today, most eighteenth-century purchases were made on credit, owing to delays in the payment of wages, the limited supply of coinage, and cultural preferences for buying goods on credit. But credit was based on a range of factors, including personal reputation, social rank and moral status. Informal oral contracts could frequently be made with little sense of an individual’s actual financial status, particularly if they were a gentleman or aristocrat. As contracts were not based on goods and court processes were slow, it was difficult to seize property to recover debts when creditors required money.

Creditors were able to imprison debtors without trial in this period until they paid what they owed or died. The registers of a London debtors’ prison, the Wood Street Compter (1741-1815), reveal that creditors had good reasons to do so. Most of the 10,156 debtors contained in the registers left prison relatively quickly: 91% were released in under a year, while almost a third were released in less than 100 days.

In addition, 84% were ‘discharged’ by their creditors, indicating that either the prisoner had paid their debts or a new contract had been agreed. Imprisonment forced debtors to find a way to pay or at least to renegotiate with creditors.

Prisoners were not the poor, but usually middle-class people in small amounts of debt. One of the largest groups was made up of shopkeepers (about 20% of prisoners), though male and female prisoners came from across society, with gentlemen, cheesemongers, lawyers, wigmakers and professors rubbing shoulders.

Most used their time to coordinate the selling of goods to raise money, or borrowed yet more from family and friends. Many others called in their own debts by having their debtors imprisoned as well.

As prisons were relatively open, some debtors worked off their debts. John Grano, a trumpeter who worked for Handel, imprisoned in the 1720s, taught music lessons from his cell. Others sold liquor or food to fellow prisoners or continued as best they could at their trade in the prison yard. Those with a literary mind, such as Daniel Defoe, wrote their way out.

Though credit works on different terms today, it remains true that coercive imprisonment is effective at securing repayment. In recent years a number of US states have operated what amount to debtors’ prisons, where the poor, fined by the state usually for traffic violations, are held until they pay what they owe.

Attorney General Jeff Sessions even retracted an Obama-era memo in December aimed at abolishing the practice. While eighteenth-century prisons worked effectively for creditors, they could ruin the lives of debtors, who were forced to sell anything they could to pay their dues and escape the unsanitary hole in which they were being kept without trial. My research shows that it is false to assume that these prisons did not work and therefore cannot return.