Asia’s ‘little divergence’ in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69

by Jean-Pascal Bassino (ENS Lyon) and Pierre van der Eng (Australian National University)

This blog post is based on a research paper forthcoming in the Economic History Review.

 

Bassino1
Vietnam, rice paddy. Available at Pixabay.

In the ‘great divergence’ debate, China, India, and Japan have been used to represent the Asian continent. However, their development experience is unlikely to be representative of Asia as a whole. The countries of Southeast Asia were relatively underpopulated for a considerable period, and very different endowments of natural resources (particularly land) and labour were key parameters determining their economic development options.

Maddison’s series of per-capita GDP in purchasing power parity (PPP) adjusted international dollars, based on a single 1990 benchmark and backward extrapolation, indicate that a divergence took place in nineteenth-century Asia: Japan was well above other Asian countries in 1913. In 2018 the Maddison Project Database released a new international series of GDP per capita that accommodates the available historical PPP-based converters. Owing to the very limited availability of historical PPP-based converters for Asian countries, however, the 2018 database retains many of the shortcomings of single-year extrapolation.

Maddison’s estimates indicate that Japan’s GDP per capita in 1913 was much higher than in other Asian countries, and that Asian countries started their development experiences from broadly comparable levels of GDP per capita in the early nineteenth century. This implies that an Asian divergence took place in the nineteenth century as a consequence of Japan’s economic transformation during the Meiji era (1868–1912). There is now growing recognition that relying on a single benchmark year, and the choice of that particular year, may influence estimated historical levels of GDP per capita across countries. Moreover, the relative levels of Asian countries based on Maddison’s estimates of per capita GDP are not confirmed by other indicators, such as real unskilled wages or the average height of adults.

Our study uses available estimates of GDP per capita in current prices from historical national accounting projects, and estimates PPP-based converters and PPP-adjusted GDP for multiple benchmark years (1913, 1922, 1938, 1952, 1958, and 1969) for India, Indonesia, Korea, Malaya, Myanmar (then Burma), the Philippines, Sri Lanka (then Ceylon), Taiwan, Thailand and Vietnam, relative to Japan. China is added on the basis of other studies. The PPP-based converters are used to calculate GDP per capita in constant PPP yen. GDP per capita in each country was expressed as a proportion of GDP per capita in Japan during 1910–70 in 1934–6 yen, and then converted to 1990 international dollars using PPP-adjusted Japanese series that are comparable to US GDP series. Figure 1 presents the resulting series for the Asian countries.
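The conversion chain described above can be illustrated with a toy calculation. All figures below are invented for illustration and the function name is ours; only the sequence of steps follows the article.

```python
# Toy sketch of the conversion chain: local-currency GDP per capita
# -> constant PPP yen -> ratio to Japan -> 1990 international dollars.
# All numbers are hypothetical.

def gdp_pc_relative_to_japan(gdp_pc_local, ppp_yen_per_local_unit, gdp_pc_japan_yen):
    """Express a country's GDP per capita as a share of Japan's,
    after converting the local-currency figure into PPP-adjusted yen."""
    return (gdp_pc_local * ppp_yen_per_local_unit) / gdp_pc_japan_yen

# Step 1: convert to PPP yen and take the ratio to Japan.
ratio = gdp_pc_relative_to_japan(
    gdp_pc_local=50.0,           # hypothetical GDP per capita in local currency
    ppp_yen_per_local_unit=0.5,  # hypothetical PPP converter (yen per local unit)
    gdp_pc_japan_yen=100.0,      # hypothetical Japanese GDP per capita in 1934-6 yen
)

# Step 2: scale by Japan's level in 1990 international dollars
# (in the article this comes from a separate Japan-US comparison).
japan_in_1990_dollars = 1500.0   # hypothetical
country_in_1990_dollars = ratio * japan_in_1990_dollars

print(ratio)                     # 0.25
print(country_in_1990_dollars)   # 375.0
```

The point of the multi-benchmark design is that the converter in step 1 is re-estimated at each benchmark year rather than extrapolated from a single 1990 comparison.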

 

Figure 1. GDP per capita in selected Asian countries, 1910–1970 (1934–6 Japanese yen)

Bassino2
Sources: see original article.

 

The conventional view dates the start of the divergence to the nineteenth century. Our study instead identifies the First World War and the 1920s as the era during which the little divergence in Asia occurred. During the 1920s, most countries in Asia, except Japan, depended significantly on exports of primary commodities. The growth experience of Southeast Asia seems to have been largely characterised by market integration within national economies and by the mobilisation of hitherto underutilised resources (labour and land) for export production. Particularly in the land-abundant parts of Asia, the opening-up of land for agricultural production led to economic growth.

Commodity price changes may have become debilitating when their volatility increased after 1913. This was followed by episodes of import-substituting industrialisation, particularly after 1945. While Japan rapidly developed its export-oriented manufacturing industries from the First World War, other Asian countries had increasingly inward-looking economies. This pattern lasted until the 1970s, when some Asian countries followed Japan on a path of export-oriented industrialisation and economic growth. For some countries this was a staggered process that lasted well into the 1990s, when the World Bank labelled this development the ‘East Asian miracle’.

 

To contact the authors:

jean-pascal.bassino@ens-lyon.fr

pierre.vandereng@anu.edu.au

 

References

Bassino, J.-P. and van der Eng, P., ‘Asia’s “little divergence” in the twentieth century: evidence from PPP-based direct estimates of GDP per capita, 1913–69’, Economic History Review (forthcoming).

Fouquet, R. and Broadberry, S., ‘Seven centuries of European economic growth and decline’, Journal of Economic Perspectives, 29 (2015), pp. 227–44.

Fukao, K., Ma, D., and Yuan, T., ‘Real GDP in pre-war Asia: a 1934–36 benchmark purchasing power parity comparison with the US’, Review of Income and Wealth, 53 (2007), pp. 503–37.

Inklaar, R., de Jong, H., Bolt, J., and van Zanden, J. L., ‘Rebasing “Maddison”: new income comparisons and the shape of long-run economic development’, Groningen Growth and Development Centre Research Memorandum no. 174 (2018).

Link to the website of the Southeast Asian Development in the Long Term (SEA-DELT) project:  https://seadelt.net

Factor Endowments on the “Frontier”: Algerian Settler Agriculture at the Beginning of the 1900s

by Laura Maravall Buckwalter (University of Tübingen)

This research is due to be published in the Economic History Review and is currently available on Early View.

 

It is often claimed that access to land and labour during the colonial years determined land redistribution policies and labour regimes that had persistent, long-run effects. For this reason, the amounts of land and labour available in a colonized country at a fixed point in time are increasingly included in regression frameworks as proxies for the types of colonial modes of production and institutions. However, despite the relevance of these variables in the scholarly literature on settlement economies, little is known about how they changed during the process of settlement. This is because most studies focus on long-term effects and tend to exclude relevant inter-country heterogeneities that should be included when assessing the impact of colonization on economic development.

In my article, I show how colonial land policy and settler modes of production responded differently within a colony. I examine rural settlement in French Algeria at the start of the 1900s and focus on cereal cultivation, the crop that allowed the arable frontier to expand. I rely upon the literature that reintroduces the notion of ‘land frontier expansion’ into the understanding of settler economies. Including the frontier in the analysis makes it possible to assess how colonial land policy and settler farming adapted to very different local conditions. For example, settlers located in the interior regions encountered growing land aridity. I argue that the expansion of rural settlement into the frontier was strongly dependent upon the adoption of modern ploughs, intensive labour (modern ploughs were not labour saving) and larger cultivated fields (because they removed fallow areas), which, in turn, had a direct impact on colonial land policy and settler farming.

Figure 1. Threshing wheat in French Algeria (Zibans)

Buckwalter 1
Source: https://www.flickr.com/photos/internetarchivebookimages/14764127875/in/photostream/ (accessed 31 May 2019).

 

My research takes advantage of annual agricultural statistics reported by the French administration at the municipal level in Constantine for the years 1904/05 and 1913/14. The data are analysed in a cross-section and panel regression framework and, although the dataset provides a snapshot at only two points in time, the ability to identify the timing of settlement after the 1840s for each municipality provides a broader temporal framework.

Figure 2. Constantine at the beginning of the 1900s

Buckwalter 2
Source: The original outline of the map derives mainly from the Carte de la Colonisation Officielle, Algérie (1902), available online at the digital library of the Bibliothèque Nationale de France, http://catalogue.bnf.fr/ark:/12148/cb40710721s (accessed 28 Apr. 2019), and ANOM-iREL, http://anom.archivesnationales.culture.gouv.fr/ (accessed 28 Apr. 2019).

 

The results illustrate how the limited amount of arable land on the Algerian frontier forced colonial policymakers to relax restrictions on the amount of land owned by settlers. This change in policy occurred because expanding the frontier into less fertile regions and consolidating settlement required agricultural intensification: changes in the frequency of crop rotation and more intensive ploughing. These techniques required larger fields and were therefore incompatible with the French colonial ideal of establishing a small-scale, family-farm type of settler economy.

My results also indicate that settler farmers were able to adopt more intensive techniques mainly by relying on the abundant indigenous labour force. The man-to-cultivable-land ratio, which increased after the 1870s owing to continuous indigenous population growth and colonial land expropriation measures, eased settler cultivation, particularly on the frontier. This confirms that the availability of labour relative to land is an important variable to take into consideration when assessing the impact of settlement on economic development. My findings accord with Lloyd and Metzer (2013, p. 20), who argue that, in Africa, where the indigenous peasantry was significant, the labour surplus allowed low wages and ‘verged on servility’, leading to a ‘segmented labour and agricultural production system’. Moreover, it is precisely the presence of a large indigenous population relative to that of the settlers, and the reliance of settlers upon indigenous labour and the state (to access land and labour), that has allowed Lloyd and Metzer to describe Algeria (together with Southern Rhodesia, Kenya and South Africa) as exhibiting a ‘somewhat different type of settler colonialism that emerged in Africa over the 19th and early 20th Centuries’ (2013, p. 2).

In conclusion, it is reasonable to assume that, as rural settlement gains ground within a colony, local endowments and cultivation requirements change. The case of rural settlement in Constantine reveals how settler farmers and colonial restrictions on ownership size adapted to the varying amounts of land and labour.

 

To contact the author:

laura.maravall@uni-tuebingen.de

Twitter: @lmaravall

 

References

Ageron, C. R. (1991). Modern Algeria: a history from 1830 to the present (9th ed). Africa World Press.

Frankema, E. (2010). The colonial roots of land inequality: geography, factor endowments, or institutions? The Economic History Review, 63(2), 418–451.

Frankema, E., Green, E., and Hillbom, E. (2016). Endogenous processes of colonial settlement. the success and failure of European settler farming in Sub-Saharan Africa. Revista de Historia Económica-Journal of Iberian and Latin American Economic History, 34(2), 237-265.

Easterly, W., & Levine, R. (2003). Tropics, germs, and crops: how endowments influence economic development. Journal of monetary economics, 50(1), 3-39.

Engerman, S. L., and Sokoloff, K. L. (2012). Economic development in the Americas since 1500: endowments and institutions. Cambridge University Press.

Lloyd, C. and Metzer, J. (2013). Settler colonization and societies in world history: patterns and concepts. In Settler Economies in World History, Global Economic History Series 9:1.

Lützelschwab, C. (2007). Populations and Economies of European Settlement Colonies in Africa (South Africa, Algeria, Kenya, and Southern Rhodesia). In Annales de démographie historique (No. 1, pp. 33-58). Belin.

Lützelschwab, C. (2013). Settler colonialism in Africa. In Lloyd, C., Metzer, J., and Sutch, R. (eds.), Settler economies in world history. Brill.

Willebald, H., and Juambeltz, J. (2018). Land Frontier Expansion in Settler Economies, 1830–1950: Was It a Ricardian Process? In Agricultural Development in the World Periphery (pp. 439-466). Palgrave Macmillan, Cham.

The spread of Hindu-Arabic numerals in the tradition of European practical mathematics

by Raffaele Danna (University of Cambridge)

 

Arabic_numerals-en.svg
Comparison between five different styles of writing Arabic numerals. Available at Wikimedia Commons.

0, 1, 2, 3, 4, 5, 6, 7, 8, 9

The ten digits we use to represent numbers are everywhere in our modern world. But they achieved widespread diffusion in the ‘west’ only at a relatively late stage. The positional numeral system was central to the development of the scientific revolution, but, contrary to what one might expect, its spread in Europe was driven not just by scientists but also by practitioners.

How did these numbers reach the almost universal diffusion we see today? What were the causes and broad consequences of their introduction?

As a matter of fact, for a very long time the ‘west’ did not know the numbers we now use every day. People had to rely on Roman numerals and the corresponding reckoning tools (such as counting boards).

Arabic numerals, or more precisely Hindu-Arabic numerals, were invented in fifth-century India. From there they spread westwards with the expansion of Islam, reaching the Mediterranean around the eighth century.

Europe picked up these numerals from the Arabic civilisation, which is why we call them ‘Arabic’. But it took a long time before Europeans widely adopted Arabic numerals in their practice. This was due to difficult relationships with Islam, but also to the low levels of literacy and numeracy in Europe at the time, together with a more general cultural backwardness in comparison with the Arabic civilisation.

Starting from the eleventh century, Europe experienced an economic renaissance that reached its peak in the thirteenth century. With the development of international trade, several key financial and organisational innovations were introduced. This is the moment when the first international companies appear, together with the earliest examples of banking and international finance.

This new economic complexity raised the need for a higher level of computing power, especially to solve calculations of interest and exchange rates. It is at this stage that merchant-bankers, who were already literate and numerate, realised that Hindu-Arabic numerals suited their needs better than Roman ones. Arithmetic with Hindu-Arabic numerals became part of the required training for merchant-bankers.

By the late thirteenth century, the first practical arithmetic texts appeared in central Italy, the cradle of early finance and banking. From there, the production of these manuals slowly spread to the rest of Europe, with a dramatic acceleration in the sixteenth century driven by the introduction of the printing press.

A detailed reconstruction of these traditions, comprising more than 1,280 manuals, makes it possible to study the main characteristics of this spread. The movement ran from the south to the north of Europe, with late adopters, such as northern Germany and England, taking up such texts only in the second half of the sixteenth century.

The spread of these texts allows us to reconstruct a slow process of transmission of practical mathematics throughout Europe. The use of such knowledge transformed economic practices, together with several other fields, such as visual arts, architecture, shipbuilding, surveying and engineering.

During the seventeenth century, this practical mathematics combined with the academic understanding of astronomy, reaching a new synthesis in the scientific revolution. Following the story of the adoption of Hindu-Arabic numerals allows us to appreciate that the scientific revolution was also indebted to more than three centuries of mathematical experimentation carried out by European practitioners.

Global trade imbalances in the classical and post-classical world

by Jamus Jerome Lim (ESSEC Business School and Center for Analytical Finance)

 

Global_trade_visualization_map,_2014
A global trade visualization map, with data derived from the Trade Map database of the International Trade Centre. Available on Wikipedia.

In 2017, the bilateral trade deficit between China and the United States amounted to $375 billion, a staggering amount just shy of what the latter incurred against the rest of the world combined. And not only is this deficit large, it has been remarkably persistent: the chronic imbalance emerged in earnest in 1989, and has persisted for the better part of three decades. Some have even pointed to such imbalances as a contributing factor to the global financial crisis of 2008.

While such massive, chronic imbalances may strike one as artefacts of a modern, hyperglobalised world economy, nothing could be further from the truth. For example, recent economic history records large, persistent imbalances between the United States and Britain during the former’s earlier stages of development. Such imbalances also characterised the rise of Japan following the Second World War.

In recent research, we show that external imbalances between two major economic powers – an established leader and a rising follower – were also observed in three earlier periods of economic history. These were the deficits borne by the Roman empire vis-à-vis pre-Gupta India circa 1 CE; the borrowing by the Abbasid caliphate from Carolingian Frankia in the early ninth century; and the imbalances between West European kingdoms and the Byzantine empire that emerged around the 1300s.

Although data paucity implies that definitive claims on current account deficits are all but impossible, it is possible to rely on indirect sources of evidence to infer the likely presence of imbalances. One such source consists of trade-related documents from the time as well as pottery finds, which ascertain not just the existence but also the size of exchange relationships.

For example, using such records, we demonstrate that Baghdad – the capital of the Abbasid Caliphate – received furs and slaves from the comparative economic backwater that was the Carolingian empire, in exchange for goods such as spices, dates and olive oil. This imbalance may have lasted as long as several centuries.

A second source of evidence comes from numismatic records, especially coin hoards. Hoards of Roman gold aurei and silver denarii have been discovered, for example, in India, with coinage dating from as early as the reign of Augustus through until at least that of Marcus Aurelius, a span of well over a century and a half. Rome relied on such specie exports to fund, among other expenditures, continued military adventurism during the second century.

Our final source of evidence relies on fiscal records. Given the close relationship between external and fiscal balances – all else equal, greater government borrowing gives rise to a larger external deficit – chronic budgetary shortfalls generally give rise to rising imbalances.
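The accounting link invoked here is the standard national-accounts identity, worth spelling out explicitly (a textbook relation, not an estimate from the study; the numbers are purely illustrative):

```python
# The identity behind the argument: the current account equals private
# saving minus investment plus the fiscal balance,
#   CA = (S_private - I) + (T - G),
# so, with the other terms held fixed, a larger fiscal deficit (G > T)
# implies a larger external deficit.

def current_account(private_saving, investment, taxes, gov_spending):
    return (private_saving - investment) + (taxes - gov_spending)

# A private surplus of 10 exactly offset by a fiscal deficit of 10:
balanced = current_account(private_saving=100, investment=90,
                           taxes=50, gov_spending=60)

# Widen the fiscal deficit to 30, holding everything else fixed:
larger_deficit = current_account(private_saving=100, investment=90,
                                 taxes=50, gov_spending=80)

print(balanced)        # 0
print(larger_deficit)  # -20
```

This is why chronic budgetary shortfalls in the fiscal records can serve as indirect evidence of chronic external deficits.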

This was very much the case in Byzantium prior to its decline: around the turn of the previous millennium, the empire’s savings and reserves were in significant surplus, lending credence to the notion that the flow of products went from East to West. The recipients of such goods? The kingdoms of Western Europe, which paid in silver.

Squeezing blood from a stone: eighteenth-century debtors’ prisons worked

by Alex Wakelam (University of Cambridge)

 

Woodstreet Compter.jpg
Wood Street Compter, 1793. Image extracted from page 384 of volume 1 of Old and New London, Illustrated, by Walter Thornbury. Available at Wikimedia Commons. 

While it is often assumed that debtors’ prisons were illogical and ineffective, my research demonstrates that they were economically very effective for creditors, though they could ruin the lives of debtors.

The debtors’ prison is a frequent historical bogeyman, a Dickensian symptom of the illogical cruelty of the past that disappeared with enlightened capitalism. As imprisoning someone who could not afford to pay their debts, keeping them away from work and family, seems futile, it is assumed that creditors were acting out of petty revenge.

But debtors’ prisons were a feature of most of English history from 1283, and though their power was curbed in 1869, debtors were still being imprisoned in the 1920s. The reason they persisted, as my research shows, is that, for creditors, they worked well.

The majority of imprisoned debtors in the eighteenth century were released relatively quickly having paid their creditors. This revelation is timely when events in America demonstrate how easily these prisons can return.

As today, most eighteenth-century purchases were made on credit, owing to the delay in wages, the limited supply of coinage, and cultural preferences for buying goods on credit. But credit was based on a range of factors, including personal reputation, social rank and moral status. Informal oral contracts could frequently be made with little sense of an individual’s actual financial status, particularly if they were a gentleman or aristocrat. As contracts were not based on goods and court processes were slow, it was difficult to seize property to recover debts when creditors required money.

Creditors in this period were able to imprison debtors without trial until they paid what they owed or died. The registers of a London debtors’ prison, the Wood Street Compter (1741–1815), reveal that creditors had good reason to do so. Most of the 10,156 debtors contained in the registers left prison relatively quickly: 91% were released in under a year, while almost a third were released in less than 100 days.

In addition, 84% were ‘discharged’ by their creditors, indicating that either the prisoner had paid their debts or a new contract had been agreed. Imprisonment forced debtors to find a way to pay or at least to renegotiate with creditors.

Prisoners were not the poor, but usually middle-class people in small amounts of debt. One of the largest groups was shopkeepers (about 20% of prisoners), though male and female prisoners came from across society, with gentlemen, cheesemongers, lawyers, wigmakers and professors rubbing shoulders.

Most used their time to coordinate the selling of goods to raise money, or borrowed yet more from family and friends. Many others called in their own debts by having their debtors imprisoned as well.

As prisons were relatively open, some debtors worked off their debts. John Grano, a trumpeter who worked for Handel, imprisoned in the 1720s, taught music lessons from his cell. Others sold liquor or food to fellow prisoners or continued as best they could at their trade in the prison yard. Those with a literary mind, such as Daniel Defoe, wrote their way out.

Though credit works on different terms today, that coercive imprisonment is effective at securing repayment remains true. There have been a number of US states operating what amount to debtors’ prisons in recent years where the poor, fined by the state usually for traffic violations, are held until they pay what they owe.

Attorney General Jeff Sessions even retracted an Obama-era memo in December aimed at abolishing the practice. While eighteenth-century prisons worked effectively for creditors, they could ruin the lives of debtors, who were forced to sell anything they could to pay their dues and escape the unsanitary hole in which they were kept without trial. My research shows that the assumption that these prisons did not work, and therefore will not return, is false.

 

Is committing to a free trade policy enough? Evidence from colonial Africa

by Federico Tadei (Department of Economic History, University of Barcelona)

 

Africa1898
French map of Africa from 1898, showing colonial claims. Originally published as “Carte Generale de l’Afrique’. Available at Wikimedia Commons.

Recent Brexit negotiations have led to intense debate on the type of trade agreements that should be put in place between the UK and the European Union. According to Policy Exchange’s February 2018 report, the UK should unilaterally commit to free trade. The assumption underlying this argument is that the removal of tariffs has the potential to reduce consumer prices due to greater competition and lower protection of domestic industries, which would promote innovation and increase productivity.

But the removal of tariffs and protectionist policies might not be sufficient to implement free trade fully. My research on trade from colonial Africa suggests that a legal commitment to free trade is not nearly enough.

Specifically, it appears that during the colonial period the British formally relied on free trade, encouraging competition between trading firms, while the French used their political power to establish trade monopsonies and acquire African goods at prices lower than those in world markets.

Yet the situation on the ground might have been quite different than what formal policies envisaged. Did the British colonies actually enjoy free trade? Did producers in Africa who lived under British rule receive higher prices than those living under the French?

To answer these questions, I measure the degree of competitiveness of trade under the two colonial powers by computing profit margins for trading companies that bought goods from the African coast and resold them in Europe.

To do so, I use data on African export prices and European import prices for a variety of agricultural commodities exported from British and French colonies between 1898 and 1939, together with estimated trade costs from Africa to Europe. The rationale behind this methodology is simple: if the colonisers relied on free trade, the profit margins of trading companies should be close to zero.
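The test just described can be sketched in a few lines. The prices and costs below are invented and the function name is ours; the article’s actual estimates treat freight, insurance and other trade costs in much more detail.

```python
# Minimal sketch of the profit-margin test: under free trade, competition
# between trading firms should drive margins towards zero; a persistently
# large margin signals monopsony power on the African coast.

def profit_margin(european_import_price, african_export_price, trade_cost):
    """Trading-company margin as a share of the European import price."""
    return (european_import_price - african_export_price
            - trade_cost) / european_import_price

# Free trade: the import price just covers the export price plus costs.
competitive = profit_margin(european_import_price=100.0,
                            african_export_price=92.0,
                            trade_cost=8.0)

# Monopsony: firms buy cheaply on the African coast and keep the difference.
monopsony = profit_margin(european_import_price=100.0,
                          african_export_price=77.0,
                          trade_cost=8.0)

print(competitive)  # 0.0
print(monopsony)    # 0.15
```

The second, hypothetical case corresponds to the 10–15% margins the article reports for West African colonies.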

Tadei Figures

On average, profit margins in the British colonies were lower than in the French colonies, suggesting a higher reliance on free trade in the British empire (see Figure 1). But if we compare the two colonial powers within the same region, West or East Africa (Figures 2 and 3), it appears that the actual extent of free trade depended more on conditions in the colonies than on the formal policies of the colonial power.

Profit margins were statistically indistinguishable from zero in British East Africa, suggesting free trade, but they were large (10-15%) in West African colonies under both the French and the British, suggesting the presence of monopsony power.

These results suggest that, in spite of formal policies, other factors were at play in determining the actual implementation of free trade in Africa. In the western colonies, a longer history of trade and a higher level of commercialisation reduced the operational costs of trading companies. At the same time, most agricultural production was based on small African farmers, with little political power or ability to oppose de facto trade monopsonies.

Conversely, in East Africa, production was often controlled by European settlers who had a much larger political influence over the metropolitan government, increasing the cost of establishing trade monopsonies and allowing better implementation of colonial free trade policy.

Overall, despite formal policies, the ability of trading firms in West Africa to eliminate competition was costly in terms of economic growth. African producers received lower prices than they would have in a competitive market and consumers paid more for imported goods. Formal commitment to free trade policies might not be sufficient to reap the full benefits of free trade.

Financial neoliberalism: British insurance and the revolution in the management of uncertainty

by Thomas Gould (University of Bristol)

 

Margaret_Thatcher_visiting_Salford
Margaret Thatcher on a visit to Salford, 1982. Available at Wikimedia Commons.

What has been the relationship between the growth of finance and ‘neoliberalism’ in post-war Britain? My research shows that the drive towards popular capitalism and a property-owning democracy was not directly created by Thatcherism, which qualifies popular narratives about the impact of government reforms such as deregulation and privatisation.

Instead, away from the battlegrounds of mainstream economics and politics, a silent ‘neoliberal revolution’ developed deep within the financial industry before Thatcher came to power.

For example, between 1967 and 1980, the number of personalised life insurance policies directly linked to asset values increased from 81,000 to 3.5 million. This development marked a sea change in the way that society managed financial risk and uncertainty.

It had little to do with mainstream politics, and it was so powerful that by 1990 there were over 12 million of these unit-linked policies in force, showing that Thatcherite reforms merely accelerated the pace of change for developments that were already underway.

A cornerstone of traditional insurance, the objective of collective security, was superseded by the interests of individual fairness. The burden of financial risk was increasingly allocated to individual policyholders and the management of financial risk to the markets.

Together, unitised insurance policies and mathematical finance re-engineered the landscape of British capitalism by undermining the scientific foundations and appeal of traditional forms of protective insurance, such as industrial life insurance policies, annuities and defined benefit pension schemes.

Vast concentrations of personal wealth accumulated in institutional funds. The conduct and behaviour of firms became more diverse and complex as the science behind financial risk management was revolutionised. There were four key contours of change:

  • First, collective provision was increasingly superseded by considerations of individual equity.
  • Second, financial analysis and treatment of assets assumed greater importance than the management of liabilities.
  • Third, insurance and protection were increasingly displaced by savings and investment media.
  • Finally, traditional actuarial science was gradually supplanted by a paradigm of financial economics.

 

Financial neoliberalism – the increased role and responsibility of financial markets and financial theories in the provision of economic security – redesigned the management of uncertainty and risk in insurance by changing the relationships between experts, individuals and the regulator within an increasingly sophisticated and competitive financial environment.

Risk-taking financial behaviour became a necessity. The presumption that financial uncertainty could, and should, be managed through financial markets gained salience. The financial world, and its future, was increasingly understood through the lenses of advanced computing, mathematics and statistics.

Financial neoliberalism dramatically changed the ways in which the financial industry and government engaged with uncertainty; and it influenced the increasingly risk-based techniques, and forms of knowledge, through which they sought to manage and control that future.

Political philosophy may be thought to have represented the main attack on collectivism and the welfare state. Yet, removed from mainstream political discourse, the journals of the actuarial profession show how financial economics gradually displaced actuarial science as the principal scientific paradigm for managing financial uncertainty.

Furthermore, data compiled from the Association of British Insurers show that the attack on the principles of collectivism was already underway in the late 1960s and early 1970s, as individuals increasingly acquired these personalised insurance policies.

Thus, the practice of unitising the management of risk gradually merged with a new paradigm of financial economics that scientifically legitimised investment and savings rather than mutual protection and risk pooling. In this sense, many of the Thatcher government’s reforms geared towards promoting popular capitalism and property ownership simply pushed at an open door.

The Ruhr’s mining industry and its power struggle with the High Authority of the European Coal and Steel Community

by Juliane Czierpka (Ruhr-University Bochum)

 

Ruhr
Ruhr mining. Available at Pixabay.

Since the beginning of the Ruhr area’s industrialisation in the second half of the nineteenth century, the local mining industry has always been a powerful player. Controlling vast amounts of coal, the Ruhr’s mining companies held a huge share of the European coal market and were usually able to influence political decisions made by German governments.

One reason for the power of the Ruhr’s mining industry was of course the importance of the energy sector and the country’s dependence on its coal. But the local mining companies also used to present themselves as a unity, speaking with one voice and – even more importantly – selling their coal collectively.

In other words, the mining companies of the Ruhr had built a huge coal cartel, even though it was not called a cartel or syndicate after 1945; within the Ruhr area, at least, everyone was quite keen on finding new names for their collective sales arrangements.

In the early 1950s, the newly constituted German government was desperately trying to reduce the Allies’ control. While Britain and the United States were willing to give the Germans back parts of their sovereignty and started to loosen the regulations on the production of steel and other goods, the French did not like this approach.

Naturally, after 1945 the French government felt threatened not only by German heavy industry, which was seen as having made the war possible by quickly converting to arms production in support of Hitler and his troops, but also by the German mining industry’s market power, because the energy sector was closely linked to questions of national autonomy and security. Furthermore, the French steel industry depended on specific qualities of coal from the Ruhr area.

The specific combination of interests in Europe in the aftermath of the war – a French government trying to keep control over the German coal and steel sector and a German government that was trying hard to win back at least parts of its sovereignty from the Allies – led to the foundation of the European Coal and Steel Community (ECSC).

The ECSC’s principal goal was to merge the coal and steel markets of Germany, France, Belgium, Luxembourg and the Netherlands, thereby leading to a high degree of economic and political cooperation, and peace between the member states. These words were of course mainly tinsel and glitter, as every member state pursued its own national interests.

The High Authority (HA), the ECSC’s supranational executive institution, is usually seen as a failure by historians and political scientists, because it did not succeed in enforcing the ECSC’s treaty against the member states’ national interests.

My research shows that the hypothesis of a weak HA does not hold generally. Examining the HA’s dispute with the Ruhr’s mining industry over the organisation of its coal sales, I show how the HA managed to break up the traditional structures in the Ruhr area, even though the mining industry fought fiercely for its cartel and was supported by the German government, which had initially sold the mining industry out to secure membership of the ECSC.

My research also sheds light on the relationship between businesses and national governments, and shows how this relationship was changed by the emergence of a new player: the supranational HA. It also shows that there would have been a very early ‘Gerxit’, prevented only by pressure from the Allies, who forced the German government to remain part of the ECSC regardless of domestic protests.

Institutional choice in the governance of the early Atlantic sugar trade: diasporas, markets and courts

by Daniel Strum (University of São Paulo)

This article is published in The Economic History Review, and it is available to EHS members here.

 

Strum Pic
Figure 1. Cartographic chart of the Atlantic Ocean (c. 1600). Source: Biblioteca Nazionale Centrale di Firenze, Florence, Italy. Port. 27.  By kind permission of the Ministero per i Beni e le Attivitá Culturali della Repubblica Italiana.
Reproduction of this image by any means is strictly prohibited.

In the age of sailboats, how could traders be confident that the parties with whom they were considering working on the other side of the ocean would not act opportunistically? Commercial agents overseas spared merchants time and the hazards of travel and allowed them to diversify their investments; but agents might also cheat or renege on or neglect their commitments.

My research on the merchants of Jewish origin plying the sugar trade linking Brazil, Portugal and the Netherlands demonstrates that the same merchants chose different feasible mechanisms (institutions) to curb opportunism in different types of transactions. Its main contribution is to establish a clear pattern linking the attributes of these transactions to those of the mechanisms chosen to enforce them. It also shows how these mechanisms interrelated.

Around 1600, Europe experienced rapidly growing urban populations and dependence on trade for supplies of basic products, while overseas possessions contributed to a surging output of marketable commodities, including sugar. Brazil was turned into the first large-scale plantation economy and became the world’s main sugar producer, with Amsterdam emerging as its main distribution and refining centre. Most of the Brazilian sugar trade was intermediated by merchants in Portugal, and traders of Jewish origin scattered along this route played a prominent role in it. The Brazilian sugar trade required institutions with low costs in agency services and contract enforcement because it was a highly competitive market: its political, legal, and administrative framework raised relatively few obstacles to market entrants, and trade in a semi-luxury commodity required only low start-up costs.

Sources reveal that merchants of Jewish origin mostly engaged individuals of other backgrounds in transactions in which agents had little latitude, performed simple tasks over short periods, and managed small sums (see table 1). Insiders were not excluded from these transactions, but the background of agents was not decisive.

The research shows that these transactions were primarily enforced by an informal mechanism that linked one’s expected income to one’s professional reputation. Bad conduct led to marginalization, while good behaviour opened up further opportunities with the same and other principals. This mechanism functioned among all traders active in these interconnected marketplaces, despite their differing backgrounds. It worked because a standardization of basic mercantile practices produced a shared understanding of how trade should be conducted. At the same time, the structure of the marketplaces, together with patterns of transportation and correspondence, increased the speed, frequency, volume, and diversity of the information flow within and between them. This information system facilitated the detection of both good and bad conduct and a relatively rapid response to news about it.

 

Strum Pic 2
Figure 2. Sugar crate being weighted at the Palace Square in Lisbon. Source: Dirk Stoop – Terreiro do Paço no século XVII, 1662. Painting. Museu da Cidade, Lisboa, Portugal. MC.PIN.261.© Museu da Cidade – Câmara Municipal de Lisboa.

The professional reputation mechanism worked better on transactions involving small sums and fewer, simpler, and shorter tasks. Misconduct in these tasks was easier to detect and expose amid an extensive and heterogeneous network; and if the agent cheated, the small sums assigned were not enough to live on after forsaking trade.

 

Table 1. Backgrounds of agents in complex and simple arrangements

Type of transaction | Outsiders | Probable outsiders | Insiders | Probable insiders | Relatives
Complex             |      2.6% |               4.9% |    69.9% |              2.1% |     20.6%
Simple              |     20.0% |              70.0% |       0% |             10.0% |        0%

Source: original article in the Economic History Review.

 

On the other hand, merchants of Jewish origin preferred to engage members of their diaspora in complex, larger, and longer transactions (see table 1). A reputation mechanism within the diaspora was more effective in governing transactions that were difficult to follow. Although enforcement within the diaspora benefitted from the general information system, the diaspora’s social structure generated more information more rapidly about the conduct of its members. In each centre, insiders knew each other, and marriages and socialization within the group prevailed. Insiders usually had personal acquaintances, and often relatives, in other centres as well. They were conscious of their common history and fragile status. This social structure also provided greater economic and social incentives for honesty and diligence than the professional mechanism, making the internal mechanism preferable in transactions involving larger sums and wider latitude.

Finally, the research shows that the legal system was able to impose sanctions across wide distances and political units. Yet owing to courts’ slowness and costliness, merchants resorted to litigation only after nonjudicial mechanisms failed. Furthermore, courts could not punish inattention that did not breach legal, customary, or contractual specifications, nor could courts reward accomplishment.

Litigation supplemented the professional mechanism because the latter’s incentives were not homogeneous across all marketplaces and diasporas. Courts also reinforced the diaspora mechanism by limiting the future income an agent could expect to gain from misappropriating large sums from one or many principals. Finally, the professional mechanism supplemented the diaspora mechanism by limiting alternative agency relations with outsiders for insiders who had engaged in misconduct.

Because merchants were capable of matching transactions with the most appropriate governing mechanisms, they were able to diversify their transactions, expand the market for agents, better allocate agents to tasks, and stimulate competition among them. The resulting decrease in agency costs was critical in a market as competitive as the sugar trade. Institutional choice thus supported and reinforced—rather than caused—expansion of exchange.

War, shortage and Thailand’s industrialisation, 1932-57

by Panarat Anamwathana (University of Oxford)

This study was awarded the prize for the best conference paper by a new researcher at the Economic History Society’s 2019 annual conference in Belfast.

 

1950S-BANGKOK-STREET-SCENE
1954 Bangkok street. Available at Wikimedia Commons.

Thailand fell under Japanese occupation during the Second World War. The small agrarian country relied on imports from the West for consumer and industrial goods, and suffered shortages of everything from clothes to machinery between 1941 and 1945.

After the Japanese surrender, the Thai government learned from its trauma, adapted its economic approach and began domestic production of its own consumer goods – although at the cost of inefficiencies and rent-seeking.

Economic historians have expressed different perspectives on Thailand’s immediate post-war economic development and state-led industrialisation programme. Some, such as Hewison (1989) and Ingram (1971), mention the expansion of manufacturing capacity, despite government inefficiencies. Others, such as Suehiro (1989) and Phongpaichit and Baker (1995), are more critical of state involvement, saying that rent-seeking and corruption hindered any real progress.

Anyone familiar with state-operated enterprises might be suspicious of Thailand’s state-led industrialisation approach. To protect many of the country’s new industries, import tariffs and quotas were introduced. At the same time, a new class of capitalists emerged from an alliance of politicians and entrepreneurs. These people benefitted from favourable concessions, state-sponsored monopolies or being granted lucrative import licences. The question is: did anything come out of all this?

Since Thailand had no industrial census for the period, it is difficult to measure changes in the kingdom’s manufacturing capacity from before the war to after the war. To address this challenge, I have gathered statistical data on three industries: sugar, textiles and gunny bags (which are essential for transporting rice, Thailand’s most important export crop). These goods were three of Thailand’s most important pre-war imports, key to the wellbeing of the population and rationed during the war.

My data come from a variety of primary sources from the National Archives of Thailand, the National Archives at Kew, and the National Archives and Records Administration in Washington, DC. I also read previously unused qualitative sources, such as government reports, correspondence and old newspapers to build a more complete picture of wartime Thailand.

I find that Thailand was able to produce more of its own sugar, textiles and gunny bags after 1945, and continued to substitute for imports as the decade progressed. This was achievable in part because the shortage of goods during the war reinforced the drive to diversify the economy. Government systems and infrastructure established under the Japanese occupation, but hindered by wartime circumstances, could then draw on imported machinery and international credit.

Finally, machines and facilities abandoned by the Japanese army could be used by the post-war Thai government and their capitalist allies. I also find that per capita consumption either plateaued or increased during this period, suggesting that Thais were not deprived of these products because of the government’s industrialisation programme.

Corruption and rent-seeking, however, were common, as is easily the case in state-led industrialisation programmes with little transparency, like Thailand’s.

For example, the Sugar Organisation, the most important state-operated enterprise in this industry, played a large role in transporting sugar from both private and government mills to shops. Unfortunately, this organisation was thoroughly corrupt. It embezzled, cheated farmers, sold sugar to fake agents and distributors, and was extremely lax in inspections and regulation. Although the state did revoke some of the organisation’s privileges, it continued to operate throughout all the scandals.

My study not only contributes to the historiography of Thai economic development, but also engages with studies of various models of economic growth, the efficiency and costs of state-operated enterprises, and the legacies of the Second World War in occupied territories.

 

 

Further reading

Hewison, Kevin (1989) Bankers and Bureaucrats: Capital and the Role of the State in Thailand, New Haven.

Ingram, James C. (1971) Economic Change in Thailand, 1850-1970, Stanford University Press.

Phongpaichit, Pasuk, and Chris Baker (1995) Thailand: Economy and Politics, Oxford University Press.

Suehiro, Akira (1989) Capital Accumulation in Thailand, Tokyo.