Military casualties and exchange rates during the First World War: did the Eastern Front matter?

by Pablo Duarte and Andreas Hoffmann (Leipzig University)

An article expanding on this blog has been published in the Economic History Review.


Russian troops going to the front. Available at Wikimedia Commons.

In 1918 the Entente forces defeated the Central Powers on the Western Front. The First World War, with countless brutal battles and over 40 million casualties, had finally ended.

During the war, all governments substantially increased their national debt and promised to hand the bill to the losers. They also promised to return to the pre-war gold parity rather than inflate and devalue their currencies. Since the outcome of the war was expected to severely affect currency values, particularly for the losers, foreign exchange traders had an incentive to follow war events closely and update their beliefs about who was more likely to win.

According to Ferguson’s (1998) The Pity of War, the lost morale of the German troops — reflected in higher numbers of prisoners of war and of soldiers surrendering on the Western Front — was the ultimate reason for their defeat. Complementing this argument, Hall (2004) provided evidence that military casualties on the Western Front — the key front to finally winning the war — can help explain contemporary fluctuations in the exchange rates between belligerents’ currencies.

Although the war was finally decided in the West, historians have emphasized the relevance of its global dimension and the importance of the Eastern Front in understanding its complex evolution. Imagine it is 1914. Russia has just entered the war (earlier than expected), upsetting the plans of the Central Powers to circumvent a two-front war. Events on one front affected those on the other. But did contemporary traders, like historians today, consider the Eastern Front to be of relevance?

In our forthcoming article, we provide the first empirical insights into the relative importance of the Eastern Front during the First World War from the perspective of contemporary foreign exchange traders. Building on Hall’s study, the article indicates when and to what extent military casualties from both the Western and Eastern Fronts were linked to exchange rate fluctuations during the First World War, and suggests that traders used this information as an indicator of which side was more likely to win.

To analyze the link between exchange rates and casualties we have introduced a novel dataset on Eastern Front casualties, drawn from the German Reichsarchiv and the Austrian War Office. Merging our dataset with that for the Western Front employed by Hall (2004), we have been able to construct a rich dataset on war casualties for France, Britain, and Russia as well as Germany and Austria-Hungary, for both Fronts.


Figure 1. 15,000 Russian prisoners of war in Germany. Available at Wikimedia Commons.


Using the digital archives of the Neue Zürcher Zeitung (a Swiss newspaper), we have further documented information on casualties, specifically the number of prisoners of war (Figure 1). The following quote from December 1914 makes this finding explicit:

Berlin, Dec. 31 [1914] (Wolff. Authorized) The overall number of prisoners of war (no civilian prisoners) in Germany at the end of the year is 8,138 officers and 577,875 men. This number does not include a portion of those captured on the run in Russian Poland nor any of those still in transit. The overall number is comprised of the following: French 3,159 officers and 215,905 men, including 7 generals; Russians 3,575 officers and 306,294 men, including 3 generals; British 492 officers and 18,824 men (Neue Zürcher Zeitung, 1 Jan. 1915, p. A1.).


In summary, our forthcoming article provides evidence that foreign exchange traders recognized the global dimension of the war, spanning both the Eastern and Western Fronts. Casualties on both Fronts were associated with exchange rate fluctuations. The number of soldiers captured on the Eastern Front affected exchange rates in the early war years. Foreign exchange traders gave additional weight to the Eastern Front during the first year of the war because Russia’s attack came as a surprise and the number of casualties was substantially higher than on the Western Front.

From autumn 1916 onwards, even though Russia had not yet left the war, our findings indicate that traders believed that the key to winning the war lay in the west. The Brusilov offensive, a massive Russian attack (June to September 1916), had proven that the Central Powers would face substantial opposition in the East. Moreover, the Allied forces on the Western Front had started to coordinate joint offensives.


To contact the authors: Twitter @economusiker and @Andhoflei



Ferguson, N., The Pity of War (Basic Books, 1998).

Hall, G. J., ‘Exchange rates and casualties during the First World War’, Journal of Monetary Economics, 51 (2004), pp. 1711–42.




Plague and long-term development

by Guido Alfani (Bocconi University, Dondena Centre and IGIER)


The full paper has been published in The Economic History Review and is available here.

A YouTube video accompanies this work and can be found here.


How did preindustrial economies react to extreme mortality crises caused by severe epidemics of plague? Were health shocks of this kind able to shape long-term development patterns? While past research focused on the Black Death that affected Europe during 1347-52 (Álvarez Nogal and Prados de la Escosura 2013; Clark 2007; Voigtländer and Voth 2013), in a forthcoming article with Marco Percoco we analyse the long-term consequences of what was by far the worst mortality crisis affecting Italy during the Early Modern period: the 1629-30 plague, which killed an estimated 30-35% of the northern Italian population — about two million victims.


Figure 1. Luigi Pellegrini Scaramuccia (1670), Federico Borromeo visits the plague ward during the 1630 plague. Source: Milan, Biblioteca Ambrosiana.


This episode is significant in Italian history, and more generally, for our understanding of the Little Divergence between the North and South of Europe. It has recently been hypothesized that the 1630 plague was the source of Italy’s relative decline during the seventeenth century (Alfani 2013). However, this hypothesis lacked solid empirical evidence. To resolve this question, we take a different approach from previous studies, and demonstrate that plague lowered the trajectory of development of Italian cities. We argue that this was mostly due to a productivity shock caused by the plague, but we also explore other contributing factors. Consequently, we provide support for the view that the economic consequences of severe demographic shocks need to be understood and studied on a case-by-case basis, as the historical context in which they occurred can lead to very different outcomes (Alfani and Murphy 2017).

After assembling a new database of mortality rates in a sample of 56 cities, we estimate a model of population growth allowing for different regimes of growth. We build on the seminal papers by Davis and Weinstein (2002) and Brakman et al. (2004), who based their analysis on a new economic geography framework in which a relative city size growth model is estimated to determine whether a shock has temporary or persistent effects. We find that cities affected by the 1629-30 plague experienced persistent, long-term effects (i.e., up to 1800) on their pattern of relative population growth.
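The intuition of this framework can be sketched with simulated data (a toy illustration only, not the authors' actual estimation): regress cities' post-shock growth on the size of the shock. A coefficient near minus one implies full recovery (temporary effect); a coefficient near zero implies persistence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 56  # number of cities, matching the article's sample size

# Simulated shock to log relative city size (e.g. plague mortality)
shock = -rng.uniform(0.10, 0.50, n)  # population losses of 10-50%

# Under a persistent regime cities do not rebound, so post-shock
# growth is unrelated to the shock (slope ~ 0); under a temporary
# regime the slope would be ~ -1 (full recovery).
post_growth = 0.02 + 0.0 * shock + rng.normal(0.0, 0.02, n)

# OLS of post-shock growth on the shock (with a constant)
X = np.column_stack([np.ones(n), shock])
beta, *_ = np.linalg.lstsq(X, post_growth, rcond=None)
print(f"recovery coefficient: {beta[1]:.3f}")  # near 0 -> persistent
```

Run on data simulated from a persistent regime, the recovery coefficient is statistically indistinguishable from zero, which is the pattern the article reports for plague-affected Italian cities.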


Figure 2. Giacomo Borlone de Buschis (attributed), Triumph of Death (1485), fresco. Source: Oratorio dei Disciplini, Clusone (Italy).


We complete our analysis by estimating the absolute impact of the epidemic. We find that in northern Italian regions the plague caused a lasting decline in both the size and growth rate of urban populations. The lasting damage done to the urban population is shown in Figure 3. For urbanization rates it will suffice to notice that across the North of Italy, by 1700 (70 years after the 1630 plague), they were still more than 20 per cent lower than in the decades preceding the catastrophe (16.1 per cent in 1700 versus an estimated 20.4 per cent in 1600, for cities >5,000). Overall, these findings suggest that severe plagues may contribute to the decline of economic regions or whole countries. Our conclusions are strengthened by showing that while there is clear evidence of the negative consequences of the 1630 plague, there is hardly any evidence for a positive effect (Pamuk 2007). We hypothesize that the potential positive consequences of the 1630 plague were entirely eroded by a negative productivity shock.


Figure 3. Size of the urban population in Piedmont, Lombardy, and Veneto (1620-1700). Source: see original article.


Demonstrating that the plague had a persistent negative effect on many key Italian urban economies, we provide support for the hypothesis that the origins of the relative economic decline of northern Italy are to be found in particularly unfavorable epidemiological conditions. It was the context in which an epidemic occurred that increased its ability to affect the economy, not the plague itself. Indeed, the 1630 plague struck the main states of the Italian Peninsula at the worst possible moment, when their manufacturing sectors were facing increasing competition from northern European producers. This explanation offers a different interpretation of the Little Divergence from that found in recent literature.


To contact the author:



Alfani, G., ‘Plague in seventeenth century Europe and the decline of Italy: an epidemiological hypothesis’, European Review of Economic History, 17, 4 (2013), pp. 408-430.

Alfani, G. and Murphy, T., ‘Plague and Lethal Epidemics in the Pre-Industrial World’, Journal of Economic History, 77, 1 (2017), pp. 314-343.

Alfani, G. and Percoco, M., ‘Plague and long-term development: the lasting effects of the 1629-30 epidemic on the Italian cities’, The Economic History Review, forthcoming.

Álvarez Nogal, C. and Prados de la Escosura,L., ‘The Rise and Fall of Spain (1270-1850)’, Economic History Review, 66, 1 (2013), pp. 1–37.

Brakman, S., Garretsen H., Schramm M. ‘The Strategic Bombing of German Cities during World War II and its Impact on City Growth’, Journal of Economic Geography, 4 (2004), pp. 201-218.

Clark, G., A Farewell to Alms (Princeton, 2007).

Davis, D.R. and Weinstein, D.E. ‘Bones, Bombs, and Break Points: The Geography of Economic Activity’, American Economic Review, 92, 5 (2002), pp. 1269-1289.

Pamuk, S., ‘The Black Death and the origins of the ‘Great Divergence’ across Europe, 1300-1600’, European Review of Economic History, 11 (2007), pp. 289-317.

Voigtländer, N. and H.J. Voth, ‘The Three Horsemen of Riches: Plague, War, and Urbanization in Early Modern Europe’, Review of Economic Studies 80, 2 (2013), pp. 774–811.

The Political Economy of the Army in a Nonconsolidated Democracy: Spain (1931-1939)

by Alvaro La Parra-Perez (Weber State University)

The full article is published by the Economic History Review and is available for Early View at this link 

The Spanish Civil War (1936-9; henceforth, SCW) ended the Second Spanish Republic (1931-9), which is often considered Spain’s first democracy. Despite the hopes raised by the Republic – which enfranchised women, held free and fair elections, separated Church and state, and drafted an ambitious agrarian reform – its end was not very different from that of many previous Spanish regimes: a military coup started the SCW, which ultimately resulted in a dictatorship led by one of the rebel officers, Francisco Franco (1939-75).

In my article “For a Fistful of Pesetas? The Political Economy of the Army in a Non-Consolidated Democracy: The Second Spanish Republic and Civil War (1931-9)”, I open the “military black box” to understand the motivations driving officers’ behavior. In particular, the article explores how the redistribution of economic and professional rents during the Republic influenced officers’ likelihood of rebelling or remaining loyal to the republican government in 1936. By looking at (military) intra-elite conflict, I depart from the traditional assumption of an “elite single agent” that characterizes the neoclassical theory of the state.

The article uses a new data set of almost 12,000 officers active in 1936 who belonged to the corps most directly involved in combat. Using the Spanish military yearbooks between 1931 and 1936, I traced officers’ individual professional trajectories and assessed the impact that republican military reforms in 1931-6 had on their careers. The side – loyal or rebel – chosen by each officer comes from Carlos Engel.

Figure 1. Extract from the 1936 military yearbook. Source: 1936 Military Yearbook published by the Spanish Ministry of War.

The main military reforms during the Republic took place under Manuel Azaña’s term as Minister of War (1931-3). Azaña was also the leader of the leftist coalition that ruled the Republic when some officers rebelled and the SCW began. Azaña’s reforms favored the professional and economic independence of the Air Force and harmed many officers’ careers when some promotions passed during Primo de Rivera’s dictatorship (1923-30) were revised and cancelled. The system of military promotions was also revised and rendered more impersonal and meritocratic. Some historians also argue that the elimination of the highest rank in the army (Lieutenant General) worsened the professional prospects of many officers because vacancies for promotions became scarcer.

The results suggest that, at the margin, economic and professional considerations had a significant influence on officers’ choice of side during the SCW. The figure below shows the probit average marginal effects for the likelihood of rebelling among officers in republican-controlled areas. The main variables of interest are the ones under the “Rents” header. In general, those individuals or factions that improved their economic rents under Azaña’s reforms were less likely to rebel. For example, aviators were almost 20 percentage points less likely to rebel than the reference corps (artillerymen) and those officers with worse prospects after the rank of lieutenant general was eliminated were more likely to join the rebel ranks. Also, officers with faster careers (greater “change of position”) in the months before the SCW were less likely to rebel. The results also suggest that officers had a high discount rate for changes in their rank or position in the scale. Pre-1935 promotions are not significantly related to officers’ side during the SCW. Officers negatively affected by the revision of promotions in 1931/3 were more likely to rebel only at the 10 percent significance level (p-value=0.089).
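To see what an average marginal effect in a probit is, note that it equals the coefficient scaled by the average of the standard normal density evaluated at the linear index. The sketch below uses made-up coefficients and covariate names purely for illustration; it is not the article's specification.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical covariates: a corps dummy and a continuous measure of
# recent career progression (names are illustrative only)
aviator = rng.integers(0, 2, n).astype(float)
career = rng.normal(0.0, 1.0, n)
X = np.column_stack([np.ones(n), aviator, career])

# Hypothetical probit coefficients: constant, aviator, career
beta = np.array([0.3, -0.6, -0.2])

# Average marginal effect: AME_j = beta_j * mean(phi(X @ beta)),
# where phi is the standard normal pdf
index = X @ beta
phi = np.exp(-index**2 / 2) / np.sqrt(2 * np.pi)
ame = beta * phi.mean()
print(f"AME (aviator dummy): {ame[1]:.3f}")  # negative: less likely to rebel
```

With these invented coefficients the aviator dummy's marginal effect is around minus 20 percentage points, which is the scale on which effects such as the aviators' lower rebellion propensity are reported in the article.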

Figure 2. Probit average marginal effects for officers in republican-controlled areas with 95-percent confidence intervals. Source: see original article.

To be clear, economic and professional interests were not the only elements explaining officers’ behavior. The article also finds evidence for the significance of other social and ideological factors. Take the case of hierarchical influences. Subordinates’ likelihood of rebelling in a given unit increased if their leader rebelled. Also, officers were less likely to rebel in those areas where the leftist parties that ruled in July 1936 had obtained better results in the elections held in February. Finally, members of the Assault Guard – a unit for which proven loyalty to the Republic was required – were more likely to remain loyal to the republican government.

The results are hardly surprising for an economist: people respond to incentives, and officers – being people – were influenced at the margin by the impact that Azaña’s reforms had on their careers. This mechanism adds to the ideological explanations that have often dominated narratives of the SCW, which tend to depict the army – more or less explicitly – as a monolithic agent aligned with conservative elites. As North, Wallis, and Weingast showed for other developing societies, intra-elite conflict and the redistribution of rents were an important factor in the dynamics (and ultimate fall) of the dominant coalition in Spain’s first democracy.


To contact the author:

Twitter: @AlvaroLaParra

Professional website:

Can school centralization foster human capital accumulation? A quasi-experiment from early twentieth-century Italy

By Gabriele Cappelli (University of Siena) and Michelangelo Vasta (University of Siena)

The article is available on Early View at the Economic History Review’s link here


The issue of school reform is a key element of institutional change across countries. In developing economies the focus is rapidly shifting from increasing enrolments to improving educational outputs (literacy and skills) and outcomes (wages and productivity). In advanced economies, policy-makers focus on generating skills from educational inputs despite limited resources. This is unsurprising, because human capital formation is widely acknowledged as one of the main drivers of economic growth.

Regarding education policy, reforms have long focused on the way school systems are organized, particularly their management and funding by local versus central government. On the one hand, local policy makers are more aware of the needs of local communities, which is supposed to improve schooling. On the other hand, school preferences might vary considerably between the central government and the local ruling elites, hampering the diffusion of education. Despite its importance, there is little historical research on this topic.

In this paper, we offer fresh evidence using a quasi-experiment that aims to explore dramatic changes in Italy’s educational institutions at the beginning of the 20th century, i.e. the 1911 Daneo-Credaro Reform. Due to this legislation, most municipalities moved from a decentralized school system, which had been based on the 1859 Casati Law, to direct state management and funding, while other municipalities, mainly provincial and district capitals, retained their autonomy, thus forming two distinct groups (Figure 1).

The Reform design allows us to compare these two groups through a quasi-experiment based on an innovative technique, namely Propensity Score Matching (henceforth PSM). PSM tackles an issue with the Reform that we study, namely that the assignment into treatment (centralization) of the municipalities is not random: the municipalities that retained school autonomy were those characterized by high literacy. By contrast, the poorest and least literate municipalities were more likely to end up under state control, implying that a naive analysis of the Daneo-Credaro Reform as an experiment will tend to overestimate the impact of centralization. PSM addresses this by ‘randomizing’ the selection into treatment: a statistical model is used to estimate the probability of being selected into centralization (the propensity score) for each municipality; then, an algorithm matches municipalities in the treatment group with municipalities in the control group that have an equal (or very similar) propensity score – meaning that the only remaining difference is whether they are treated or not. To perform PSM, we first construct a novel database at the municipal level (a large sample of 1,000+ comuni). Secondly, we fill a gap in the historiography by providing an in-depth discussion of the way that the Reform worked, which has so far been neglected.
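The matching logic just described can be illustrated with a small simulation (a sketch only: variable names and the data-generating process are invented, and in practice the propensity score is itself estimated, e.g. with a logit on observed pre-treatment characteristics):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Simulated municipalities: low pre-reform literacy makes state
# control (treatment) more likely, mimicking non-random assignment
literacy_1911 = rng.uniform(20.0, 90.0, n)
p_treat = 1.0 / (1.0 + np.exp(0.05 * (literacy_1911 - 55.0)))
treated = rng.random(n) < p_treat

# Literacy growth depends on initial literacy plus a true treatment
# effect of +0.43 percentage points per year
growth = 0.01 * literacy_1911 + 0.43 * treated + rng.normal(0.0, 0.5, n)

# Step 1: propensity scores (known here by construction; in practice
# they are estimated with a statistical model)
pscore = p_treat

# Step 2: match each treated unit to the control with the nearest
# propensity score, and average the outcome differences
controls = np.flatnonzero(~treated)
diffs = []
for i in np.flatnonzero(treated):
    j = controls[np.argmin(np.abs(pscore[controls] - pscore[i]))]
    diffs.append(growth[i] - growth[j])

att = float(np.mean(diffs))  # average treatment effect on the treated
print(f"estimated effect: {att:.2f} pp per year (true effect: 0.43)")
```

A naive comparison of raw means here would be biased downward, because treated municipalities start from lower literacy; matching on the propensity score recovers an estimate close to the true effect.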

Figure 1 – Municipalities that still retained school autonomy in Italy by 1923. Source: Ministero della Pubblica Istruzione (1923), Relazione sul numero, la distribuzione e il funzionamento delle scuole elementari. Rome. Note: both the grey and black dots represent municipalities that retained school autonomy by 1923, while the others (not shown in the map) had shifted to centralized school management and funding. 

We find that the municipalities that switched to state control were characterized by a 0.43 percentage-point premium on the average annual growth of literacy between 1911 and 1921, compared to those that retained autonomy (Table 1). The estimated coefficient means that two very similar municipalities with equal literacy rates of 60% in 1911 would show a literacy gap of about 3 percentage points by 1921, i.e. 72.07% (school autonomy) vs 75.17% (treated). This difference is similar to the gap between the treatment group and a counterfactual that we estimated in a robustness check based on Italian provinces (Figure 2).
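The compounding behind these figures can be verified in a few lines (a back-of-the-envelope check; the article's exact growth definition may differ slightly):

```python
# Two municipalities start at 60% literacy in 1911; the treated one
# grows 0.43 percentage points per year faster over ten years.
start = 60.0
g_autonomy = (72.07 / start) ** (1 / 10) - 1  # implied annual growth
g_treated = g_autonomy + 0.0043               # +0.43 pp per year

lit_autonomy = start * (1 + g_autonomy) ** 10
lit_treated = start * (1 + g_treated) ** 10
print(f"{lit_autonomy:.2f}% vs {lit_treated:.2f}%")  # 72.07% vs 75.17%
```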

Table 1. Estimated treatment (Daneo-Credaro Reform) effect, 1911-1921.
Figure 2. Literacy rates in the treatment and control groups, 1881-1921, pseudo-DiD. Source: see original article.

Centralization improved the overall functioning of the school system and the efficiency of school funding. First, it reduced the distance between the central government and the city councils by granting more decision-making power to the provincial schooling board under the supervision of the central government. Thus, the control exercised by the Ministry reassured teachers that their salary would be increased, and the government could now guarantee that they would be paid regularly, which was not always the case when the municipalities managed primary schooling. Secondly, additional funding was provided to build new schools. The resultant increase appears to have been very large and its impact was amplified by the full reorganization of the school system. The funds could be directed to where they were most needed. Consequently, we argue, a mere increase in funding without institutional change would have been less effective in increasing literacy rates.

To conclude, the 50-year persistence of decentralized primary schooling hampered the accumulation of human capital and regional convergence in basic education, thus casting a long shadow on the future pace of aggregate and regional economic growth. The centralization of primary education via the Daneo-Credaro Reform in 1911 was a major breakthrough, which fostered the spread of literacy and allowed the country to reduce the human-capital gap with the most advanced economies.


To contact the author: Gabriele Cappelli


Twitter: gabercappe

Landlords and tenants in Britain, 1440-1660

review by James P. Bowen (University of Liverpool)

book edited by Jane Whittle

‘Landlords and tenants in Britain, 1440-1660’ is published by Boydell and Brewer. SAVE 25% when you order direct from the publisher – offer ends on 15th August 2019. See below for details.



This book, the first volume in the Economic History Society’s ‘People, Markets, Goods: Economies and Societies in History’ paperback series, revisits Tawney’s classic work, The Agrarian Problem in the Sixteenth Century, published in 1912. It arises from a conference held to mark the centenary of the book’s publication and includes leading figures in rural and agrarian history showcasing the latest research on issues originally discussed by Tawney. The book is logically structured. Keith Wrightson’s foreword provides personal insight into attitudes amongst Cambridge economic historians who maligned Tawney. The first three chapters offer overviews, beginning with Jane Whittle’s historiographical essay concerning Tawney, providing background to his Agrarian Problem. Christopher Dyer surveys the fifteenth century, given Tawney’s view that demographic changes were key in creating change in fifteenth-century England, providing the conditions for the ‘problem’ of the sixteenth century. Harold Garrett-Goodyear addresses the issues surrounding copyhold tenure and the institutional function of manor courts in promoting lords’ private interests as landowners, and how this was reflected in economic and social change with the emergence of agrarian capitalism, greater social differentiation, and the transition from feudal to modern society.

The remaining chapters are thematic, several of which are detailed local or micro-studies. Briony McDonagh and Heather Falvey explore the enclosure process at a local level. Complementing the rural viewpoint, Andy Wood shows how notions of custom and popular memory were prominent in urban society below the ‘middling sort’, specifically among the weavers of Malmesbury, Wiltshire, a cloth-working town. Whilst there is an apparent lack of evidence for Tawney’s sense of an ‘ideal customary’, Wood suggests this does not undermine Tawney’s view; rather, it reinforces his argument about the centrality of custom in popular political culture, with disputes arising from struggles over customary entitlement and urban identity. Providing a comparative dimension, Julian Goodare searches for a Scottish agrarian problem, pointing out that whilst the two countries had different legal and political systems, similar processes seem to have been at play, suggesting a common economic problem rather than one of law or political structures.

Several chapters address the issue of tenure, Tawney having pointed to the insecurity of leasehold tenure and the increasing commercial landlord policies as being central to the agrarian problems of the sixteenth century. Jean Morrin examines a landlord-tenant dispute on the Durham Cathedral Estate over the abolition of traditional customary tenures, specifically tenant-right. She argues for a more subtle approach to leases in the early modern period given the various forms which they took, presenting a picture of negotiation and compromise, which not only encouraged tenants to improve farms, but also granted them the right to bequeath, sell or mortgage their leases to whomever they chose. Jennifer Holt explores the case of the Hornby Castle Estate in north Lancashire, analyzing the potential income from customary land and quantifying the shares of lords and tenants, demonstrating how manorial tenants benefitted despite the lord’s attempt to raise rents and fines, retaining their tenures on a customary basis.

Chapters by Bill Shannon and Elizabeth Griffiths look at landlord-driven agrarian improvement intended to raise revenue. Christopher Brooks considers the legal and political context, in particular the impact of the Civil Wars and Interregnum, highlighting the complexities which weakened Tawney’s assessment of the mid- and later seventeenth century. He highlights the common law’s engagement with customary tenures by 1640, arguing that greater security afforded to smallholders enabled them to assert their rights more aggressively, with patriarchal and seigniorial landlord-tenant relationships being replaced by economic relations. Legal developments meant common law served the interests of ‘middling’ agricultural society and the gentry, and that by the 1680s land, including copyhold, had been absorbed into the market for both property and credit. Finally, David Ormrod reflects on the significance of Tawney’s work in relation to long-standing theoretical debates regarding the rise of capitalism and the transition from feudalism to capitalism.

Whittle’s short conclusion effectively synthesizes the chapters, showing that debates have progressed since Tawney’s work, not least with regard to newer approaches towards political, social and rural history. Emphasis is placed on the ‘blurred boundaries’ which existed, leading to disputes notably over enclosure and tenure. Developments in England are viewed in a wider western European perspective, with reference to up-to-date research and future questions identified. The chapters form a coherent volume which, as the title suggests, focuses on the changing relationship between landlords and tenants, a well-established trend in agrarian historiography. Moreover, while it is recognized that any notion of a sixteenth-century agrarian revolution has been rejected, it is nevertheless rightly argued that Tawney’s Agrarian Problem ‘remains a crucial reference point’, containing much to ‘inform and inspire the twenty-first-century historian seeking to understand the changes that took place in rural England between 1440 and 1660’ (pp. 17-18).


SAVE 25% when you order direct from the publisher using the offer code B125 online. Offer ends 15th August 2019. Discount applies to print and eBook editions. Alternatively call Boydell’s distributor, Wiley, on 01243 843 291, and quote the same code. Any queries please email


Note: this post appeared as a book review article in the Review. We have obtained the necessary permissions.

Obituary: Professor Robert (Bob) Millward

The Economic History Society is saddened to learn of the recent death of Professor Robert (Bob) Millward. Professor Millward was professor of economics at Salford University before taking the chair in economic history at Manchester University in 1989. Bob was a highly-regarded scholar with diverse interests in economic history and he will be sorely missed. Read an academic appreciation of Robert Millward here

The spread of Hindu-Arabic numerals in the tradition of European practical mathematics

by Raffaele Danna (University of Cambridge)


Comparison between five different styles of writing Arabic numerals. Available at Wikimedia Commons.

0, 1, 2, 3, 4, 5, 6, 7, 8, 9

The ten digits we use to represent numbers are everywhere in our modern world. But they reached widespread diffusion in the ‘west’ only at a relatively late stage. The positional numeral system was central to the development of the scientific revolution, but – contrary to what one might expect – its spread in Europe was driven not just by scientists, but also by practitioners.

How did these numbers reach the almost universal diffusion we see today? What were the causes and broad consequences of their introduction?

As a matter of fact, for a very long time the ‘west’ did not know the numbers we now use every day. People had to rely on Roman numerals and the corresponding reckoning tools (such as counting boards).

Arabic numbers, or more precisely Hindu-Arabic numbers, were invented sometime in fifth-century India. From India they spread westwards, together with the spread of Islam, reaching the Mediterranean around the eighth century.

Europe picked up these numbers from the Arabic civilisation, and that is the reason why we call them ‘Arabic’. But it took a long time before Europeans widely adopted Arabic numbers in their practice. This was due to difficult relationships with Islam, but also to the low levels of literacy and numeracy in Europe at the time, together with a more general cultural backwardness in comparison with the Arabic civilisation.

Starting from the eleventh century, Europe experienced an economic renaissance that reached its peak in the thirteenth century. With the development of international trade, several key financial and organisational innovations were introduced. This is the moment when the first international companies appear, together with the earliest examples of banking and international finance.

This new economic complexity raised the need for a higher level of computing power, especially to solve calculations of interest and exchange rates. It is at this stage that merchant-bankers, who were already literate and numerate, realised that Hindu-Arabic numerals suited their needs better than Roman ones. Arithmetic with Hindu-Arabic numerals became part of the required training for merchant-bankers.

By the late thirteenth century, we see the first examples of practical arithmetic texts published in central Italy, the cradle of early finance and banking. From here, the publication of these manuals slowly spread to the rest of Europe, with a dramatic acceleration in the sixteenth century driven by the introduction of the printing press.

A detailed reconstruction of these traditions, comprising more than 1,280 manuals, makes it possible to study the main characteristics of this spread. It moved from the south to the north of Europe, with late adopters – such as northern Germany and England – taking up such texts only in the second half of the sixteenth century.

The spread of these texts allows us to reconstruct a slow process of transmission of practical mathematics throughout Europe. The use of such knowledge transformed economic practices, together with several other fields, such as visual arts, architecture, shipbuilding, surveying and engineering.

During the seventeenth century, this practical mathematics combined with the academic understanding of astronomy, reaching a new synthesis in the scientific revolution. Following the story of the adoption of Hindu-Arabic numerals allows us to appreciate that the scientific revolution was also indebted to more than three centuries of mathematical experimentation carried out by European practitioners.

Competition and rent-seeking during the slave trade

by Jose Corpuz (University of Warwick)


The Royal African Company of England (RAC) gave gifts and similar payments to African chiefs in return for exclusive services. But African chiefs who could stop or redirect trade from inland could extract payments from the RAC, particularly when competition from other English merchants increased.

There is abundant descriptive evidence of rent-seeking in Africa during the slave trade. My research provides quantitative evidence of rent-seeking and shows that it changed over time. I constructed the database of more than 20,000 payments myself, from handwritten seventeenth-century RAC archives.

My study contributes to the debate about rent-seeking during the slave trade. The ‘fishers-of-men’ view argues that slaves were a common property resource and competition among enslavers would dissipate any rents (Thomas and Bean, 1974). The ‘hunters-of-rent’ view, however, argues that the competition was restricted by barriers to entry, enabling rents (Evans and Richardson, 1995).

My research shows when, where and by how much rent-seeking increased during the slave trade.

Using my dataset, I examine more than 20,000 payments (for example ‘dashey’, a West African term that literally means ‘gift’) that the RAC made to seventeenth-century chiefs in Ghana. The RAC made these payments to African chiefs in return for exclusive trade with caravan merchants from inland. These payments were separate from any price paid for the slaves themselves.

I use an event study to show that the chiefs’ bargaining power increased when the RAC lost its royal privileges after the Glorious Revolution of 1688, a change that opened the trade to competition from other English merchants.

I answer three questions. First: what was the distribution of payments across chiefs?

I find that the distribution of payments to chiefs was unequal. In particular, the highest-ranking head chiefs received the greatest value of payments per capita. These findings provide quantitative evidence that the slave trade was ‘the business of kings, rich men, and prime merchants’ (in other words, elites) and that the distribution of payments among them was unequal (Hopkins, 1973).


Second: what commodities were included and how did this change over time?

Usually, the payments were European cloth, firearms and alcohol. Head chiefs used European cloth to signal authority and prestige, and received most of the European cloth, particularly when their bargaining position improved after 1688.

These findings highlight the importance of payments as a channel through which European merchants supplied goods in response to African demand. Contrary to the thesis of Rodney’s (1988) ‘How Europe Underdeveloped Africa’, European merchants did not dictate the goods they supplied to Africans, and Africans were not passive recipients of these goods.


Third: did payments rise after the Glorious Revolution of 1688, which reduced the RAC’s privileges – such as the power to seize other English merchants’ ships and cargoes (Davies, 1957; Carlos and Kruse, 1996) – and thereby facilitated competition from other English merchants?

Using difference-in-differences (‘diff-in-diff’) estimation, I find that after 1688 the RAC made greater payments to the chiefs whose compliance mattered most for deterring other English merchants from competing with the RAC.

In particular, I find that payments increased the most to chiefs in ‘non-coast caravan routes’ or locations where they could stop or redirect trade flowing from inland. The chiefs demanded an increased share of the RAC’s total revenue. Qualitative evidence from the letters (Law, 2001, 2006, 2010) supports the view that this increase can be explained by the chiefs’ increased bargaining power.
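The logic of the comparison can be sketched in a few lines. All payment figures below are hypothetical, and the article’s actual estimation is of course richer (with controls and a full panel of chiefs); this is only the bare identification idea:

```python
# Difference-in-differences by group means: a toy illustration of the
# identification logic, NOT the article's actual estimation.
# "treated" = chiefs on non-coast caravan routes who could stop or
# redirect inland trade; "control" = other chiefs.
# All payment values are invented for illustration.

def mean(xs):
    return sum(xs) / len(xs)

payments = {
    # (group, period): payments to individual chiefs (hypothetical units)
    ("treated", "pre_1688"):  [10, 12, 11],
    ("treated", "post_1688"): [20, 22, 24],
    ("control", "pre_1688"):  [8, 9, 10],
    ("control", "post_1688"): [11, 12, 13],
}

# Change over time within each group...
d_treated = (mean(payments[("treated", "post_1688")])
             - mean(payments[("treated", "pre_1688")]))
d_control = (mean(payments[("control", "post_1688")])
             - mean(payments[("control", "pre_1688")]))

# ...and the difference of those changes is the DiD estimate: the extra
# post-1688 increase in payments to route-controlling chiefs.
did = d_treated - d_control
print(did)  # 8.0
```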

Overall, the findings are consistent with the hunters-of-rent view of rent-seeking during the slave trade. Some chiefs found themselves in the right place at the right time and took advantage of the situation.



Carlos, AM, and J Brown-Kruse (1996) ‘The decline of the Royal African Company: Fringe firms and the role of the charter’, Economic History Review 49(2): 291-313.

Davies, KG (1957) The Royal African Company, Longmans, Green and Co. Ltd.

Evans, EW, and D Richardson (1995) ‘Hunting for rents: The economics of slaving in pre-colonial Africa’, Economic History Review 48(4): 665-86.

Hopkins, AG (1973) An Economic History of West Africa, Routledge.

Law, R (2001, 2006, 2010) The English in West Africa, 1685-1698: The local correspondence of the Royal African Company of England, 1681-1699 (Vols. 1-3), Oxford University Press.

Rodney, W (1988) How Europe Underdeveloped Africa, Bogle-L’Ouverture Publications.

Thomas, RP, and RN Bean (1974) ‘The fishers of men: The profits of the slave trade’, Journal of Economic History 34(4): 885-914.

Inequality dynamics in turbulent times

by María Gómez León (Instituto Figuerola/Universidad Carlos III, Madrid) and Herman J. de Jong (University of Groningen)


The Home Front in Britain during the Second World War.

Recent influential studies on the historical evolution of inequality and its causes (Milanovic 2016; Piketty 2014) have attracted new interest in the topic. While the decline is attributed to different factors, there is broad consensus that inequality fell in western Europe during the twentieth century up to the 1980s—a phenomenon commonly referred to as the ‘great levelling’ or ‘egalitarian revolution’. Yet we do not know how differently this deceleration evolved across countries. Turbulent episodes during the first half of the twentieth century—including two world wars, the Great Depression and the rise of radical parties—suggest that, at least in the short run, inequality may have followed very different patterns across European nations. However, empirical evidence is scarce, owing to the lack of data on income distribution before 1950, especially for the interwar years.

In a forthcoming article we provide new annual data on income inequality for two leading European countries, Germany and Britain, for the first half of the twentieth century. Using dynamic social tables, we obtain comparable annual estimates (measured as Gini coefficients) covering the full range of income distribution. Evidence from Germany and Britain (Figure 1) yields two main results. First, the drop in inequality was neither steady nor similar across these countries, supporting the notion of inequality cycles (Milanovic 2016; Prados de la Escosura 2008). Second, inequality trends in Germany and Britain tended to follow opposite patterns.
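A dynamic social table reports, for each social group in each year, a headcount and a mean income; a Gini coefficient can then be computed over this grouped distribution. The sketch below uses entirely made-up groups and incomes (the article’s tables are far more detailed) and captures only between-group inequality:

```python
# Gini coefficient from a (hypothetical) social table: each row is a
# social group with a headcount and a mean income.

def gini_from_groups(groups):
    """groups: list of (population, mean_income) pairs.
    Returns the between-group Gini via the Lorenz curve."""
    groups = sorted(groups, key=lambda g: g[1])     # order groups by income
    pop_total = sum(p for p, _ in groups)
    inc_total = sum(p * y for p, y in groups)
    cum_pop = cum_inc = area = 0.0
    prev_p = prev_i = 0.0
    for p, y in groups:
        cum_pop += p / pop_total
        cum_inc += p * y / inc_total
        # trapezoid under the Lorenz curve between consecutive groups
        area += (cum_pop - prev_p) * (cum_inc + prev_i) / 2
        prev_p, prev_i = cum_pop, cum_inc
    return 1 - 2 * area

# Illustrative table: (headcount, mean income) for four stylized groups,
# from wage-earners up to proprietors. All numbers are invented.
table = [(50, 100), (30, 200), (15, 500), (5, 2000)]
print(round(gini_from_groups(table), 3))  # 0.506
```

Computing such a table year by year, as group sizes and mean incomes shift, yields the kind of annual Gini series shown in Figure 1.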

Figure 1. Inequality trends in Britain and Germany.  For data and sources see Gómez León and de Jong (Forthcoming)

What drove inequality changes in these two countries? How did inequality develop for specific groups of the population?  On the first question, we find that in Germany before 1933 and from 1939 onwards, variations in the relationship between owners and workers as well as variations within the group of workers (across work status and gender) drove changes in income distribution. During the Nazi period, only differences between owners and workers help to explain changes in inequality, as the abolition of trade unions and the setting of maximum wages precluded the dispersion of labour earnings. On the other hand, the dispersion of earnings among British workers appears to have been the main driver of changes in inequality before the Great War and after 1939, when the reduction of skill premiums and gender payment inequalities offset the relative increase in incomes. However, from the First World War to the outbreak of the Second World War, differences between proprietors and workers, as well as changes in labour earnings dispersion, drove inequality changes.

On the second question, we observe that in both countries the winners of the economic expansion experienced between 1900 and 1950 were the upper-low and lower-middle classes (i.e. the salaried and wage-earners in both manufacturing and war-related heavy industries). However, the gains linked to industrial expansion during the First World War and the Second World War were concentrated among the upper classes in Germany, while in Britain the benefits were more evenly distributed among the working classes. The reverse occurred during the interwar period.

In line with Lindert and Williamson (2016) and Piketty (2014), our paper points primarily towards political and institutional factors as the crucial drivers of inequality trends during the first half of the twentieth century. The usefulness of dynamic social tables for exploring national income distributions in the past invites future research on other European countries as well as on other potential factors (e.g. migration, technological change) affecting short-term inequality dynamics during the period.


To contact the lead author: e-mail: ; Twitter: @Maria0zmg



Gómez León, M. and de Jong, H. J., ‘Inequality in turbulent times: Income distribution in Germany and Britain 1900–1950’, Economic History Review (forthcoming).

Lindert, P. H. and Williamson, J. G., Unequal gains: American growth and inequality since 1700 (Princeton, NJ, 2016).

Milanovic, B., Global inequality: A new approach for the age of globalization (Cambridge, Mass., 2016).

Piketty, T., Capital in the twenty-first century (Cambridge, Mass., 2014).

Prados de la Escosura, L., ‘Inequality, poverty and the Kuznets curve in Spain, 1850–2000’, European Review of Economic History, 12 (2008), pp. 287–324.


from VOXEU: “Urban planning and town foundations”


The founding of new towns has been at the core of urban planning since the onset of civilisation. In recent times, policymakers have shown renewed interest in the creation of towns to channel regional economic growth. A prominent example is China, where a large-scale urban planning programme began in the 1980s to cope with the pressure of a growing urban population. The idea was to relocate hundreds of millions of rural inhabitants to purpose-built towns. Western media have branded these new towns as ‘ghost towns’, as ‘bridges to nowhere’, or as towns in search of populations.


Read the full post here