Welcome to The Long Run

On behalf of the Economic History Society (EHS), it is a pleasure to welcome you to The Long Run, the EHS blog.

This blog aims to encourage discussion of economic and social history, broadly defined. We live in a time of major social and economic change, and recent research in the social sciences increasingly shows how a historical, long-term approach can be the key to understanding current issues. Bringing together the disciplines that make up the social sciences, and having them interact in historical perspective, is fundamental not only for the future direction of economic history, but also for our ability to reflect, through data and knowledge, on the mechanisms that regulate our world.

The Long Run aims to host senior scholars as well as young researchers – anyone who has interesting and scientifically sound contributions to make to economic and social history. The blog will also be a way for EHS members, and anyone interested in the Society, to keep up to date with its activities, from our Review to CFPs and the other activities that you can find on our website.

We welcome any contribution or suggestion – please contact us at ehs.thelongrun@gmail.com, or take a look at our CFP here.

It is appropriate to record my appreciation of Professor Peter Fearon, outgoing Chair of the EHS Public Engagement Committee, who originally conceived the idea for the blog. Thanks also to our editorial team – Marta Musso, Bernardo Bátiz-Lazo, Amy Ridgway, Judy Stephenson and Romesh Vaitilingam – without whom this blog would not exist. Special thanks are due to Marta, who did much of the ‘heavy lifting’.

We look forward to seeing you here regularly, always standing on the shoulders of giants.

Professor David Higgins (EHS Public Engagement Committee)

13 June 2016

The impact of new universities on regional growth: evidence from the United States 1930-80

by Alexandra López Cermeño, Lund University / Universidad Carlos III de Madrid

From the ODU Twitter account

Universities generate growth spillovers beyond simply the local market. Analysing data on the universities founded in the United States between 1930 and 1980, my research shows that these drove growth of GDP and population not only in the counties that hosted them, but also in their neighbouring regions. But analysis of their longer-term impact suggests that although there are growth spillovers, the positive effect wears out if it is not periodically renewed.

The role of universities in generating growth is rarely contested, but most research tends to associate the presence of a university with long-term path dependency. In the era of knowledge and information, the role of universities as producers of new ideas and technologies is crucial to productivity. New light on this subject is required not only to understand the role of such cultural amenities, but also to explore the spatial dynamics around them.

Long-term analysis comparing counties that received their first university between 1930 and 1980 with statistically similar counties that never got an institution shows that these new universities implied around 20% more growth in terms of GDP. Moreover, the analysis shows that the new amenities eventually had an impact on neighbouring counties. These dynamics seem to be related to population migration.

This sizeable increase of GDP in these counties is matched by a similarly sized increase in population: new universities generate migratory inflows of workers, which eventually lead to higher housing prices and higher costs of using other infrastructure. These higher costs motivate many workers to relocate to nearby areas where housing and infrastructure are less expensive and access to the amenity is still feasible.

The positive effect of new universities is therefore neutralised in the longer term unless further investment reduces congestion costs. Indeed, the role of infrastructure such as roads seems to explain a large share of the effect of universities.

But the interaction of universities and infrastructure seems to be defined by the decreasing importance of the latter: whereas physical access to infrastructure seemed to constrain the impact of new amenities before the 1950s, more recently established institutions seem no longer dependent on face-to-face contact.

There is further evidence on the role of knowledge dynamics in my study: in the earlier half of the period 1930-80, all that mattered was getting a new university in the county, whereas in the latter half of the period, the quality of the institution seems to have become much more relevant. Counties where research-intensive institutions were established during the period 1950-80 grew almost 40% more.

My analysis shows that new academic institutions founded during the twentieth century induced regional spatial dynamics in terms of migration and GDP. But it also indicates that the impact of these new amenities was seriously constrained by the congestion of utilities, which limited growth to the short run.

Thus, it questions the size of the impact attributed to these institutions in recent literature, since it suggests that their growth dynamics are not self-sustaining: further investment is needed to keep up with the agglomeration forces that attract population and firms to these counties.

THE HEALTH AND HUMAN CAPITAL OF WAR REFUGEES: Evidence from Jewish migrants escaping the Nazis 1940-42

by Matthias Blum (Queen’s University Belfast) and Claudia Rei (Vanderbilt University)


At Europe’s doorstep, the current refugee crisis poses considerable challenges to world leaders. Whether refugees are believed to be beneficial or detrimental to future economic prospects, decisions about them are often based on unverified priors and uninformed opinions.

There is a vast body of scholarly work on the economics of international migration. But when it comes to the sensitive topic of war refugees, we usually learn about the overall numbers of the displaced while knowing next to nothing about the human capital of the displaced populations.

Our study, to be presented at the Economic History Society’s 2017 annual conference in London, contributes to this under-researched, and often hard to document, area of international migration based on a newly constructed dataset of war refugees from Europe to the United States after the outbreak of the Second World War.

We analyse Holocaust refugees travelling from Lisbon to New York on steam vessels between 1940 and 1942. For a time, the war made Lisbon the last major port of departure once all other options had shut down.

Escaping Europe before 1940 was difficult, but there were still several European ports providing regular passenger traffic to the Americas. The expansion of Nazi Germany in 1940 made emigration increasingly difficult and by 1942, it was nearly impossible for Jews to leave Europe due to mass deportations to concentration camps in the east.

The Lisbon migrants were wartime refugees and offer a valuable insight into the larger body of Jewish migrants who left Europe between the Nazi seizure of power in Germany in January 1933 and the invasion of Poland in September 1939.

The majority of migrants in our dataset were Jews from Germany and Poland, but we identify migrants from 17 countries in Europe. We define as refugees all Jewish passengers as well as their non-Jewish family members travelling with them.

Using individual micro-level evidence, we find that regardless of refugee status all migrants were positively selected – that is, they carried a higher level of health and human capital when compared with the populations in their countries of origin. This pattern is stronger for women than men.

Furthermore, refugees and non-refugees in our sample were no different in terms of skills and income level, but they did differ with respect to the timing of the migration decision. Male refugees were more positively selected if they migrated earlier, whereas women migrating earlier were more positively selected regardless of refugee status.

These findings suggest large losses of human capital in Europe, especially among women, in the seven years between the Nazis’ rise to power and the period we analyse in our data.

The civil war in Syria broke out six years ago, in March 2011, making the analysis of the late Holocaust refugees all the more relevant. Syrian refugees fleeing war today are not just lucky to escape: they are probably also healthier and from higher social backgrounds than the average in their home country.

Agency House Crises in India: What Role Did Indigo Play?

by Tehreem Husain

English, Dutch and Danish factories at Mocha, c. 1680. Public domain image

 

History provides us with many examples of asset bubbles that have led to systemic crises in the economy; famous examples are the Dutch tulip mania and the South Sea Bubble. This blog discusses the case of an indigo price bubble in nineteenth-century India, perhaps the first of its kind, which led to a contagion-like crisis in the economy.

Almost 17.4% of Indian GDP was derived from the agricultural sector in 2015-16, with nearly half of the Indian population dependent on agriculture and allied activities for their livelihood. This makes the smooth functioning of commodity markets of considerable importance to policymakers. Over time, there have been many episodes of commodity price surges and ensuing market volatility due to traditional demand-supply gaps, monetary stress and the financialization of commodity markets, including speculation (Varadi, 2012). What role did agriculture play in commodity market volatility during the late eighteenth and early nineteenth centuries? Little is known about perhaps the first asset bubble of its kind in India – the indigo crisis – the reasons attributed to it, and the cost it imposed on different sectors of the economy.

With the advent of the East India Company, India became a global trade destination for a number of commodities, including cotton, silk, indigo, saltpetre and tea. In order to trade these commodities on global markets, European traders needed banks to finance foreign trade. Indigenous bankers in India did not provide this particular function, and hence the East India Company diversified its business by introducing agency houses in Calcutta which, among other activities, also performed banking functions. These agency houses carried out all the banking functions of receiving deposits, making advances and issuing paper money. Their responsibility for note circulation crucially helped them in carrying out their diversified lines of business as ship-owners, landowners, farmers, manufacturers, money lenders and bankers (Cooke, 1830). It was the agency house of Messrs Alexander & Co. that started the first European bank in India, the Bank of Hindostan, in 1770 (Singh, 1966).

In the early nineteenth century, the endurance and continuance of these agency houses were tested by three factors. Firstly, and most importantly, during the early 1820s agency houses borrowed money at low interest rates and invested it prodigally in indigo concerns, the crop being the only profitable means of remittance to Europe. The crisis multiplied when newly formed agency houses, besides investing capital in their own indigo concerns, fiercely competed with the old houses in making indiscriminate advances to indigo planters, paying little regard to the actual state of the market. Excessive demand for indigo fuelled prices in the mid-1820s and encouraged increased production of the commodity, which eventually led to a glut in the market and a sharp decline in its price. This rise and fall is evident from the fact that the indigo price shot up from Rs 130 per maund in 1813 to Rs 300 in 1824, and then fell to Rs 145 per maund in 1832 (Singh, 1966).
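The scale of that swing is easy to quantify from the figures quoted above; a minimal back-of-the-envelope sketch (the only inputs are the three prices reported by Singh, 1966):

```python
# Indigo prices in rupees per maund, as quoted from Singh (1966).
prices = {1813: 130, 1824: 300, 1832: 145}

def pct_change(p0, p1):
    """Percentage change from price p0 to price p1."""
    return 100.0 * (p1 - p0) / p0

boom = pct_change(prices[1813], prices[1824])
bust = pct_change(prices[1824], prices[1832])
print(f"Boom 1813-1824: {boom:+.1f}%")  # roughly +131%
print(f"Bust 1824-1832: {bust:+.1f}%")  # roughly -52%
```

In other words, the price more than doubled into the bubble and then lost about half its peak value after the glut.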

The second challenge, along with indigo price volatility, was the start of the first Anglo Burmese war in 1825. This further led to stressed monetary conditions resulting in a scarcity of metal in Calcutta (Sinha, 1927).

Thirdly, in terms of the global landscape, this period marked the peak of an investment boom in Britain, characterized by an explosion of company promotions and bond issues by foreign governments, mining companies, railways, utilities, docks and steamships. In total, during 1824-25 some 624 companies hoping to raise £372 million were brought to the market. However, with the boom peaking in 1825, market conditions changed: interest rates had risen, making borrowing more expensive, and investor sentiment had become more cautious, eventually producing a panic that resulted in bank failures and bankruptcies (Brunnermeier & Schnabel, 2015).

In such times of local and global economic stress, several minor agency houses failed in 1827, shaking investor confidence in the remaining houses. A notable case is the agency house of Messrs Palmer and Co., known as the ‘indigo king of Bengal’, which faced heavy withdrawals by its partners, leading to the closure of its private bank and finally its own demise in 1830. This panicked the market and led to further withdrawals of capital.

During this period, agency houses made desperate appeals to the government for financial relief, highlighting their importance in the Indian financial system of the time. In a minute dated 14th May 1830, Lord William Bentinck, Governor-General of India from 1828 to 1835, underlined the systemic importance of the agency houses. He argued that not only would there be a dislocation of trade in some staple commodities, but that any damage to the ‘conglomerate’ nature of the agency houses would cause severe disruptions in other industries, most notably shipping. Finally, loans were granted to these houses in the form of treasury notes bearing 6 percent interest.

Despite the monetary aid provided by the government, the wave of agency house failures could not be curbed, and more houses failed in January 1832. In addition, the unexpected fall in the price of indigo created difficulties for one of the biggest agency houses, Messrs Alexander & Co. It is important to note that the relief package came with stringent conditions: the houses were obliged to withdraw their bank notes from circulation, and were given an extended period for the payment of their debts only on condition that they end their banking operations (Savkar, 1938). This resulted in the demise of the Bank of Hindostan and the Commercial Bank.

Overall, seven great agency houses of Calcutta failed within a short span of four years, with detrimental effects on the Indian economy of the time. It may be summarized that speculation in indigo and the mixing of trading and agency business were the pivotal reasons behind the failure of these agency houses. More importantly, this episode of a commodity price bubble spreading its tentacles through the entire economy had a phenomenal impact on the structure of business. It is recorded that from a handful of firms before 1850, there were 170 firms working as joint stock organizations in 1868. The first commercial register to identify firms with tradable stock was established in 1843 and listed eight firms (Aldous, 2015). The joint stock organizational form also entered banking, a key example being the rise of the Union Bank of Calcutta (Cooke, 1830). The crisis also led to the establishment of a number of private banks by British expatriates (Jones, 1995).

 

THE INEFFECTIVENESS OF GOVERNMENT EFFORTS TO PROMOTE PRODUCTS MADE AT HOME: Evidence from the ‘Buy British’ campaigns of the 1960s and 1980s

by David Clayton (University of York) and David Higgins (Newcastle University)


Campaigns to promote the purchase of domestic manufactures feature prominently during national economic crises. The key triggers of such schemes include growing import penetration and concern that consumers have been misled into purchasing foreign products instead of domestic ones. Early examples of such initiatives occurred in the United States in 1890 and 1930, with the introduction of the McKinley tariff and the ‘Buy American’ Act, respectively.

In Britain, similar schemes were launched during the interwar years and in the post-1945 period. In the latter period, Britain’s share of world trade in manufactures declined from 25% to 10%, and between 1955 and 1980, import penetration in the manufacturing sector increased from 8% to 30%.

Simultaneously, there were numerous government public policy interventions designed to improve productivity, for example, the National Economic Development Council and the Industrial Relations Commission. Both Labour and Conservative governments were much more interventionist than today.

Currently, the rise of protectionist sentiment in the United States and across Europe may well generate new campaigns to persuade consumers to boycott foreign products and give their preference to those made at home. Indeed, President Trump has vowed to ‘Make America Great Again’: to preserve US jobs he has threatened to tax US companies that import components from abroad.

Using a case study of the ‘Buy British’ campaigns of the 1960s and 1980s, our research, to be presented at the Economic History Society’s 2017 annual conference in London, considers what general lessons can be learned from such initiatives and why, in Britain, they failed.

Our central arguments can be summarised as follows. In the 1960s, before Britain acceded to the European Economic Community, there was considerable scope for a government initiative to promote ‘British’ products. But a variety of political and economic obstacles blocked a ‘Buy British’ campaign. During the 1980s, there was less freedom of manoeuvre to enact an official policy of ‘Buy British’ because by then Britain had to abide by the terms of the Treaty of Rome.

In the 1960s, efforts to promote ‘Buy British’ were hindered by the reluctance of British governments to lead on this initiative because of Treasury constraints on national advertising campaigns and a general belief that such a campaign would be ineffective.

For example, the nationalised industries, which were a large proportion of the economy at this time, could not be used to spearhead any campaign because they relied on industrial and intermediate inputs, not consumer durables; and in any case, the ability of these industries to direct more of their purchases to domestic sources was severely constrained: total purchases by all nationalised industries in the early 1970s were around £2,000 million, of which over 90% went to domestic suppliers.

Efforts to nudge private organisations into running these campaigns were also ineffective. The CBI refused to take the lead on a point of principle, arguing that ‘A general campaign would… conflict with [our] view that commercial freedom should be as complete as possible. British goods must sell on their merits and their price in relation to those of our competitors, not because they happen to be British’.

During the 1980s, government intervention to promote ‘Buy British’ would have contravened Britain’s new international treaty obligations. The Treaty of Rome (1957) required the liberalisation of trade between members, the reduction and eventual abolition of tariffs and the elimination of measures, such as promotion of ‘British’ products, ‘having equivalent effect’. Attempts by the French and Irish governments to persuade their consumers to give preference to domestic goods were declared illegal.

The only way to overcome this legislative restriction was if domestic companies chose to mark their products as ‘British’ voluntarily. This was not a rational strategy for individual firms to follow. Consumers generally prefer domestic to foreign products.

But when price, quality and product-country images are taken into account, rather than origin per se, the country of origin effect is weakened considerably. From the perspective of individual firms promoting their products, using a ‘British’ mark risked devaluing their pre-existing brands by associating them with inferior products.

Our conclusions are that in both periods, firms acting individually or collectively (via industry-wide bodies) did not want to promote their products using ‘British’ marks. Action required top-down pressure from government to persuade consumers to ‘Buy British’. In the 1960s, there was no consensus within government in favour of this position, and, by the 1980s, government intervention was illegal due to international treaty obligations.

In a post-Brexit Britain, with a much weakened manufacturing capacity compared even with the 1960s and 1980s, the case for the government to nudge consumers to ‘Buy British’ is weak.

Extractive Policies and Economic Outcomes: the Unitary Origins of the Present-Day North-South of Italy Divide

by Guilherme de Oliveira (Columbia Law School) and Carmine Guerriero (University of Bologna)


Italy emerged from the Congress of Vienna as a carefully constructed equilibrium among eight absolutist states, all under the control of Austria except the Kingdom of the Two Sicilies, dominated by the Bourbons, and the Kingdom of Sardinia, ruled by the Savoys and erected as a barrier between Austria and France. This status quo fed the ambitions of the Piedmontese lineage, turning it into the champion of the liberals, who longed to establish a unitary state and fomented the unrest of the early decades of the century. Although ineffective, these insurrections forced the implementation, especially in the South, of the liberal reforms first introduced by the Napoleonic armies, and allowed a rising bourgeoisie, attracted by expanding international demand, to acquire the nobility’s domains and prioritize export-oriented farming. Among these activities, arboriculture and sericulture, which were up to 60 times more lucrative than wheat cultivation, soon became dominant, constituting half of 1859 exports.

Consequently, farming productivity increased, reaching similar levels in the Northern farms and the Southern latifundia, but the almost exclusive specialization in the agrarian sectors left the Italian economy stagnant, as implied by the evolution of GDP per capita in the regions in our sample, which we group by their political relevance for the post-unitary rulers, as inversely proxied by Distance-to-Enemies (see upper-left graph of figure 1). This is the distance between each region’s main city and the capital of the fiercest enemy of the Savoys – i.e., Vienna over the 1801-1813, 1848-1881 and 1901-1914 periods, and Paris otherwise – and it is lowest for Veneto, which we therefore label the “high” political relevance cluster. Similarly, we refer to the regions with above-average (below-average) values as the “low” (“middle”) political relevance group, label the “low” cluster the “South”, and call the union of the high and middle relevance regions and the key Kingdom of Sardinia regions – i.e., Liguria and Piedmont – the “North”.

 

Figure 1: Income, Political Power, Land Property Taxes, and Railway Diffusion

Note: “GDP-L” is income in 1861 lire per capita, “Political-Power” is the share of prime ministers born in the region, averaged over the previous decade, “Land-Taxes” is land property tax revenues in 1861 lire per capita, and “Railway” is the railway length built in the previous decade in km per square km. _M includes Abruzzi, Emilia Romagna, Lombardy, Marche, Tuscany, and Umbria; _H includes Veneto; KS gathers Liguria and Piedmont. The North cluster includes the _M, _H, and KS groups, whereas _L comprises Apulia, Basilicata, Calabria, Campania, Lazio, and Sicily. See de Oliveira and Guerriero (2017) for the sources and definition of each variable.

 

Despite some pre-unitary differences, both clusters were largely underdeveloped relative to the leading European powers at unification, and the causes of this backwardness ranged from the scarcity of coal and infrastructure to the shortage of human and real capital. Crucially, none of these conditions was significantly different across groups since, unlike the Kingdom of Sardinia, none of the pre-unitary states established a virtuous balance between military spending and investment in valuable public goods such as railways and literacy. Even worse, they intensified taxation only when necessary to finance the armies needed to tame internal unrest, which was especially fierce in the Kingdom of the Two Sicilies. The bottom graphs of figure 1 exhibit this pattern by displaying the key direct tax, the land property duty, and the main non-military expenditure, railway investment.

Meanwhile, the power of the Piedmontese parliament relative to the king grew steadily, and its leader, Camillo di Cavour, succeeded in securing a French alliance for a future conflict against Austria by supporting France in the Crimean War. The 1859 French-Piedmontese victory over the Habsburgs then triggered insurrections in Tuscany, the conquest of the South by Garibaldi, and the proclamation of the Kingdom of Italy in 1861. Dominated by a narrow elite of northerners (see upper-right graph of figure 1), the new state favoured the Northern export-oriented farming and manufacturing industries, directing public spending to the North while leaning on the Southern populations for the taxes necessary to finance these policies. To illustrate, the 1887 protectionist reform, instead of safeguarding the arboriculture sectors crushed by the 1880s fall in prices, shielded Po Valley wheat cultivation and those Northern textile and manufacturing industries that had survived the liberal years thanks to state intervention. While the former dominated the allocation of military clothing contracts, the latter monopolized both coal mining permits and public contracts. A similar logic guided the assignment of monopoly rights in the steamboat construction and navigation sectors and, notably, public spending on railways, which represented 53 percent of the 1861-1911 total. Over this period, indeed, Liguria and Piedmont obtained three (four) times more railway spending per square km than Veneto (than the other regions). Moreover, the aim of this effort “was more the military one of controlling the national territory, especially in the South, than favouring commerce” [Iuzzolino et al. 2011, p. 22]. Crucially, this infrastructural program was financed through highly unbalanced land property taxes, which in turn drained the key source of savings available for investment in the growth sectors, in the absence of a developed banking system.
The 1864 reform set a target revenue of 125 million lire to be raised from nine districts resembling the pre-unitary states. The ex-Papal State took on 10 percent of the burden, the ex-Kingdom of the Two Sicilies 40 percent, the rest of the state 29 percent, and the ex-Kingdom of Sardinia only 21 percent. To weigh this burden down further, a 20 percent surcharge was added by 1868, creating the disparities displayed in the bottom-left graph of figure 1.
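Reading the district shares as 10, 40, 29 and 21 percent of the 125 million lire target, the implied burdens (and the uniform effect of the 1868 surcharge) can be sketched as follows; this is purely an illustration of the figures quoted above, not additional data from the original sources:

```python
# 1864 land-tax target and district shares as reported above (illustrative).
target_lire = 125_000_000
shares = {
    "ex-Papal State": 0.10,
    "ex-Kingdom of the Two Sicilies": 0.40,
    "rest of the state": 0.29,
    "ex-Kingdom of Sardinia": 0.21,
}
assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares exhaust the target

# Burden per district, then the 20 percent surcharge added by 1868.
burden = {k: s * target_lire for k, s in shares.items()}
surcharged = {k: 1.20 * b for k, b in burden.items()}
```

On this reading, the ex-Kingdom of the Two Sicilies alone was assigned 50 million lire, twice the share of the ex-Kingdom of Sardinia, before the surcharge raised every district's bill by a fifth.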

The 1886 cadastral reform opened the way to more egalitarian policies and, after the First World War, to the harmonization of tax rates, but the impact of extraction on the economies of the two blocks was by that point irreversible. While a flourishing manufacturing sector was established in the North, the mix of low public spending and heavy taxation squeezed Southern investment to the point that local industry and export-oriented farming were wiped out. Moreover, extraction destroyed the relationship between the central state and the Southern population, unleashing first a civil war, which brought about 20,000 victims by 1864 and the militarization of the area, and then favouring emigration. Because of these tensions, the population started to display a progressively weaker civic culture, as implied by the fall in our proxy for social capital depicted in the bottom-left graph of figure 2.

The fascist regime’s aversion to migration and its rush to rearmament, followed by the pro-South state aid of the 1960s, further affected the divide, which can be safely attributed to the extractive policies selected by the unitary state between 1861 and 1911.

Empirical Evidence

Because the 13 regions remained agrarian over our 1801-1911 sample, we capture the extent of extraction with land property taxation, and farming productivity with the geographic drivers of the profitability of the arboriculture and sericulture sectors. In addition, we use the share of the previous decade during which the region partook in external wars (the region’s Distance-to-Enemies) as an inverse metric of each region’s tax-collection costs (political relevance).

Our OLS estimates with region and time fixed effects imply that pre-unitary revenues from land property taxes in 1861 lire per capita decreased with each region’s farming productivity but not with its relevance to the Piedmontese elite, whereas the opposite was true post-unification. Moreover, post-unitary distortions in land property tax revenues – proxied by the difference between observed revenues and the counterfactual revenues forecast from the pre-unitary estimates (see upper-left graph of figure 2) – and the severity of the other extractive policies – inversely captured by tax-collection costs and political relevance (see below) – positively determined the opening gaps in culture, literacy (see bottom-right graph of figure 2), and development, i.e., income in 1861 lire per capita, gross saleable farming product, and textile industry value added in thousands of 1861 lire per capita.
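The two-way fixed-effects design behind these estimates can be illustrated with a minimal sketch of the within transformation on a balanced panel. This is a generic illustration of region-and-time fixed-effects OLS, not the authors' actual code or data; all variables are toy values:

```python
import numpy as np

def within_transform(v, region, year):
    """Demean v by region and by year (exact for balanced panels)."""
    v = v.astype(float)
    region_mean = {g: v[region == g].mean() for g in np.unique(region)}
    year_mean = {t: v[year == t].mean() for t in np.unique(year)}
    grand = v.mean()
    return np.array([v[i] - region_mean[region[i]] - year_mean[year[i]] + grand
                     for i in range(len(v))])

def fe_slope(y, x, region, year):
    """OLS slope of y on x after removing region and year fixed effects."""
    y_t = within_transform(y, region, year)
    x_t = within_transform(x, region, year)
    return float((x_t * y_t).sum() / (x_t ** 2).sum())

# Toy balanced panel: two regions observed in two years, true slope = 2.
region = np.array([0, 0, 1, 1])
year = np.array([0, 1, 0, 1])
x = np.array([1.0, 2.0, 3.0, 5.0])
y = np.array([12.0, 19.0, 26.0, 35.0])  # region effect + year effect + 2*x
print(fe_slope(y, x, region, year))  # → 2.0
```

The point of the transformation is that any region-specific level (geography, initial conditions) and any year-specific shock common to all regions is swept out, so the slope is identified only from within-region, within-year variation.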

 

Figure 2: The Rise of the North-South Divide

Note: “Distortion-LT” is the land property tax distortions in 1861 lire per capita, “Distortion-R” is the difference between Railway and the forecast length of railway built in the previous decade in km per square km, “Culture-N” is the normalized share of the active population engaged in political, union, and religious activities, and “Illiterates-N” is the normalized percentage of illiterates in the population over the age of six. See figure 1 for the definition of each cluster and de Oliveira and Guerriero (2017) for the sources and definition of each variable.

 

These results are consistent with the predictions of the model we lay out to inform our test. First, because of limited state capacity, the pre-unitary states should reduce extraction when confronted by a more productive, and therefore more powerful, citizenry, whereas the extractive power of the unitary state should be strong enough to make taxation of the South profitable at the margin, and therefore crucially shaped by the South’s political relevance. Second, extraction should also induce the Southern citizenry to prefer private to public good production, and Southern investment and welfare should rise with the factors limiting taxation, i.e., marginal tax-collection costs and political relevance.

Since our proxies for the drivers of extraction are driven either by geographic features independent of human effort or by events outside the control of policy-makers, reverse causation is not an issue. Nevertheless, our results could still be produced by unobserved heterogeneity. To evaluate this possibility, we control for the interactions of time effects with the structural conditions differentiating the two blocks in 1861 that the extant literature considers key (Franchetti and Sonnino, 1876; Gramsci, 1966; Barbagallo, 1980; Krugman, 1981), i.e., the pre-unitary inclusiveness of political institutions, land ownership fragmentation, the coal price, and the railway length. Including these controls has little effect on our results. Finally, two extra pieces of evidence rule out the possibility that extraction was an acceptable price for Italian development (Romeo, 1987). First, it did not shape manufacturing sector value added. Second, while pre-unitary railway additions were affected only by farming productivity, post-unitary ones were driven only by political relevance, proving useless in creating a unitary market (see upper-right graph of figure 2).

Conclusions

Although the North-South divide has been linked to post-unitary policies before (Salvemini, 1963; Cafagna, 1989), nobody has formally clarified how the unitary state solved the trade-off between extraction-related losses and rent-seeking gains. In doing so, we also contribute to the literature comparing extractive and inclusive institutions (North et al., 2009; Acemoglu and Robinson, 2012), while endogenizing the extent of extraction in a setup general enough to be applied to other instances, such as the post-Civil War United States.


The TOWER OF BABEL: why we are still a long way from everyone speaking the same language

Nearly a third of the world’s 6,000-plus distinct languages have more than 35,000 speakers. But despite the big communications advantages of a few widely spoken languages such as English and Spanish, there is no sign of a systematic decline in the number of people speaking this large group of relatively small languages.


These are among the findings of a new study by Professor David Clingingsmith, published in the February 2017 issue of the Economic Journal. His analysis explains how it is possible to have a stable situation in which the world has a small number of very large languages and a large number of small languages.

Does this mean that the benefits of a universal language could never be so great as to induce a sweeping consolidation of language? No, the study concludes:

‘Consider the example of migrants, who tend to switch to the language of their adopted home within a few generations. When the incentives are large enough, populations do switch languages.’

‘The question we can’t yet answer is whether recent technological developments, such as the internet, will change the benefits enough to make such switching worthwhile more broadly.’

Why don’t all people speak the same language? At least since the story of the Tower of Babel, humans have puzzled over the diversity of spoken languages. As with the ancient writers of the book of Genesis, economists have also recognised that there are advantages when people speak a common language, and that those advantages only increase when more people adopt a language.

This simple reasoning predicts that humans should eventually adopt a common language. The growing role of English as the world’s lingua franca and the radical shrinking of distances enabled by the internet have led many people to speculate that the emergence of a universal human language is, if not imminent, at least on the horizon.

There are more than 6,000 distinct languages spoken in the world today. Just 16 of these languages are the native languages of fully half the human population, while the median language is known by only 10,000 people.

The implications might appear to be clear: if we are indeed on the road to a universal language, then the populations speaking the vast majority of these languages must be shrinking relative to the largest ones, on their way to extinction.

The new study presents a very different picture. The author first uses population censuses to produce a new set of estimates of the level and growth of language populations.

The relative paucity of data on the number of people speaking the world’s languages at different points in time means that this can be done for only 344 languages. Nevertheless, the data clearly suggest that the populations of the 29% of languages that have 35,000 or more speakers are stable, not shrinking.

How could this stability be consistent with the very real advantages offered by widely spoken languages? The key is to realise that most human interaction has a local character.

This insight is central to the author’s analysis, which shows that even when there are strong benefits to adopting a common language, we can still end up in a world with a small number of very large languages and a large number of small ones. Numerical simulations of the analytical model produce distributions of language sizes that look very much like the one that actually obtains in the world today.
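The role of local interaction can be illustrated with a toy simulation. This is a hypothetical sketch, not Clingingsmith’s actual model: agents on a ring repeatedly adopt whichever language is most common among their nearest neighbours, so the pressure to consolidate is real but purely local.

```python
import random
from collections import Counter

def simulate_languages(n_agents=500, n_langs=100, steps=2000, k=2, seed=0):
    """Toy model: agents on a ring adopt the locally most common language."""
    rng = random.Random(seed)
    langs = [rng.randrange(n_langs) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        # only the 2k nearest neighbours matter -- interaction is local
        neighbours = [langs[(i + d) % n_agents] for d in range(-k, k + 1) if d != 0]
        langs[i] = max(set(neighbours), key=neighbours.count)
    return Counter(langs)

sizes = sorted(simulate_languages().values(), reverse=True)
print(f"{len(sizes)} languages survive; the largest has {sizes[0]} speakers")
```

Because every update only consults nearby agents, small languages typically persist as contiguous pockets rather than being absorbed into a single universal language, qualitatively echoing the stability the study finds in the data.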

Summary of the article ‘Are the World’s Languages Consolidating? The Dynamics and Distribution of Language Populations’ by David Clingingsmith. Published in the Economic Journal, February 2017

Holding Brexiteers to account

by Adrian Williamson, University of Cambridge

Margaret Thatcher and Ted Heath campaigning during the 1975 Common Market referendum, when Conservative leaders took a rather different approach to Europe. Source: http://www.eureferendum.com

The House of Commons has voted overwhelmingly to trigger Article 50, on the explicit basis that this process will be irrevocable and that, at the end of the negotiations, Parliament will have a choice between a hard Brexit (leaving the Single Market and the EEA) and an ultra-hard Brexit (WTO terms, if available).

It follows that arguments about whether the UK should remain in the EU, or should stay in all but name (the so-called ‘Norwegian option’), are now otiose. What role can economic historians play as the terms of exit unfold? I think that there is an important role for scholars in seeking to analyse the promises of the Brexiteers and how feasible these appear in the light of previous experience.

Thus far, the economic debate over Brexit has been conducted on a very general basis. Remainers have argued that leaving the EU spells disaster, whereas Leavers have dismissed such concerns and promised a golden economic future. But what exactly will this future consist of? Pinned down as best one can, the Brexit proposition must surely be that the rate of economic growth per capita will be significantly higher in the future than it would have been if the UK had retained its EU membership. Since, at the same time, there was to be a massive and permanent reduction in EU and non-EU immigration (from c.330,000 p.a. net immigration to ‘tens of thousands’), it is per capita improvements that will have to be achieved.

The path to this goal will, it is said, be clear once the UK leaves. In particular:

  • the UK will be able to make its own trade deals and become a great global trading nation;
  • the UK can develop a less restrictive regulatory framework than that imposed by the EU;
  • industries such as manufacturing, fisheries and agriculture will revive once the country is no longer ‘tethered to the corpse’ of the EU;
  • the post-referendum devaluation will provide a boost for exporters.

In relation to each of these claims, there is plenty of helpful evidence from economic history. After all, the UK was the first nation to embrace a global trading role. As Keynes pointed out in a famous passage, in 1914:

The inhabitant of London could order by telephone, sipping his morning tea in bed, the various products of the whole earth, in such quantity as he might see fit, and reasonably expect their early delivery upon his doorstep; he could at the same moment and by the same means adventure his wealth in the natural resources and new enterprises of any quarter of the world, and share, without exertion or even trouble, in their prospective fruits and advantages…

 Yet, despite this background, and despite the economically advantageous legacies of Empire, the UK spent the period between 1961 and 1973 making increasingly desperate attempts to join a (then much smaller) Common Market. British policymakers were initially dismissive of the European Community. Exports to the Six were thought less important than trade with the Commonwealth. Britain’s initial response was to establish EFTA as a rival free trade area. However, it soon became apparent that this arrangement was lopsided: Britain was part of a free trade area with a population of 89m (including its own 51m), but stood outside the EEC’s tariff walls and population of 170m. Will the 2020s be different from the 1960s? In any event, ‘free trade’ is an elusive concept. As John Biffen, a Tory Trade Minister in the Thatcher government (and no friend of the EU), acknowledged, free trade has never existed ‘outside a textbook’.

As regards decoupling from EU regulations, the UK was, of course, completely free to devise its own regulatory framework prior to accession to the EU in 1973. Nonetheless, it was in this period that much of the current labour market structure, such as protection against unfair dismissal and redundancy, was enacted. EU regulations, such as the Social Chapter, have complemented, not undermined, this domestic framework. In any event, does the evidence suggest that a mature economy such as the UK will be able to establish a more rapid rate of growth with a looser regulatory framework? The obvious comparisons in this respect are the developed North American and Japanese economies. The data suggest that the UK has performed extremely well within the EU framework.


Table: GDP per capita (current US$). Source: World Bank

| Country | 1980   | 2015   | Cumulative increase |
| ------- | ------ | ------ | ------------------- |
| USA     | 12,598 | 56,116 | 345%                |
| UK      | 10,032 | 43,876 | 337%                |
| Canada  | 11,135 | 43,249 | 288%                |
| EU      | 8,314  | 32,005 | 285%                |
| Japan   | 9,308  | 34,524 | 271%                |
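The ‘cumulative increase’ column is simply percentage growth between 1980 and 2015. A quick sketch, using only the World Bank figures quoted in the table, reproduces it:

```python
# GDP per capita in current US$ (1980, 2015), as quoted in the table
gdp = {
    "USA":    (12_598, 56_116),
    "UK":     (10_032, 43_876),
    "Canada": (11_135, 43_249),
    "EU":     (8_314, 32_005),
    "Japan":  (9_308, 34_524),
}

for country, (y1980, y2015) in gdp.items():
    increase = (y2015 / y1980 - 1) * 100  # percentage growth over the period
    print(f"{country}: {increase:.0f}%")
# USA: 345%, UK: 337%, Canada: 288%, EU: 285%, Japan: 271%
```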

 

Of course, much higher rates of growth have recently been achieved in developing economies such as China and India. But it cannot seriously be argued that an economy like the UK, which underwent an industrial revolution in the eighteenth century, can achieve rates of growth comparable to those of economies that are industrialising now. The whole course of economic history shows that mature economies have much slower rates of growth and that the increases achieved by the USA and the UK over the last few decades are close to optimum performance.

The maturity of the UK economy is also germane to arguments suggesting that it will be possible to revive industries that have suffered long term decline, such as manufacturing, agriculture and fisheries. After all, one consequence of the UK’s early start in manufacturing is that primary industries declined first and most rapidly here. Economic historians have been pointing out since the 1950s that in advanced economies the working population inevitably drifts from agriculture to manufacturing and then from manufacturing to services. In 1973, the American sociologist Daniel Bell greeted the arrival of the post-industrial society. He pointed out that the American economy was the first in the world in which more than 60% of the population were engaged in services, and that this trend was deepening in the USA and elsewhere. Brexit is scarcely likely to reverse these very long-term developments.

The British economy has also had considerable past experience of enforced devaluation (for example in 1931, 1949 and 1967). Research following the 1967 devaluation suggested that a falling pound gave only a temporary fillip to the trade balance, whilst delivering a permanent increase in inflation. Over the same period the West German economy performed extremely strongly, despite a constantly appreciating currency.

Finally, one may question whether the UK can achieve an economic miracle whilst, at the same time, pursuing a very restrictive approach to immigration. Successful economies tend to be extremely open to outsiders, who are both a cause and a consequence of growth. After all, in the pre-1914 golden age to which Keynes referred, there were no controls at all, and the British businessman ‘could secure forthwith, if he wished it, cheap and comfortable means of transit to any country or climate without passport or other formality…and could then proceed abroad to foreign quarters…and would consider himself greatly aggrieved and much surprised at the least interference’. Our putative partners in trade deals are not likely to be offering such access and, if they do, they will want substantial concessions in return.

Of course, past performance is no guarantee of future prosperity. Historic failure does not preclude future success. And sections of British public opinion have, it appears, ‘had enough of experts’. Even so, economic historians can hold up to scrutiny some of the more extravagant claims of the Brexiteers.


From NEP-HIS Blog: ‘The market turn: From social democracy to market liberalism’, by Avner Offer

The market turn: From social democracy to market liberalism By Avner Offer, All Souls College, University of Oxford (avner.offer@all-souls.ox.ac.uk) Abstract: Social democracy and market liberalism offered different solutions to the same problem: how to provide for life-cycle dependency. Social democracy makes lateral transfers from producers to dependents by means of progressive taxation. Market liberalism uses […]

via How do we eliminate wealth inequality and financial fragility? — The NEP-HIS Blog

From VOX – Short poppies: the height of WWI servicemen

From Timothy Hatton, Professor of Economics, Australian National University and University of Essex. Originally published on 9 May 2014

Looking at the height of today’s populations cannot, by itself, reveal which factors matter for long-run trends in health and height. This column highlights the correlates of height in the past using a sample of British army soldiers from World War I. While the socioeconomic status of the household mattered, the local disease environment mattered even more. Better education and modest medical advances led to an improvement in average health, despite the war and depression.

Distribution of heights in a sample of army recruits. From Bailey et al. (2014)

The last century has seen unprecedented increases in the heights of adults (Bleakley et al., 2013). Among young men in western Europe, that increase amounts to about four inches. On average, sons have been taller than their fathers for the last five generations. These gains in height are linked to improvements in health and longevity.

Increases in human stature have been associated with a wide range of improvements in living conditions, including better nutrition, a lower disease burden, and some modest improvement in medicine. But looking at the heights of today’s populations provides limited evidence on the socioeconomic determinants that can account for long-run trends in health and height. For that, we need to understand the correlates of height in the past. Instead of asking why people are so tall now, we should be asking why they were so short a century ago.

In a recent study, Roy Bailey, Kris Inwood and I (Bailey et al., 2014) took a sample of soldiers who joined the British army around the time of World War I. The records are randomly selected from a vast archive of two million service records made available by the National Archives, mainly for the benefit of genealogists searching for their ancestors.

For this study, we draw a sample of servicemen who were born in the 1890s and who would therefore have been in their late teens or early twenties when they enlisted. About two thirds of this cohort enlisted in the armed services, so the sample suffers much less from selection bias than would be likely during peacetime, when only a small fraction joined the forces. But we do not include officers, who were taller than those they commanded. And at the other end of the distribution, we also miss some of the least fit, who were likely to be shorter than average.

FULL TEXT HERE

WELFARE SPENDING DOESN’T ‘CROWD OUT’ CHARITABLE WORK: Historical evidence from England under the Poor Laws

Cutting the welfare budget is unlikely to lead to an increase in private voluntary work and charitable giving, according to research by Nina Boberg-Fazlic and Paul Sharp.

Their study of England in the late eighteenth and early nineteenth century, published in the February 2017 issue of the Economic Journal, shows that parts of the country where there was increased spending under the Poor Laws actually enjoyed higher levels of charitable income.

Edmé Jean Pigal, c. 1800. An amputee beggar holds out his hat to a well-dressed man standing with his hands in his pockets. Translation of the artist’s caption: “I don’t give to idlers”. From Wikimedia Commons


The authors conclude:

‘Since the end of the Second World War, the size and scope of government welfare provision has come increasingly under attack.’

‘There are theoretical justifications for this, but we believe that the idea of ‘crowding out’ – public spending deterring private efforts – should not be one of them.’

‘On the contrary, there even seems to be evidence that government can set an example for private donors.’

Why does Europe have considerably higher welfare provision than the United States? One long-debated explanation is the existence of a ‘crowding out’ effect, whereby government spending crowds out private voluntary work and charitable giving. The idea is that taxpayers feel that they are already contributing through their taxes and thus do not contribute as much privately.

Crowding out makes intuitive sense if people are only concerned with the total level of welfare provided. But many other factors might play a role in the decision to donate privately and, in fact, studies on this topic have led to inconclusive results.

The idea of crowding out has also caught the imagination of politicians, most recently as part of the flagship policy of the UK’s Conservative Party in the 2010 General Election: the so-called ‘big society’. If crowding out holds, spending cuts could be justified by the notion that the private sector will take over.

The new study shows that this is not necessarily the case. In fact, the authors provide historical evidence for the opposite. They analyse data on per capita charitable income and public welfare spending in England between 1785 and 1815. This was a time when welfare spending was regulated locally under the Poor Laws, which meant that different areas in England had different levels of spending and generosity in terms of who received how much relief for how long.

The research finds no evidence of crowding out; rather, it finds that parts of the country with higher state provision of welfare actually enjoyed higher levels of charitable income. At the time, Poor Law spending was increasing rapidly, largely due to strains caused by the Industrial Revolution. This increase occurred despite there being no changes in the laws regulating relief during this period.

The increase in Poor Law spending led to concerns among contemporary commentators and economists. Many expressed the belief that the increase in spending was due to a disincentive effect of poor relief and that mandatory contributions through the poor rate would crowd out voluntary giving, thereby undermining social virtue. That public debate now largely repeats itself two hundred years later.


Summary of the article ‘Does Welfare Spending Crowd Out Charitable Activity? Evidence from Historical England under the Poor Laws’ by Nina Boberg-Fazlic (University of Duisburg-Essen) and Paul Sharp (University of Southern Denmark). Published in the Economic Journal, February 2017