The impact of new universities on regional growth: evidence from the United States 1930-80

by Alexandra López Cermeño, Lund University / Universidad Carlos III de Madrid

From ODU Twitter account

Universities generate growth spillovers beyond the local market. Analysing data on the universities founded in the United States between 1930 and 1980, my research shows that these institutions drove growth of GDP and population not only in the counties that hosted them, but also in neighbouring regions. But analysis of their longer-term impact suggests that although there are growth spillovers, the positive effect wears off if it is not periodically renewed.

The role of universities in generating growth is rarely contested. But most research tends to associate the presence of a university with long-term path dependency. In the era of knowledge and information, the role of universities as producers of new ideas and technologies is crucial to productivity. New light on this subject is needed not only to understand the role of cultural amenities but also to explore the spatial dynamics around them.

Long-term analysis comparing counties that received their first university between 1930 and 1980 with statistically similar counties that never got an institution shows that these new universities implied 20% more growth in terms of GDP. Moreover, the analysis shows that the new amenities eventually had an impact on neighbouring counties. These dynamics seem to be related to population migration.

This sizeable increase in GDP in these counties is matched by a similarly sized increase in population: new universities generate migratory movements of workers, which eventually lead to higher housing prices and higher costs of using other infrastructure. These higher costs motivate many workers to relocate to nearby areas where housing and infrastructure are less expensive and access to the amenity is still feasible.

The positive effect of new universities is therefore neutralised in the longer term unless further investments reduce congestion costs. Indeed, the role of infrastructure such as roads seems to explain a large share of the effect of universities.

But the interaction of universities and infrastructure seems to be defined by the decreasing importance of the latter: whereas physical access to infrastructure seemed to constrain the impact of new amenities before the 1950s, more recently established institutions seem no longer dependent on face-to-face contact.

There is further evidence on the role of knowledge dynamics in my study: in the earlier half of the period 1930-80, all that mattered was getting a new university in the county, whereas in the latter half of the period, the quality of the institution seems to have become much more relevant. Counties where research-intensive institutions were established during the period 1950-80 grew almost 40% more.

My analysis shows that the effect of new academic institutions during the twentieth century induced regional spatial dynamics in terms of migration and GDP. But it indicates that the impact of these new amenities was seriously constrained by the congestion of utilities, which limited the extent of growth to the short run.

Thus, it questions the extent of the impact generated by these institutions that is so praised in recent literature since it suggests that their growth dynamics are not self-sustaining: further investments are needed to keep up with the agglomeration forces that attract population and firms to these counties.

THE HEALTH AND HUMAN CAPITAL OF WAR REFUGEES: Evidence from Jewish migrants escaping the Nazis 1940-42

by Matthias Blum (Queen’s University Belfast) and Claudia Rei (Vanderbilt University)


At Europe’s doorstep, the current refugee crisis poses considerable challenges to world leaders. Whether refugees are believed beneficial or detrimental to future economic prospects, decisions about them are often based on unverified priors and uninformed opinions.

There is a vast body of scholarly work on the economics of international migration. But when it comes to the sensitive topic of war refugees, we usually learn about the overall numbers of the displaced while knowing next to nothing about the human capital of the displaced populations.

Our study, to be presented at the Economic History Society’s 2017 annual conference in London, contributes to this under-researched, and often hard to document, area of international migration based on a newly constructed dataset of war refugees from Europe to the United States after the outbreak of the Second World War.

We analyse Holocaust refugees travelling from Lisbon to New York on steam vessels between 1940 and 1942. For a time, the war made Lisbon the last major port of departure after all other options had shut down.

Escaping Europe before 1940 was difficult, but there were still several European ports providing regular passenger traffic to the Americas. The expansion of Nazi Germany in 1940 made emigration increasingly difficult and by 1942, it was nearly impossible for Jews to leave Europe due to mass deportations to concentration camps in the east.

The Lisbon migrants were wartime refugees and offer a valuable insight into the larger body of Jewish migrants who left Europe between the Nazi seizure of power in Germany in January 1933 and the invasion of Poland in September 1939.

The majority of migrants in our dataset were Jews from Germany and Poland, but we identify migrants from 17 countries in Europe. We define as refugees all Jewish passengers as well as their non-Jewish family members travelling with them.

Using individual micro-level evidence, we find that regardless of refugee status all migrants were positively selected – that is, they carried a higher level of health and human capital when compared with the populations in their countries of origin. This pattern is stronger for women than men.

Furthermore, refugees and non-refugees in our sample were no different in terms of skills and income level, but they did differ with respect to the timing of the migration decision. Male refugees were more positively selected if they migrated earlier, whereas women migrating earlier were more positively selected regardless of refugee status.

These findings suggest large losses of human capital in Europe, especially of women, in the years since the Nazis’ arrival in power seven years before the period we analyse in our data.

The civil war in Syria broke out six years ago in March 2011, making the analysis of the late Holocaust refugees all the more relevant. Syrian refugees fleeing war today are not just lucky to escape; they are probably also healthier and from a higher social background than the average in their home country.

Agency House Crises in India: What Role Did Indigo Play?

by Tehreem Husain

English, Dutch, and Danish factories at Mocha, c. 1680. Public domain picture

 

History provides us with many examples of asset bubbles that have led to systemic crises in the economy. Popular examples are the Tulip mania and the South Sea Bubble. This blog discusses the case of an indigo price bubble in nineteenth-century India, perhaps the first of its kind, which led to a contagion-like crisis in the economy.

Almost 17.4% of Indian GDP was derived from the agricultural sector in 2015-16, with nearly half of the Indian population dependent on agriculture and allied activities for their livelihood. This makes the smooth functioning of commodity markets of considerable importance to policymakers. Over time, there have been many episodes of commodity price surges and ensuing market volatility due to traditional demand-supply gaps, monetary stress and the financialization of commodity markets, including speculation (Varadi, 2012). What role did agriculture play in commodity market volatility during the late 18th and early 19th centuries? Little is known about perhaps the first asset bubble of its kind in India – the indigo crisis – the reasons attributed to it and the cost it imposed on different sectors of the economy.

With the advent of the East India Company, India was a global trade destination for a number of commodities including cotton, silk, indigo, saltpetre and tea. In order to trade these commodities with global markets, European traders needed banks to finance foreign trade. Indigenous bankers in India did not provide this particular banking function, and hence the East India Company diversified its business by introducing agency houses in Calcutta which, among other activities, also performed banking functions. These agency houses performed all the banking functions of receiving deposits, making advances and issuing paper money. Their responsibility for note circulation crucially helped them in carrying out their diversified lines of business as ship-owners, landowners, farmers, manufacturers, money lenders and bankers (Cooke, 1830). It was the agency house of Messrs. Alexander & Co. which started the first European bank in India, the Bank of Hindostan, in 1770 (Singh, 1966).

In the early nineteenth century these agency houses were tested for their endurance and continuance by three factors. Firstly, and most importantly, during the early 1820s agency houses borrowed money at low interest rates and invested it prodigally in indigo concerns, the crop being the only profitable means of remittance to Europe. The crisis multiplied when newly formed agency houses, besides investing capital in their own indigo concerns, fiercely competed with the old houses in making indiscriminate advances to indigo planters, paying little regard to the actual state of the market. Excessive demand for indigo fuelled prices in the mid-1820s and encouraged increased production of the commodity, which eventually led to a glut in the market and a sharp decline in its price. This rise and fall in prices is evident from the fact that the indigo price shot up from Rs. 130/maund in 1813 to Rs. 300 in 1824, and then fell to Rs. 145/maund in 1832 (Singh, 1966).

The second challenge, alongside indigo price volatility, was the start of the first Anglo-Burmese war in 1825. This further stressed monetary conditions, resulting in a scarcity of metal in Calcutta (Sinha, 1927).

Thirdly, in terms of the global landscape, this period marked the peak of an investment boom in Britain, characterized by an explosion of company promotions and bond issues by foreign governments, mining companies, railways, utilities, docks and steamships. In total, during 1824-25 some 624 companies hoping to raise £372 million were brought to the market. However, with the investment boom peaking in 1825, market conditions changed. Interest rates had risen, making borrowing more expensive, and investor sentiment had become more cautious, which eventually led to a panic, resulting in bank failures and bankruptcies (Brunnermeier & Schnabel, 2015).

In such times of local and global economic stress, several minor agency houses failed in 1827, which shook investor confidence in the remaining agency houses. A notable case is that of the agency house of Messrs. Palmer and Co., known as the ‘indigo king of Bengal’, which faced heavy withdrawals by its partners, leading eventually to the closure of its private bank and finally its own demise in 1830. This panicked the market and led to further withdrawals of capital investments.

During this period agency houses made desperate appeals to the government for financial relief, highlighting their importance in the Indian financial system of the time. In a minute dated 14th May 1830, Lord William Bentinck, Governor-General of India from 1828-35, accentuated the systemic importance of agency houses. He highlighted that not only would there be a dislocation of trade in some staple commodities, but any damage to the ‘conglomerate’ nature of the agency houses would cause severe disruptions in other industries, most notably shipping. Finally, loans were granted to these houses in the form of treasury notes bearing 6 percent interest.

Despite the monetary aid provided by the government, the wave of agency house failures could not be curbed. More agency houses failed in January 1832. In addition, the unexpected fall in the price of indigo created difficulties for one of the biggest agency houses, Messrs. Alexander & Co. It is important to note that the relief package came with stringent conditions: the houses were obliged to withdraw their bank notes from circulation and were given an extended period for the payment of their debts provided they ended their banking operations (Savkar, 1938). This resulted in the demise of the Bank of Hindostan and the Commercial Bank.

Overall, seven great agency houses of Calcutta failed within a short span of four years, with detrimental effects on the Indian economy of the time. It may be concluded that speculation in indigo and the mixing of trading and agency business were the pivotal reasons behind the failure of these agency houses. More importantly, this episode of a commodity price bubble spreading its tentacles through the entire economy had a phenomenal impact on the structure of business. It is recorded that from a handful of firms in the years before 1850, there were 170 firms working as joint stock organizations in 1868. The first commercial register to identify firms with tradable stock was established in 1843 and listed eight firms (Aldous, 2015). The joint stock organizational form also entered banking; a key example is the rise of the Union Bank of Calcutta (Cooke, 1830). The crisis also led to the establishment of a number of private banks by British expats (Jones, 1995).

 

THE INEFFECTIVENESS OF GOVERNMENT EFFORTS TO PROMOTE PRODUCTS MADE AT HOME: Evidence from the ‘Buy British’ campaigns of the 1960s and 1980s

by David Clayton (University of York) and David Higgins (Newcastle University)


Campaigns to promote the purchase of domestic manufactures feature prominently during national economic crises. The key triggers of such schemes include growing import penetration and concern that consumers have been misled into purchasing foreign products instead of domestic ones. Early examples of such initiatives occurred in the United States in 1890 and 1930, with the introduction of the McKinley tariff and the ‘Buy American’ Act, respectively.

In Britain, similar schemes were launched during the interwar years and in the post-1945 period. Over the latter period, Britain’s share of world trade in manufactures declined from 25% to 10%, and between 1955 and 1980, import penetration in the manufacturing sector increased from 8% to 30%.

Simultaneously, there were numerous government public policy interventions designed to improve productivity, for example, the National Economic Development Council and the Industrial Relations Commission. Both Labour and Conservative governments were much more interventionist than today.

Currently, the rise of protectionist sentiment in the United States and across Europe may well generate new campaigns to persuade consumers to boycott foreign products and give their preference to those made at home. Indeed, President Trump has vowed to ‘Make America Great Again’: to preserve US jobs he has threatened to tax US companies that import components from abroad.

Using a case study of the ‘Buy British’ campaigns of the 1960s and 1980s, our research, to be presented at the Economic History Society’s 2017 annual conference in London, considers what general lessons can be learned from such initiatives and why, in Britain, they failed.

Our central arguments can be summarised as follows. In the 1960s, before Britain acceded to the European Economic Community, there was considerable scope for a government initiative to promote ‘British’ products. But a variety of political and economic obstacles blocked a ‘Buy British’ campaign. During the 1980s, there was less freedom of manoeuvre to enact an official policy of ‘Buy British’ because by then Britain had to abide by the terms of the Treaty of Rome.

In the 1960s, efforts to promote ‘Buy British’ were hindered by the reluctance of British governments to lead on this initiative because of Treasury constraints on national advertising campaigns and a general belief that such a campaign would be ineffective.

For example, the nationalised industries, which were a large proportion of the economy at this time, could not be used to spearhead any campaign because they relied on industrial and intermediate inputs, not consumer durables; and in any case, the ability of these industries to direct more of their purchases to domestic sources was severely constrained: total purchases by all nationalised industries in the early 1970s were around £2,000 million, of which over 90% went to domestic suppliers.

Efforts to nudge private organisations into running these campaigns were also ineffective. The CBI refused to take the lead on a point of principle, arguing that ‘A general campaign would… conflict with [our] view that commercial freedom should be as complete as possible. British goods must sell on their merits and their price in relation to those of our competitors, not because they happen to be British’.

During the 1980s, government intervention to promote ‘Buy British’ would have contravened Britain’s new international treaty obligations. The Treaty of Rome (1957) required the liberalisation of trade between members, the reduction and eventual abolition of tariffs and the elimination of measures, such as promotion of ‘British’ products, ‘having equivalent effect’. Attempts by the French and Irish governments to persuade their consumers to give preference to domestic goods were declared illegal.

The only way to overcome this legislative restriction was if domestic companies chose to mark their products as ‘British’ voluntarily. This was not a rational strategy for individual firms to follow. Consumers generally prefer domestic to foreign products.

But when price, quality and product-country images are taken into account, rather than origin per se, the country-of-origin effect is weakened considerably. From the perspective of individual firms promoting their products, using a ‘British’ mark risked devaluing their pre-existing brands by associating them with inferior products.

Our conclusion is that in both periods, firms acting individually or collectively (via industry-wide bodies) did not want to promote their products using ‘British’ marks. Action required top-down pressure from government to persuade consumers to ‘Buy British’. In the 1960s, there was no consensus within government in favour of this position, and, by the 1980s, government intervention was illegal due to international treaty obligations.

In a post-Brexit Britain, with a much weakened manufacturing capacity compared even with the 1960s and 1980s, the case for the government to nudge consumers to ‘Buy British’ is weak.

Extractive Policies and Economic Outcomes: the Unitary Origins of the Present-Day North-South of Italy Divide

by Guilherme de Oliveira (Columbia Law School) and Carmine Guerriero (University of Bologna)


Italy emerged from the Congress of Vienna as a carefully thought-out equilibrium among eight absolutist states, all under the control of Austria except the Kingdom of the Two Sicilies, dominated by the Bourbons, and the Kingdom of Sardinia, ruled by the Savoys and erected as a barrier between Austria and France. This status quo fed the ambitions of the Piedmontese lineage, turning it into the champion of the liberals, who longed to establish a unitary state by fomenting the unrest of the beginning of the century. Although ineffective, these insurrections forced the implementation, especially in the South, of the liberal reforms first introduced by the Napoleonic armies, and allowed a rising bourgeoisie, attracted by expanding international demand, to acquire the nobility’s domains and prioritize export-oriented farming. Among these activities, arboriculture and sericulture, which were up to 60 times more lucrative than wheat growing, soon became dominant, constituting half of the 1859 exports. Consequently, farming productivity increased, reaching similar levels in the Northern farms and the Southern latifundia, but the almost exclusive specialization in the agrarian sectors left the Italian economy stagnant, as implied by the evolution of GDP per capita in the regions in our sample.

We group these regions by their political relevance for the post-unitary rulers, as inversely proxied by Distance-to-Enemies (see upper-left graph of figure 1). This is the distance between each region’s main city and the capital of the fiercest enemy of the Savoys—i.e., Vienna over the 1801-1813, 1848-1881, and 1901-1914 periods, and Paris otherwise—and is lowest for Veneto, which we therefore label the “high” political relevance cluster. Similarly, we refer to the regions with above(below)-average values as the “low” (“middle”) political relevance group or “South”, and to the union of the high-middle relevance regions and the key Kingdom of Sardinia regions—i.e., Liguria and Piedmont—as “North.”

 

Figure 1: Income, Political Power, Land Property Taxes, and Railway Diffusion

Note: “GDP-L” is income in 1861 lire per capita, “Political-Power” is the share of prime ministers born in the region averaged over the previous decade, “Land-Taxes” is land property tax revenue in 1861 lire per capita, and “Railway” is the railway length built in the previous decade in km per square km. _M (_H) includes Abruzzi, Emilia Romagna, Lombardy, Marche, Tuscany, and Umbria (Veneto), whereas KS gathers Liguria and Piedmont. The North cluster includes the M, H, and KS groups; the _L (“South”) cluster includes Apulia, Basilicata, Calabria, Campania, Lazio, and Sicily. See de Oliveira and Guerriero (2017) for each variable’s sources and definition.

 

Despite some pre-unitary differences, both clusters were largely underdeveloped with respect to the leading European powers at unification, and the causes of this backwardness ranged from the scarcity of coal and infrastructure to the shortage of human and real capital. Crucially, none of these conditions was significantly different across groups since, unlike the Kingdom of Sardinia, none of the pre-unitary states established a virtuous balance between military spending and investment in valuable public goods such as railways and literacy. Even worse, they intensified taxation only when necessary to finance the armies needed to tame internal unrest, which was especially fierce in the Kingdom of the Two Sicilies. The bottom graphs of figure 1 exhibit this pattern by displaying the key direct tax, the land property duty, and the main non-military expenditure, railway investment.

Meanwhile, the power of the Piedmontese parliament relative to the king grew steadily, and its leader Camillo di Cavour succeeded in securing an alliance with France in a future conflict against Austria by supporting the French in the Crimean War. The 1859 French-Piedmontese victory against the Habsburgs then triggered insurrections in Tuscany, the conquest of the South by Garibaldi, and the proclamation of the Kingdom of Italy in 1861.

Dominated by a narrow elite of northerners (see upper-right graph of figure 1), the new state favoured the Northern export-oriented farming and manufacturing industries, directing public spending to the North while drawing the taxes necessary to finance these policies from the Southern populations. To illustrate, the 1887 protectionist reform, instead of safeguarding the arboriculture sectors crushed by the 1880s fall in prices, shielded the Po Valley wheat growing and those Northern textile and manufacturing industries that had survived the liberal years thanks to state intervention. While the former dominated the allocation of military clothing contracts, the latter monopolized both coal mining permits and public contracts. A similar logic guided the assignment of monopoly rights in the steamboat construction and navigation sectors and, notably, public spending on railways, which represented 53 percent of the 1861-1911 total. Over this period, Liguria and Piedmont received railway spending per square km 3 (4) times larger than Veneto (the other regions). Moreover, the aim of this effort “was more the military one of controlling the national territory, especially in the South, than favouring commerce” [Iuzzolino et al. 2011, p. 22]. Crucially, this infrastructural program was financed through highly unbalanced land property taxes, which in turn affected the key source of savings available for investment in the growth sectors absent a developed banking system.
The 1864 reform fixed a 125 million target revenue to be raised from 9 districts resembling the pre-unitary states. The ex-Papal State took on 10 percent, the ex-Kingdom of the Two Sicilies 40, the rest of the state 29, and the ex-Kingdom of Sardinia only 21. To weigh this burden down further, a 20 percent surcharge was added by 1868, creating the disparities displayed in the bottom-left graph of figure 1.

The 1886 cadastral reform opened the way to more egalitarian policies and, after the First World War, to the harmonization of tax rates, but the impact of extraction on the economies of the two blocks was by then irreversible. While a flourishing manufacturing sector was established in the North, the mix of low public spending and heavy taxation squeezed Southern investment to the point that the local industry and export-oriented farming were wiped out. Moreover, extraction destroyed the relationship between the central state and the Southern population, first unleashing a civil war, which brought about 20,000 victims by 1864 and the militarization of the area, and then favouring emigration. Because of these tensions, the population started to display a progressively weaker civic culture, as implied by the fall in our proxy for social capital depicted in the bottom-left graph of figure 2.

The fascist regime’s aversion to migration and its rush to rearmament first, and the 1960s pro-South state aid later, further affected the divide, which can thus be safely attributed to the extractive policies selected by the unitary state between 1861 and 1911.

Empirical Evidence

Because the 13 regions remained agrarian over our 1801-1911 sample, we capture the extent of extraction with land property taxation, and farming productivity with the geographic drivers of the profitability of the arboriculture and sericulture sectors. In addition, we use as an inverse metric of each region’s tax-collection costs (political relevance) the share of the previous decade in which the region partook in external wars (Distance-to-Enemies).

Our OLS estimates with region and time fixed effects imply that pre-unitary revenues from land property taxes in 1861 lire per capita decreased with each region’s farming productivity but not with its relevance for the Piedmontese elite, whereas the opposite was true post-unification. Moreover, post-unitary distortions in land property tax revenues—proxied by the difference between the observed revenues and the counterfactual ones forecasted through pre-unitary estimates (see upper-left graph of figure 2)—and the severity of the other extractive policies—negatively captured by tax-collection costs and political relevance (see below)—positively determined the opening gaps in culture, literacy (see bottom-right graph of figure 2), and development, i.e., income in 1861 lire per capita, the gross saleable farming product, and the textile industry value added in thousands of 1861 lire per capita.

 

Figure 2: The Rise of the North-South Divide

Note: “Distortion-LT” is the land property tax distortion in 1861 lire per capita, “Distortion-R” is the difference between Railway and the forecasted length of railway built in the previous decade in km per square km, “Culture-N” is the normalized share of the active population engaged in political, union, and religious activities, and “Illiterates-N” is the normalized percentage of illiterates in the population over the age of six. See figure 1 for each cluster definition and de Oliveira and Guerriero (2017) for each variable’s sources and definition.

 

These results are consistent with the predictions of the model we lay out to inform our test. First, because of limited state capacity, the pre-unitary states should reduce extraction when confronted by a more productive, and therefore more powerful, citizenry, whereas the extractive power of the unitary state should be sufficiently strong to make taxation of the South profitable at the margin and thus crucially shaped by its political relevance. Second, extraction should also induce the Southern citizenry to prefer private to public good production, and its investment and welfare should rise with the factors limiting taxation, i.e., marginal tax-collection costs and political relevance.

Since our proxies for the drivers of extraction are driven by either geographic features independent of human effort or events outside the control of policy-makers, reverse causation is not an issue. Nevertheless, our results could still be produced by unobserved heterogeneity. To evaluate this possibility, we control for the interactions of time effects with the structural conditions differentiating the two blocks in 1861 and considered key by the extant literature (Franchetti and Sonnino, 1876; Gramsci, 1966; Barbagallo, 1980; Krugman, 1981), i.e., the pre-unitary inclusiveness of political institutions, land ownership fragmentation, the coal price, and the railway length. Including these controls has little effect on our results. Finally, two extra pieces of evidence rule out the possibility that extraction was an acceptable price for Italian development (Romeo, 1987). First, it did not shape the manufacturing sector value added. Second, while pre-unitary railway additions were driven only by farming productivity, post-unitary ones were driven only by political relevance, proving useless in creating a unified market (see upper-right graph of figure 2).

Conclusions

Although the North-South divide has been linked to post-unitary policies before (Salvemini 1963; Cafagna, 1989), nobody has formally clarified how the unitary state solved the trade-off between extraction-related losses and rent-seeking gains. In doing so, we also contribute to the literature comparing extractive and inclusive institutions (North et al., 2009; Acemoglu and Robinson, 2012), endogenizing, however, the extent of extraction in a setup sufficiently general to be applied to other instances, such as the post-Civil War USA.


The TOWER OF BABEL: why we are still a long way from everyone speaking the same language

Nearly a third of the world’s 6,000-plus distinct languages have more than 35,000 speakers. But despite the big communications advantages of a few widely spoken languages such as English and Spanish, there is no sign of a systematic decline in the number of people speaking this large group of relatively small languages.


These are among the findings of a new study by Professor David Clingingsmith, published in the February 2017 issue of the Economic Journal. His analysis explains how it is possible to have a stable situation in which the world has a small number of very large languages and a large number of small languages.

Does this mean that the benefits of a universal language could never be so great as to induce a sweeping consolidation of language? No, the study concludes:

‘Consider the example of migrants, who tend to switch to the language of their adopted home within a few generations. When the incentives are large enough, populations do switch languages.’

‘The question we can’t yet answer is whether recent technological developments, such as the internet, will change the benefits enough to make such switching worthwhile more broadly.’

Why don’t all people speak the same language? At least since the story of the Tower of Babel, humans have puzzled over the diversity of spoken languages. As with the ancient writers of the book of Genesis, economists have also recognised that there are advantages when people speak a common language, and that those advantages only increase when more people adopt a language.

This simple reasoning predicts that humans should eventually adopt a common language. The growing role of English as the world’s lingua franca and the radical shrinking of distances enabled by the internet have led many people to speculate that the emergence of a universal human language is, if not imminent, at least on the horizon.

There are more than 6,000 distinct languages spoken in the world today. Just 16 of these languages are the native languages of fully half the human population, while the median language is known by only 10,000 people.

The implications might appear to be clear: if we are indeed on the road to a universal language, then the populations speaking the vast majority of these languages must be shrinking relative to the largest ones, on their way to extinction.

The new study presents a very different picture. The author first uses population censuses to produce a new set of estimates of the level and growth of language populations.

The relative paucity of data on the number of people speaking the world’s languages at different points in time means that this can be done for only 344 languages. Nevertheless, the data clearly suggest that the populations of the 29% of languages that have 35,000 or more speakers are stable, not shrinking.

How could this stability be consistent with the very real advantages offered by widely spoken languages? The key is to realise that most human interaction has a local character.

This insight is central to the author’s analysis, which shows that even when there are strong benefits to adopting a common language, we can still end up in a world with a small number of very large languages and a large number of small ones. Numerical simulations of the analytical model produce distributions of language sizes that look very much like the one that actually obtains in the world today.
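The paper’s formal model is not reproduced here, but the core intuition — adoption benefits that grow with a language’s prevalence, operating through mostly local interaction — can be illustrated with a toy simulation. This is a hypothetical sketch, not Clingingsmith’s model: agents on a ring repeatedly copy the language of a nearby neighbour, so locally common languages are proportionally more likely to be adopted, yet small languages can persist in pockets rather than being swept away.

```python
import random
from collections import Counter

def simulate(n_agents=2000, n_langs=200, steps=20000, radius=5, seed=0):
    """Toy model: n_agents sit on a ring, each speaking one of n_langs
    languages. Each step, one random agent copies the language of a
    randomly chosen neighbour within `radius` positions. A language that
    is common in the local neighbourhood is proportionally more likely
    to be copied -- a local adoption benefit -- but agents never
    interact with the whole population at once."""
    rng = random.Random(seed)
    langs = [rng.randrange(n_langs) for _ in range(n_agents)]
    offsets = [d for d in range(-radius, radius + 1) if d != 0]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        j = (i + rng.choice(offsets)) % n_agents  # a nearby neighbour
        langs[i] = langs[j]                       # adopt their language
    return Counter(langs)  # language -> number of speakers

sizes = sorted(simulate().values(), reverse=True)
print(len(sizes), "languages survive; largest:", sizes[0],
      "median:", sizes[len(sizes) // 2])
```

With local interaction the distribution becomes skewed — a few languages grow large while many small ones survive in geographic pockets — which is the qualitative pattern the article describes; replacing the neighbour draw with a draw from the whole population would instead drive rapid consolidation.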

Summary of the article ‘Are the World’s Languages Consolidating? The Dynamics and Distribution of Language Populations’ by David Clingingsmith, published in the Economic Journal, February 2017.