By Alexandra L. Cermeño and Kerstin Enflo (Lund University)
Urban growth is crucial for modernisation, and the wave of new towns in China since the 1980s is one example of a strategy employed by policymakers to encourage the process. This column analyses the long-run success of a town foundation policy in Sweden between 1570 and 1810. While the ‘artificially’ created towns failed to grow in the short term, they eventually began to grow and thrive, and today are as resilient as their medieval counterparts.
The founding of new towns has been at the core of urban planning since the onset of civilisation. In recent times, policymakers have shown renewed interest in the creation of towns to channel regional economic growth. A prominent example is China, where a large-scale urban planning programme began in the 1980s to cope with the pressure of a growing urban population. The idea was to relocate hundreds of millions of rural inhabitants to live in purpose-built towns. Western media have branded these new towns as ‘ghost towns’, as ‘bridges to nowhere’, or as towns in search of populations.
The paper was published in The Economic History Review and is available here on early view.
What do we know about how a market economy operates in the immediate aftermath of a major natural disaster such as an earthquake? Actually, less than you might think. Specialists in disaster studies have understandably focussed on resilience, relief and reconstruction. The economics of disasters has offered limited frameworks for addressing this kind of question, although major natural disasters have usefully served in analysis as exogenous shocks or natural experiments. Yet rebuilding economic activity following a major natural disaster must surely be helped by a better understanding of the mechanisms through which the disaster affects market activity, and of the response of economic actors, both individual and collective. Our work builds on the idea that the analysis of markets in the short-term recuperation phase is best undertaken using the laws of supply and demand, an argument put forward in one of the classic works of the economics of disasters back in 1969.
The Great Kantō Earthquake
The so-called Great Kantō Earthquake of September 1923 in Japan devastated the cities of Tokyo and Yokohama and much of the surrounding area. The location of the disaster is shown on the map.
While significant damage was caused by the seismic shocks and subsequent tsunami, much of the destruction and many of the casualties (well over 100,000 died) were inflicted by the fires that broke out following the earthquake. The physical destruction was immense, and affected not only the country’s political capital but also its largest urban agglomeration and its major export port. The Kantō plain was also in many respects the hub of an increasingly integrated national economy.
Our concern in this article has been to analyse the shifts in the availability of, and demand for, different commodities following the 1923 disaster, with a view to identifying the magnitude and duration of such shifts. We have done this through an analysis of price changes for different products over the period before and after the disaster, something that allows us to explore what economists have termed the ‘ripple effects’ of natural disasters.
Our data confirmed that the economic impact of the disaster was far from being confined to the area of destruction. Price changes were experienced across the Japanese archipelago. We did find, though, that the extent of any change tended to diminish the greater the distance from the capital area.
It was also clear that the impact on prices was somewhat stronger in the north and northeastern half of Japan, a region that had traditionally been more closely integrated with Tokyo, than was the case in the southwest, which had long been more closely integrated with the urban area around the city of Osaka. The variation and pattern of the price changes we identified conformed with what we know about patterns of market integration in Japan in the early 20th century.

At the same time, the ripples were in many cases less than might have been expected, and certainly less than contemporary reports suggested. Nor were they in most cases of a significant duration. While there were some initially significant price rises associated with a sudden demand for reconstruction goods, for example, price levels tended to fall again within a few months, suggesting a move back towards some kind of market equilibrium. The pattern of change varied according to the product; a diversity of factors affected supply and demand for different products, and understanding these factors, as well as the pattern of government and institutional intervention in some markets, requires further analysis. It is also the case that analysis of retail prices, for which data are much more difficult to obtain, might well show a somewhat different story from the wholesale price data set that we have been able to compile. Retail prices are, of course, much more difficult to control and a better reflection of what the consumer actually had to cope with.
Overall, though, our analysis confirms that at this time Japan was a relatively well integrated economy. Our findings that prices reverted relatively rapidly toward equilibrium are in line with most other economic indicators showing that there was a relatively rapid reversion to former trends. The disaster, in short, was a short-term exogenous shock from which Japan soon recovered. That does not mean that it did not matter. We still have insufficient knowledge of the factors accelerating or limiting the spread and duration of any price changes following a natural disaster of this kind, and, crucially, of any implications for the longer term consequences of such a shock for the economy as a whole. For contemporary disaster studies, understanding these factors is one of the keys to recovery and the building of resilience.
D. Dacy and H. Kunreuther, The Economics of Natural Disasters (New York, 1969).
by Gareth Austin (University of Cambridge) and Leigh Shaw-Taylor (University of Cambridge)
This paper was presented at the EHS Annual Conference 2019 in Belfast.
The general story in the research literature is that under colonial rule, from around the 1890s to around 1960, African economies became structured around exports of primary products, and that this persisted through the unsuccessful early post-colonial policies of import-substituting industrialisation, was entrenched by ‘structural adjustment’ in the 1980s, and has continued through the relatively strong economic growth across the continent since around 1995.
Our research offers a preliminary overview of the AFCHOS project, an international collaboration involving 20 scholars currently preparing 15 national or sub-national case studies. The discussion is organised in two sections.
Section I describes how, by creating country databases as an essential first step, we aim to develop the first overview of changing occupational structures across sub-Saharan Africa, from the moment when the necessary data became available in the country concerned to the present.
We track the shifts between agriculture, extraction, the secondary sector and services, and explore the trends in specific occupational groups within each of these sectors. We also examine the closely related process of urbanisation.
The core of the enterprise is the construction of datasets that reflect without distortion the specificities of African conditions, are commensurable across the continent, and are also commensurable with the datasets developed by parallel projects on the occupational structures of Eurasia and the Americas.
Section II outlines preliminary findings. It is centred on four graphs, depicting the evolution of the share of the economically active population in each sector, for about 14 countries. We relate these to the indications of the evolution of the size and location of population, and the size and composition of GDP.
The population of sub-Saharan Africa has increased perhaps six times since the influenza pandemic of 1918, and average living standards have not fallen: a remarkable achievement in terms of aggregate economic growth, and one that has not been sufficiently appreciated.
It is also striking that the multiplication of population, enabled by falling mortality rates, was accompanied by rapid urbanisation. There were also improvements in living standards, though modest and uneven.
Agriculture’s share in employment generally fell, especially after 1960. The share of manufacturing evolved quite differently over space and time within Africa, as we will elaborate.
Urbanisation has been accompanied by a general growth of employment in services. Where we have disaggregated the latter, so far, there has been dramatic growth in transport and distributive trades, suggesting increasing integration of national and regional economies – an important step in economic development.
by Jim Tomlinson (Economic and Social History, University of Glasgow)
This research will be presented during the EHS Annual Conference in Belfast, April 5th – 7th 2019. Conference registration can be found on the EHS website.
The huge loss of industrial employment – ‘de-industrialisation’ – has been one of the most important economic and social changes in Britain since the Second World War. But its timing, causes and effects are often misunderstood.
My study of Dundee, a typical post-industrial city, makes it possible to examine de-industrialisation in detail and to demonstrate aspects of the process relevant to the whole country. The key messages, which I will present at the Economic History Society’s 2019 annual conference, are as follows:
De-industrialisation in Britain began in the 1950s: since then, the proportion of industrial jobs has shrunk from over 50% to around 15%, with the fall in manufacturing jobs even more dramatic.
De-industrialisation was greatly accelerated by the ‘Thatcherite’ policies of the 1980s, but the process began long before that date.
In particular, the ‘old staple’ industries, such as textiles, coal and the railways, lost more workers in the 1950s and 1960s than in the 1980s.
De-industrialisation was not mainly caused by the recent phase of ‘globalisation’.
The most important causes were technological change and shifts in patterns of consumption.
De-industrialisation doesn’t mean ‘we don’t make anything any more’; the trend in industrial output was upwards until the 1970s and has been roughly flat since then, but higher productivity means it takes far fewer workers to produce this output.
Most job losses arose from either long, slow attrition of employment levels in existing firms, or the slow growth of new jobs, not from dramatic, large-scale closures.
De-industrialisation matters especially because it has polarised the labour market much more into ‘lovely and lousy’ jobs; ‘lovely’ jobs are well-paid and relatively secure, while ‘lousy’ jobs are poorly paid and precarious.
The number of ‘lovely’ jobs, such as professionals, administrators, managers and technicians, has increased across all sectors of the economy, including industry.
The number of ‘lovely’ jobs has been particularly increased by the expansion of public sector employment, especially in health and education, and the numbers in these areas have barely been affected by recent austerity (unlike employment in local authorities).
Public sector ‘outsourcing’ has increased the polarisation of the labour market, as many of the outsourced jobs have been the low-skilled ones where public employment previously provided some protection against the impact of weak bargaining power.
‘Lovely’ jobs commonly require significant educational qualifications, and average educational achievement has shot up in the period of de-industrialisation, especially in universities. Universities in turn have been a significant source of expansion of ‘lovely’ jobs.
The disadvantages of low educational attainment have been magnified by de-industrialisation, which makes access to ‘lovely’ jobs almost entirely reliant on high levels of attainment.
The transition from the dominance of industry has pushed many people out of the labour market, something that is evident not only in unemployment but also in much higher levels of long-term sickness and disability.
As a result of this transition, there has been a large increase in self-employment, much of which is poorly paid.
by Kerstin Enflo (Lund University), Anna Missiaia (Lund University) and Joan Rosés (LSE)
This research will be presented during the EHS Annual Conference in Belfast, April 5th – 7th 2019. Conference registration can be found on the EHS website.
Fast urbanisation is a phenomenon often associated with the image of African or Asian mega-cities, but rural-to-urban migration is also a European phenomenon (see the growth experienced by large capitals such as London and Paris, but also smaller ones such as Stockholm and Copenhagen). And according to United Nations forecasts, the urbanisation trend will continue, with an estimated 2.5 billion people added to the world’s urban population by 2050.
The first question that comes to mind is whether urbanisation triggers economic growth, and therefore should be favoured by policy-makers, as suggested by eminent scholars such as Ed Glaeser (The Triumph of the City: How Our Best Invention Makes Us Richer, Smarter, Greener, Healthier, and Happier, 2011) or Richard Florida (The Rise of the Creative Class, 2002).
Although this relationship is overall positive, the paradigm has been challenged with respect to African mega-cities: their urbanisation rate takes off in periods of growth but it does not immediately decrease in periods of recession. As cities continue to grow in size, but fail to grow in GDP per capita, their inhabitants experience falling income levels, ultimately leading to falling living standards (Fay and Opal, 2000).
If we look at Europe, urbanisation without growth does not appear to be an issue when countries are the units of analysis. But the national picture may be dominated by the success stories of the large capitals, concealing the less clear-cut fortunes of middle-sized, declining cities. Net of the big successful capitals, many cities that thrived during the post-war period are now struggling, with clear economic, social and political consequences.
Our work, to be presented at the Economic History Society’s 2019 annual conference, contributes to the debate by looking at this relationship for the first time at the regional, rather than national level, using urbanisation rates and GDP per capita in EU regions in the twentieth century. The regional dimension makes it possible to disentangle the effects of urbanisation from the effects of being the capital’s region.
Our main findings are that the relationship between urbanisation and growth is positive and significant until the middle of the twentieth century, while it is not significant in recent years. We therefore observe a progressive decoupling of regional urbanisation and economic growth. The effect on growth of the presence of the capital in the region is very large: between 60% and 70% of that of urbanisation until the mid-twentieth century.
When looking at macro areas, both Southern Europe and Northern Europe show no statistically significant relationship between urbanisation and economic growth, suggesting that regions containing urban areas without the status of capital do not necessarily grow more than regions without such urban areas.
This is consistent with the idea of a ‘winner-take-all urbanism’ presented by Florida (The New Urban Crisis, 2017) in which there is a growing divide between the winner cities (London, New York, Paris, San Francisco) and the rest.
In the winner cities, the middle class, the service class and the working class are priced out by highly paid creative workers. In the rest of the cities, where creative workers are not based, the middle class declines without being replaced by the new rich.
Our results are relevant for policy-makers as they challenge the view that urbanisation per se is a strong channel for economic growth regardless of the period and geographical area considered.
Britain’s unusually high house price to income ratio plays an important role in reducing living standards and increasing “housing poverty”. This article shows that Britain’s housing shortage partly stems from deliberate long-term government policies aimed at restricting both public and private sector house-building. From the 1950s to the early 1980s, successive governments reduced housing starts as part of ‘stop-go’ macroeconomic policy, with major cumulative impacts.
This policy had its roots in the Second World War, when an influential coalition of Bank of England and Treasury officials pressed for a post-war policy of savage deflation, to restore sterling’s credibility and re-establish London as a major financial centre. John Maynard Keynes warned that prioritising international ‘obligations’ over the war-time commitment to build a fairer society would be repeating the 1920s gold standard error – though his direct influence ended with his untimely death. Deflationary policy proved politically impracticable in the short-term, as evidenced by Labour’s 1945 landslide election victory, though its supporters bided their time and were able to implement much of their agenda in the changed political climate of the 1950s.
The Conservatives’ 1951 election victory was based on a pledge to build 300,000 new homes per year. This was achieved in 1953 and building peaked at 340,000 completions in 1954. However, officials took advantage of the 1955-57 credit squeeze to press for severe cuts in housing investment. Municipal house-building was cut, while private house-building was depressed largely through restricting the growth of building society funds (by pressurising the building societies’ cartel to keep interest rates at such low levels that they were starved of mortgage funds). While the severity of policy varied over time, these restrictions were maintained almost continually until the early 1980s.
These restrictions were never formally announced and were hidden from Cabinet for much of this period. Meanwhile, given the political importance of housing, the Conservative government simultaneously proposed ever-larger housing targets (culminating in a 1964 election pledge to build 400,000 per annum). This created a perverse situation, whereby the government was spending substantial sums on highly publicised policies to increase demand for private housing (such as the 1959 House Purchase and Housing Act and the 1963 abolition of Schedule A income tax), while covertly reducing housing supply through restricting mortgage funding, limiting building firms’ access to credit, and reducing municipal housing investment. The following Labour government found itself drawn into a similarly restrictive housing policy, as part of its ill-fated commitment to avoid sterling devaluation (arguably based on misleading Treasury advice), while housing restrictions were also used as an instrument of macroeconomic stabilisation in the 1970s.
A 1974 Bank of England analysis found that this policy had created both an exaggerated housing cycle and a structural deficit (with house-building being held below market-clearing levels at all points in the cycle). This had in turn reduced the capacity of the housing market to respond to rising demand, by reducing builders’ land banks, building materials capacity, and building labour, which raised house prices while lowering productivity and technical progress. There is also evidence of “learning effects” by house-builders, who avoided expanding their activities during cyclical upturns, as they correctly perceived that tighter government restrictions might be imposed before their houses were ready to sell. These pressures fuelled house price inflation, both directly, and because housing became increasingly regarded as a hedge against inflation.
Figure 1: Capital formation in dwellings, as percentage of total capital formation, and housing completions per thousand families, private houses and all houses, 1924-38 and 1954-79
British house-building during this era compared unfavourably to inter-war levels, as shown in Figure 1. Moreover, private house-building was even more depressed than total housing, as the Treasury found it easier to covertly restrict private housing than to reduce municipal building starts, where policy was more open to Cabinet and public scrutiny. British gross domestic fixed capital investment in housing was also very low relative to other European nations. Our time-series econometric analysis for 1955-1979 corroborates the ‘success’ of the restrictions and also shows the predicted asymmetric impact in ‘stop’ and ‘go’ phases of policy. This is an important finding: stop-go policy is often examined only in terms of the volatility of the variable under examination, based on the unrealistic assumption that industry would fail to realise that demand upturns might be rapidly terminated by the re-imposition of controls.
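The asymmetry described above can be illustrated with a small sketch using synthetic data; the variable names and magnitudes below are hypothetical, not the study's actual model or dataset. The idea is that housing starts are assumed to fall sharply when policy tightens but recover only partially when it loosens, and splitting the policy change into its positive and negative parts lets ordinary least squares estimate the two responses separately:

```python
# Illustrative sketch with synthetic data: estimating an asymmetric response
# of housing starts to 'stop' (tightening) and 'go' (loosening) phases.
# All variable names and magnitudes here are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 300

# Hypothetical restriction index: higher = tighter housing-credit policy
policy = np.cumsum(rng.normal(0, 1, n))
d_policy = np.diff(policy)
tighten = np.where(d_policy > 0, d_policy, 0.0)  # 'stop' moves
loosen = np.where(d_policy < 0, d_policy, 0.0)   # 'go' moves

# Assumed asymmetry: starts fall steeply on tightening (-30 per unit)
# but recover only weakly on loosening (-10 per unit)
d_starts = -30 * tighten - 10 * loosen + rng.normal(0, 5, n - 1)

# OLS with separate tightening/loosening regressors recovers both slopes,
# which a single symmetric policy coefficient would average away
X = np.column_stack([np.ones(n - 1), tighten, loosen])
beta, *_ = np.linalg.lstsq(X, d_starts, rcond=None)
print(f"tightening effect: {beta[1]:.1f}, loosening effect: {beta[2]:.1f}")
```

Examining only the volatility of housing starts would miss this: the same variance is consistent with symmetric or asymmetric responses, whereas the split-regressor design distinguishes them.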
Housing restriction policy has persisting consequences. Additions to the housing stock were depressed for several decades, while the inflationary-hedge benefits of house purchase became a self-fulfilling prophecy. Meanwhile, restrictive planning policy (which was substantially intensified in the 1950s as a further measure of housing restriction) has proved difficult to reverse. Average house-price to income ratios have thus continued the upward trend established in this era, currently excluding a substantial and growing proportion of the population from owner-occupation.
Only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. That is one of the findings of research by Guy Michaels (London School of Economics) and Ferdinand Rauch (University of Oxford), which uses the contrasting experiences of British and French cities after the fall of the Roman Empire as a natural experiment to explore the impact of history on economic geography – and what leads cities to get stuck in undesirable locations, a big issue for modern urban planners.
The study, published in the February 2018 issue of the Economic Journal, notes that in France, post-Roman urban life became a shadow of its former self, but in Britain it completely disappeared. As a result, medieval towns in France were much more likely to be located near Roman towns than their British counterparts. But many of these places were obsolete because the best locations in Roman times weren’t the same as in the Middle Ages, when access to water transport was key.
The world is rapidly urbanising, but some of its growing cities seem to be misplaced. Their locations are hampered by poor access to world markets, shortages of water or vulnerability to flooding, earthquakes, volcanoes and other natural disasters. This outcome – cities stuck in the wrong places – has potentially dire economic and social consequences.
When thinking about policy responses, it is worth looking at the past to see how historical events can leave cities trapped in locations that are far from ideal. The new study does that by comparing the evolution of two initially similar urban networks following a historical calamity that wiped out one, while leaving the other largely intact.
The setting for the analysis of urban persistence is north-western Europe, where the authors trace the effects of the collapse of the Western Roman Empire more than 1,500 years ago through to the present day. Around the dawn of the first millennium, Rome conquered, and subsequently urbanised, areas including those that make up present day France and Britain (as far north as Hadrian’s Wall). Under the Romans, towns in the two places developed similarly in terms of their institutions, organisation and size.
But around the middle of the fourth century, their fates diverged. Roman Britain suffered invasions, usurpations and reprisals against its elite. Around 410 CE, when Rome itself was first sacked, Roman Britain’s last remaining legions, which had maintained order and security, departed permanently. Consequently, Roman Britain’s political, social and economic order collapsed. Between 450 CE and 600 CE, its towns no longer functioned.
Although some Roman towns in France also suffered when the Western Roman Empire fell, many of them survived and were taken over by Franks. So while the urban network in Britain effectively ended with the fall of the Western Roman Empire, there was much more urban continuity in France.
The divergent paths of these two urban networks make it possible to study the spatial consequences of the ‘resetting’ of an urban network, as towns across Western Europe re-emerged and grew during the Middle Ages. During the High Middle Ages, both Britain and France were again ruled by a common elite (Norman rather than Roman) and had access to similar production technologies. Both features make it possible to compare the effects of the collapse of the Roman Empire on the evolution of town locations.
Following the asymmetric calamity and subsequent re-emergence of towns in Britain and France, one of three scenarios can be imagined:
First, if locational fundamentals, such as coastlines, mountains and rivers, consistently favour a fixed set of places, then those locations would be home to both surviving and re-emerging towns. In this case, there would be high persistence of locations from the Roman era onwards in both British and French urban networks.
Second, if locational fundamentals or their value change over time (for example, if coastal access becomes more important) and if these fundamentals affect productivity more than the concentration of human activity, then both urban networks would similarly shift towards locations with improved fundamentals. In this case, there would be less persistence of locations in both British and French urban networks relative to the Roman era.
Third, if locational fundamentals or their value change, but these fundamentals affect productivity less than the concentration of human activity, then there would be ‘path-dependence’ in the location of towns. The British urban network, which was reset, would shift away from Roman-era locations towards places that are more suited to the changing economic conditions. But French towns would tend to remain in their original Roman locations.
The authors’ empirical investigation finds support for the third scenario, where town locations are path-dependent. Medieval towns in France were much more likely to be located near Roman towns than their British counterparts.
These differences in urban persistence are still visible today; for example, only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. This finding suggests that the urban network in Britain shifted towards newly advantageous locations between the Roman and medieval eras, while towns in France remained in locations that may have become obsolete.
But did it really matter for future economic development that medieval French towns remained in Roman-era locations? To shed light on this question, the researchers focus on a particular dimension of each town’s location: its accessibility to transport networks.
During Roman times, roads connected major towns, facilitating movements of the occupying army. But during the Middle Ages, technical improvements in water transport made coastal access more important. This technological change meant that having coastal access mattered more for medieval towns in Britain and France than for Roman ones.
The study finds that during the Middle Ages, towns in Britain were roughly two and a half times more likely to have coastal access – either directly or via a navigable river – than during the Roman era. In contrast, in France, there was little change in the urban network’s coastal access.
The researchers also show that having coastal access did matter for towns’ subsequent population growth, which is a key indicator of their economic viability. Specifically, they find that towns with coastal access grew faster between 1200 and 1700, and for towns with poor coastal access, access to canals was associated with faster population growth. The investments in the costly building and maintenance of these canals provide further evidence of the value of access to water transport networks.
The conclusion is that many French towns were stuck in the wrong places for centuries, since their locations were designed for the demands of Roman times and not those of the Middle Ages. They could not take full advantage of the improved transport technologies because they had poor coastal access.
Taken together, these findings show that urban networks may reconfigure around locational fundamentals that become more valuable over time. But this reconfiguration is not inevitable, and towns and cities may remain trapped in bad locations over many centuries and even millennia. This spatial misallocation of economic activity over hundreds of years has almost certainly induced considerable economic costs.
‘Our findings suggest lessons for today’s policy-makers,’ the authors conclude. ‘The conclusion that cities may be misplaced still matters as the world’s population becomes ever more concentrated in urban areas. For example, parts of Africa, including some of its cities, are hampered by poor access to world markets due to their landlocked position and poor land transport infrastructure. Our research suggests that path-dependence in city locations can still have significant costs.’
‘Resetting the Urban Network: 117-2012’ by Guy Michaels and Ferdinand Rauch was published in the February 2018 issue of the Economic Journal.
By Werner Troesken (University of Pittsburgh), Nicola Tynan (Dickinson College) and Yuanxiaoyue (Artemis) Yang (Harvard T.H. Chan School of Public Health)
The United Nations Sustainable Development Goals aim to ensure access to water and sanitation for all. This means not just treating water but supplying it reliably. Lives are at stake because epidemiological research shows that a reliable, constant supply of water reduces water-borne illness.
Nineteenth century London faced the same challenge. Not until 1886 did more than half of London homes have water supplied 24 hours a day, 7 days a week. The move to a constant water supply reduced mortality. For every 5% increase in the number of households with a constant supply, deaths from water-borne illnesses fell 3%.
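Reading both figures as proportional changes, the reported relationship implies an elasticity of roughly -0.6 (a 3% fall in deaths for every 5% rise in coverage). A back-of-the-envelope sketch makes the arithmetic concrete; the coverage figures below are illustrative, not taken from the study:

```python
# Back-of-the-envelope sketch of the reported relationship: a 3% fall in
# water-borne deaths per 5% rise in constant-supply coverage, read as a
# constant elasticity of -0.6. The coverage figures below are illustrative.
def implied_deaths(base_deaths, coverage_start, coverage_end, elasticity=-0.6):
    """Apply the constant-elasticity relation multiplicatively."""
    return base_deaths * (coverage_end / coverage_start) ** elasticity

# e.g. constant supply doubling from 40% to 80% of households
print(round(implied_deaths(1000, 40, 80)))  # roughly a third fewer deaths
```

On these assumed numbers, a doubling of constant-supply coverage cuts water-borne mortality by about a third, illustrating why the shift to constant service mattered so much for public health.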
During Victoria’s reign, eight water companies supplied the metropolis with water: 50% from the river Thames, 25% from the river Lea and 25% from wells and springs. By the 1860s, the companies filtered all surface water and Bazalgette’s intercepting sewer was under construction. Still, more than 80% of people received water intermittently, storing it in cisterns often located outside the house, uncovered or beside the toilet.
Rapid population and housing growth required the expansion of the water network and companies found it easier to introduce constant service in new neighbourhoods. Retrofitting older neighbourhoods proved challenging and risked a substantial waste of scarce water. The Metropolis Water Act of 1871 finally gave water companies the power to require waste-limiting fixtures. After 1871, new housing estates received a constant supply of water immediately, while old neighbourhoods transitioned slowly.
As constant water supply reached more people, mortality from diarrhoea, dysentery, typhoid and cholera combined fell. With 24-hour supply, water was regularly available for everyone without risk of contamination. Unsurprisingly, poorer, crowded districts had higher mortality from water-borne diseases.
Even though treated piped water was available to all by the mid-nineteenth century, everyone benefitted from the move to constant service. By the time the Metropolitan Water Board acquired London’s water infrastructure, 95% of houses in the city received their water directly from the mains.
According to Sergio Campus, water and sanitation head at the Inter-American Development Bank, the current challenge in many places is providing a sustainable and constant supply of water. In line with this, the World Bank’s new Water Supply, Sanitation, and Hygiene (WASH) poverty diagnostic has added frequency of delivery as a measure of water quality, in addition to access, water source and treatment.
Regularity of supply varies substantially across locations. London’s experience during the late Victorian years suggests that increased frequency of water supply has the potential to deliver further reductions in mortality in developing countries, beyond the initial gains from improved water sources and treatment.