This blog aims to encourage discussion of economic and social history, broadly defined. We live in a time of major social and economic change, and research in social science is showing more and more that a historical and long-term approach to current issues is the key to understanding our times.
Only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. That is one of the findings of research by Guy Michaels (London School of Economics) and Ferdinand Rauch (University of Oxford), which uses the contrasting experiences of British and French cities after the fall of the Roman Empire as a natural experiment to explore the impact of history on economic geography – and what leads cities to get stuck in undesirable locations, a big issue for modern urban planners.
The study, published in the February 2018 issue of the Economic Journal, notes that in France, post-Roman urban life became a shadow of its former self, but in Britain it completely disappeared. As a result, medieval towns in France were much more likely to be located near Roman towns than their British counterparts. But many of these locations had become obsolete: the best sites in Roman times were not the best in the Middle Ages, when access to water transport was key.
The world is rapidly urbanising, but some of its growing cities seem to be misplaced. Their locations are hampered by poor access to world markets, shortages of water or vulnerability to flooding, earthquakes, volcanoes and other natural disasters. This outcome – cities stuck in the wrong places – has potentially dire economic and social consequences.
When thinking about policy responses, it is worth looking at the past to see how historical events can leave cities trapped in locations that are far from ideal. The new study does that by comparing the evolution of two initially similar urban networks following a historical calamity that wiped out one, while leaving the other largely intact.
The setting for the analysis of urban persistence is north-western Europe, where the authors trace the effects of the collapse of the Western Roman Empire more than 1,500 years ago through to the present day. Around the dawn of the first millennium, Rome conquered, and subsequently urbanised, areas including those that make up present day France and Britain (as far north as Hadrian’s Wall). Under the Romans, towns in the two places developed similarly in terms of their institutions, organisation and size.
But around the middle of the fourth century, their fates diverged. Roman Britain suffered invasions, usurpations and reprisals against its elite. Around 410 CE, when Rome itself was first sacked, Roman Britain’s last remaining legions, which had maintained order and security, departed permanently. Consequently, Roman Britain’s political, social and economic order collapsed. Between 450 CE and 600 CE, its towns no longer functioned.
Although some Roman towns in France also suffered when the Western Roman Empire fell, many of them survived and were taken over by Franks. So while the urban network in Britain effectively ended with the fall of the Western Roman Empire, there was much more urban continuity in France.
The divergent paths of these two urban networks make it possible to study the spatial consequences of the ‘resetting’ of an urban network, as towns across Western Europe re-emerged and grew during the Middle Ages. During the High Middle Ages, both Britain and France were again ruled by a common elite (Norman rather than Roman) and had access to similar production technologies. Both features make it possible to compare the effects of the collapse of the Roman Empire on the evolution of town locations.
Following the asymmetric calamity and subsequent re-emergence of towns in Britain and France, one of three scenarios can be imagined:
First, if locational fundamentals, such as coastlines, mountains and rivers, consistently favour a fixed set of places, then those locations would be home to both surviving and re-emerging towns. In this case, there would be high persistence of locations from the Roman era onwards in both British and French urban networks.
Second, if locational fundamentals or their value change over time (for example, if coastal access becomes more important) and if these fundamentals affect productivity more than the concentration of human activity, then both urban networks would similarly shift towards locations with improved fundamentals. In this case, there would be less persistence of locations in both British and French urban networks relative to the Roman era.
Third, if locational fundamentals or their value change, but these fundamentals affect productivity less than the concentration of human activity, then there would be ‘path-dependence’ in the location of towns. The British urban network, which was reset, would shift away from Roman-era locations towards places that are more suited to the changing economic conditions. But French towns would tend to remain in their original Roman locations.
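The three scenarios turn on a simple comparison: does a site's intrinsic productivity or the pull of an existing population dominate a settler's location choice? The stylised two-site sketch below uses made-up numbers (none are estimates from the study) to show how a large agglomeration weight produces the path-dependence of the third scenario.

```python
# Path-dependence in town location: a stylised two-site comparison with
# illustrative numbers, not parameters from Michaels and Rauch's study.
# A new settler picks the site with the higher payoff, where payoff is
# fundamentals plus an agglomeration term k * existing_population.

def chosen_site(fund_old: float, fund_new: float,
                pop_old: float, pop_new: float, k: float) -> str:
    """Return which site a new settler chooses under the toy payoff rule."""
    payoff_old = fund_old + k * pop_old
    payoff_new = fund_new + k * pop_new
    return "old" if payoff_old >= payoff_new else "new"

# Medieval conditions: the coastal site now has better fundamentals (3 > 1).
# 'France': the Roman town survived at the old site (existing population 10),
# so agglomeration outweighs the fundamentals gap and the town stays put.
print(chosen_site(1, 3, 10, 0, k=0.5))   # old site wins: path-dependence
# 'Britain': the network was reset, no incumbent population anywhere,
# so settlement shifts to the site with the better fundamentals.
print(chosen_site(1, 3, 0, 0, k=0.5))    # new site wins: relocation
```

The same comparison delivers the first two scenarios as limiting cases: with k near zero, both networks track fundamentals; with unchanging fundamentals, both networks stay put.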
The authors’ empirical investigation finds support for the third scenario, where town locations are path-dependent. Medieval towns in France were much more likely to be located near Roman towns than their British counterparts.
These differences in urban persistence are still visible today; for example, only three of the 20 largest cities in Britain are located near the site of Roman towns, compared with 16 in France. This finding suggests that the urban network in Britain shifted towards newly advantageous locations between the Roman and medieval eras, while towns in France remained in locations that may have become obsolete.
But did it really matter for future economic development that medieval French towns remained in Roman-era locations? To shed light on this question, the researchers focus on a particular dimension of each town’s location: its accessibility to transport networks.
During Roman times, roads connected major towns, facilitating movements of the occupying army. But during the Middle Ages, technical improvements in water transport made coastal access more important. This technological change meant that having coastal access mattered more for medieval towns in Britain and France than for Roman ones.
The study finds that during the Middle Ages, towns in Britain were roughly two and a half times more likely to have coastal access – either directly or via a navigable river – than during the Roman era. In contrast, in France, there was little change in the urban network’s coastal access.
The researchers also show that having coastal access did matter for towns’ subsequent population growth, which is a key indicator of their economic viability. Specifically, they find that towns with coastal access grew faster between 1200 and 1700, and for towns with poor coastal access, access to canals was associated with faster population growth. The investments in the costly building and maintenance of these canals provide further evidence of the value of access to water transport networks.
The conclusion is that many French towns were stuck in the wrong places for centuries, since their locations were designed for the demands of Roman times and not those of the Middle Ages. They could not take full advantage of the improved transport technologies because they had poor coastal access.
Taken together, these findings show that urban networks may reconfigure around locational fundamentals that become more valuable over time. But this reconfiguration is not inevitable, and towns and cities may remain trapped in bad locations over many centuries and even millennia. This spatial misallocation of economic activity over hundreds of years has almost certainly induced considerable economic costs.
‘Our findings suggest lessons for today’s policy-makers,’ the authors conclude. ‘The conclusion that cities may be misplaced still matters as the world’s population becomes ever more concentrated in urban areas. For example, parts of Africa, including some of its cities, are hampered by poor access to world markets due to their landlocked position and poor land transport infrastructure. Our research suggests that path-dependence in city locations can still have significant costs.’
‘Resetting the Urban Network: 117–2012’ by Guy Michaels and Ferdinand Rauch was published in the February 2018 issue of the Economic Journal.
Economists Peter Leeson (George Mason University) and Jacob Russ (Bloom Intelligence) have uncovered new evidence to resolve the longstanding puzzle posed by the ‘witch craze’ that ravaged Europe in the sixteenth and seventeenth centuries and resulted in the trial and execution of tens of thousands for the dubious crime of witchcraft.
In research forthcoming in the Economic Journal, Leeson and Russ argue that the witch craze resulted from competition between Catholicism and Protestantism in post-Reformation Christendom. For the first time in history, the Reformation presented large numbers of Christians with a religious choice: stick with the old Church or switch to the new one. And when churchgoers have religious choice, churches must compete.
In an effort to woo the faithful, competing confessions advertised their superior ability to protect citizens against worldly manifestations of Satan’s evil by prosecuting suspected witches. Similar to how Republicans and Democrats focus campaign activity in political battlegrounds during US elections to attract the loyalty of undecided voters, Catholic and Protestant officials focused witch trial activity in religious battlegrounds during the Reformation and Counter-Reformation to attract the loyalty of undecided Christians.
Analysing new data on more than 40,000 suspected witches whose trials span Europe over more than half a millennium, Leeson and Russ find that when and where confessional competition, as measured by confessional warfare, was more intense, witch trial activity was more intense too. Furthermore, factors such as bad weather, formerly thought to be key drivers of the witch craze, were not in fact important.
The new data reveal that the witch craze took off only after the Protestant Reformation in 1517, following the new faith’s rapid spread. The craze reached its zenith between around 1555 and 1650, years co-extensive with peak competition for Christian consumers, evidenced by the Catholic Counter-Reformation, during which Catholic officials aggressively pushed back against Protestant successes in converting Christians throughout much of Europe.
Then, around 1650, the witch craze began its precipitous decline, with prosecutions for witchcraft virtually vanishing by 1700.
What happened in the middle of the seventeenth century to bring the witch craze to a halt? The Peace of Westphalia, a treaty concluded in 1648, ended decades of European religious warfare and much of the confessional competition that motivated it by creating permanent territorial monopolies for Catholics and Protestants – regions of exclusive control, in which one confession was protected from the competition of the other.
The new analysis suggests that the witch craze should also have been focused geographically: concentrated where Catholic-Protestant rivalry was strongest, and rare where it was weakest. And indeed it was: Germany alone, which was ground zero for the Reformation, laid claim to nearly 40% of all witchcraft prosecutions in Europe.
In contrast, Spain, Italy, Portugal and Ireland – each of which remained a Catholic stronghold after the Reformation and never saw serious competition from Protestantism – collectively accounted for just 6% of Europeans tried for witchcraft.
Religion, it is often said, works in unexpected ways. The new study suggests that the same can be said of competition between religions.
Some media commentators have identified the decimalisation of the UK’s currency in 1971 as the start of a submerging of British identity. For example, writing in the Daily Mail, Dominic Sandbrook characterises it as ‘marking the end of a proud history of defiant insularity and the beginning of the creeping Europeanisation of Britain’s institutions.’
This research, based on Cabinet papers, Bank of England archives, Parliamentary records and other sources, reveals that this interpretation is spurious and reflects more modern preoccupations with the arguments that dominated much of the Brexit debate, rather than the actual motivation of key players at the time.
The research examines arguments made by the proponents of alternative systems based on either decimalising the pound, or creating a new unit worth the equivalent of 10 shillings. South Africa, Australia and New Zealand had all recently adopted a 10-shilling unit, and this system was favoured by a wide range of interest groups in the UK, representing consumers, retailers, small and large businesses, and media commentators.
Virtually a lone voice in lobbying for retention of the pound was the City of London, and its arguments, articulated by the Bank of England, were based on a traditional attachment to the international status of sterling. These arguments were accepted, both by the Committee of Enquiry on Decimal Currency, which reported in 1963, and, in 1966, by a Labour government headed by Harold Wilson, who shared the City’s emotional attachment to the pound.
Yet by 1960, the UK had faced the imminent prospect of being virtually the only country retaining non-decimal coinage. Most key economic players agreed that decimalisation was necessary and the only significant bone of contention was the choice of system.
Most informed opinion favoured a new major unit equivalent to 10 shillings, as reflected in evidence given by retailers and other businesses to the Committee of Enquiry on Decimal Currency, and the formation of a Decimal Action Committee by the Consumers Association to press for such a system.
The City, represented by the Bank of England, was implacably opposed to such a system, arguing that the pound’s international prestige was crucial to underpinning the position of the City as a leading financial centre. This assertion was not evidence-based, and internal Bank documents acknowledge that their argument was ‘to some extent based on sentiment’.
This sentiment was shared by Harold Wilson, whose government announced the decision to introduce decimal currency based on the pound in 1966. Five years earlier, he had made an emotional plea to keep the pound, arguing that ‘the world will lose something if the pound disappears from the markets of the world’.
Far from being the end of ‘defiant insularity’, the decision to retain the pound – a basic currency unit of higher value than that of any other major economy – rather than adopting one closer in value either to the US dollar or the even lower-value European currencies, reflected the desire of the City and the government to maintain a distinctive symbol of Britishness, overcoming opposition from interests with more practical concerns.
‘Almost idyllic’ – this was the view of one British commentator on the state of post-war industrial relations in West Germany. No one could say the same about British industrial relations. Here, industrial conflict grew inexorably from year to year, forcing governments to expend ever more effort on preserving industrial peace.
Deeply frustrated, successive governments alternated between appeasing trade unionists and threatening them with new legal sanctions in an effort to improve their behaviour, and thereby avoided tackling the fundamental issue of the unions’ institutional structure. If the British had only studied the German ‘model’ of industrial relations more closely, they would have understood better the reforms that needed to be made.
Britain’s poor state of industrial relations was a major, if not the major, factor holding back Britain’s economic growth, which was regularly less than half the rate in Germany, not to speak of the chronic inflation and balance of payments problems that only made matters worse. So, how come the British did not take a deeper look at the successful model of German industrial relations and learn any lessons?
Ironically, the British were in control of Germany at the time the trade union movement was re-establishing itself after the war. The Trades Union Congress and the British labour movement offered much goodwill and help to the Germans in their task.
But German trade unionists had very different ideas from the British trade unions on how to go about organising their industrial relations, ideas that the British were to ignore consistently throughout the post-war period. These included:
In Britain, there were hundreds of trade unions, but in Germany, there were only 16 re-established after the war, each representing one or more industries, thereby avoiding the demarcation disputes so common in Britain.
Terms and conditions were negotiated on this industry basis by strong, well-funded trade unions, which welcomed the fact that their two- or three-year collective agreements were legally enforceable in Germany’s system of industrial courts.
Trade unions were not involved in workplace grievances and disputes. These were left to employees and managers meeting together in Germany’s highly successful works councils, which resolved such issues informally and engaged in consultative exercises on working practices and company reorganisations. As a result, German companies did not seek to lay off staff, as British companies did on any fall in demand, but rather to retrain and reallocate them.
British trade unions pleaded that their very untidy institutional structure with hundreds of competing trade unions was what their members actually wanted and should therefore be outside any government interference. The trade unions jealously guarded their privileges and especially rejected any idea of industry-based unions, legally enforceable collective agreements and works councils.
A heavyweight Royal Commission was appointed, but after three years’ deliberation, it came up with little more than the status quo. It was reluctant to study any ideas emanating from Germany.
While the success of industrial relations in Germany was widely recognised in Britain, there was little understanding of why this was so, or indeed much interest in it. The British were deeply conservative about the ‘institutional shape’ of industrial relations and had a fear of putting forward any radical German ideas. Britain was therefore at a big disadvantage as far as creating modern trade unions operating in a modern state was concerned.
So, what was the economic price of the failure to sort out the institutional structure of the British trade unions?
The history of transatlantic slavery is one of the most active and fruitful fields of international historical research. An important lesson of the latest work on maritime countries like Britain and France is that the profits of slavery – and indeed of abolition – ‘trickled down’ to very wide sections of the population and to places well away from the principal slave-trading ports. Recently, historians have started to look beyond the familiar Atlantic axis and to apply the same paradigm to the European hinterlands of the triangular trade. That is, they have sought its traces and impacts in territories that were not directly involved (or were relatively minor participants) in the traffic in Africans: the German-speaking countries, Scandinavia, Italy and Central Europe. And they are finding that the slave trade, the plantation economies that it fed, the consequences of its abolition, and not least the questions of moral and political principle that it threw up, were very much a part of the texture of society right across Europe.
In material terms, it is clear that the manufacture of trade goods – the wares with which Europeans paid African traders for the enslaved men, women and children whom they then shipped to the Americas – was an important element of many regional economies. Firearms, iron bars and ironware travelled from Denmark and the Baltic to Western Europe’s slaving ports. Glass beads were exported from Bohemia (the Czech lands), and the higher quality Venetian products attracted Liverpool merchants to set up branch offices in Italy to secure their supply. The Swiss family firm Burckhardt/Bourcard began by supplying cotton cloth for the slave trade and importing slave-produced luxury goods and moved into equipping its own slaving ships. Textile plants in the Wupper Valley in Western Germany and the hand looms of Eastern Prussia provided linens of varying quality for use on the slave plantations, though because they were shipped through English and Dutch ports their German origins have often been obscured. And the trading networks established in the context of the slave economy supported German exporting projects even after the trade was abolished, as German firms continued to trade into territories – Brazil and the Caribbean – where slavery persisted until the late 19th century.
Germans in particular were keen observers of the Atlantic slave economy, and they had their own perspective on international debates about the trade and its abolition. At the beginning of the trade, the rulers of Brandenburg Prussia had some hopes of buying into it, establishing a slave fort on the Gold Coast between 1682 and 1720. One of the key documents of this episode is the diary of a ship’s barber, Johann Peter Oettinger, who sailed on slaving expeditions. He chose to make no comment about the brutalities that he witnessed and recorded. Characteristically, though, when the diaries were published for German readers 200 years later, they were given a moralising spin; by the 1880s, Germany was at the forefront of the Scramble for Africa, justifying colonisation in the name of suppressing the internal slave trade. Before that, and once the German states were no longer involved in the slave trade, German-speaking scientists and administrators placed themselves in the service of those states that were: Ernst Schimmelmann, whose family had one foot in Hamburg and one in Copenhagen, was a plantation owner and manager of the Swedish state slaving company, but also responsible for the abolition of the Danish slave trade in 1792. And initiatives for the post-abolition exploitation of tropical territories relied on the work of German scientists in service to the Danish state like the botanist Julius von Rohr.
Scholarly attention to the German case is also bringing the Atlantic plantation economies into dialogue with the practices of unfree labour that existed in Central Europe at the same time. Analysis of the conditions of linen production on eastern Prussia’s aristocratic estates indicates that their low production costs helped to keep down the costs of production on slave plantations. And when Germans confronted the moral and legal challenges to slavery that were crystallising into a political movement in Britain and France by the 1790s, they could not escape the implications of abolitionist arguments for the future of their own ‘peculiar institutions’ of serfdom and personal service. This was true of Therese Huber, the author and journalist who stands for two generations of Germans who engaged in transnational abolitionist networks, and who was equally sharp in her critique of serfdom. And it was true of Prussian administrators who, when challenged by enslaved Africans on German soil to enforce the notion that ‘there are no slaves in Prussia’, could not help asking themselves what that might mean for the process towards reform of feudal institutions.
These issues have only begun to receive greater attention – more studies are needed to gain a clearer understanding of the various links through which continental Europe was connected to the Transatlantic slave business and its abolition.
Rising trends in GDP per capita are often interpreted as reflecting rising levels of general wellbeing. But GDP per capita is at best a crude proxy for wellbeing, neglecting important qualitative dimensions.
To elaborate further on the topic, Prof. Leandro Prados de la Escosura has made available several databases on inequality, accessible here, as well as a book on long-term Spanish economic growth, available open access here.
In Ottoman Istanbul, privileged groups such as men, Muslims and other elites paid more for credit than the under-privileged – the exact opposite of what happens in a modern economy.
New research by Professors Timur Kuran (Duke University) and Jared Rubin (Chapman University), published in the March 2018 issue of the Economic Journal, explains why: a key influence on the cost of borrowing is the rule of law and in particular the extent to which courts will enforce a credit contract.
In pre-modern Turkey, it was the wealthy who could benefit from judicial bias to evade their creditors – and who, because of this default risk, faced higher interest rates on loans. Nowadays, it is under-privileged people who face higher borrowing costs because there are various institutions through which they can escape loan repayment, including bankruptcy options and organisations that will defend poor defaulters as victims of exploitation.
In the modern world, we take it for granted that the under-privileged incur higher borrowing costs than the upper socio-economic classes. Indeed, Americans in the bottom quartile of the US income distribution usually borrow through pawnshops and payday lenders at rates of around 450% per annum, while those in the top quartile take out short-term loans through credit cards at 13-16%. Unlike the under-privileged, the wealthy also have access to long-term credit through home equity loans at rates of around 4%.
The logic connecting socio-economic status to borrowing costs will seem obvious to anyone familiar with basic economics: the higher costs of the poor reflect higher default risk, for which the lender must be compensated.
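That compensation logic can be made precise. A minimal sketch, with illustrative numbers rather than figures from the study: if a borrower defaults with probability p and the lender then recovers nothing, the lender matches a safe return rf only by charging a rate r that solves (1 - p)(1 + r) = 1 + rf.

```python
# Break-even loan rate under default risk: a toy calculation with
# hypothetical numbers, not estimates from Kuran and Rubin's study.
# With default probability p and zero recovery on default, the lender
# earns the safe return rf on average only if (1 - p)(1 + r) = 1 + rf.

def breakeven_rate(rf: float, p: float) -> float:
    """Interest rate at which the lender just earns the safe return rf."""
    return (1 + rf) / (1 - p) - 1

safe = 0.04  # hypothetical safe return
print(f"p = 1%:  r = {breakeven_rate(safe, 0.01):.1%}")   # ~5.1%
print(f"p = 20%: r = {breakeven_rate(safe, 0.20):.1%}")   # 30.0%
```

The steep rise in r as p grows is the whole of the modern story: riskier borrowers pay more because the lender must be made whole in expectation.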
The new study sets out to test whether the classic negative correlation between socio-economic status and borrowing cost holds in a pre-modern setting outside the industrialised West. To this end, the authors built a data set of private loans issued in Ottoman Istanbul during the period from 1602 to 1799.
These data reveal the exact opposite of what happens in a modern economy: the privileged paid more for credit than the under-privileged. In a society where the average real interest rate was around 19%, men paid an interest surcharge of around 3.4 percentage points; Muslims paid a surcharge of 1.9 percentage points; and elites paid a surcharge of about 2.3 percentage points (see Figure 1).
What might explain this reversal of relative borrowing costs? Why did socially advantaged groups pay more for credit, not less?
The data led the authors to consider a second factor contributing to the price of credit, often taken for granted: the partiality of the law. Implicit in the logic that explains relative credit costs in modern lending markets is that financial contracts are enforceable impartially when the borrower is able to pay. Thus, the rich pay less for credit because they are relatively unlikely to default and because, if they do, lenders can force repayment through courts whose verdicts are more or less impartial.
But in settings where the courts are biased in favour of the wealthy, creditors will expect compensation for the risk of being unable to obtain restitution. The wealth and judicial partiality effects thus work against each other. The former lowers the credit cost for the rich; the latter raises it.
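The tug-of-war between the two effects can be illustrated with a toy extension of the break-even calculation (all parameter values below are made up for illustration, not taken from the study): suppose a borrower fails to repay with probability d, and the court then forces repayment with probability q. Biased courts mean a low q for creditors of the privileged.

```python
# Two opposing effects on the cost of credit: a toy model with
# illustrative parameters, not estimates from Kuran and Rubin's study.
# A borrower fails to repay with probability d; when that happens the
# court forces full repayment with probability q. Break-even requires
#   (1 + r) * ((1 - d) + d * q) = 1 + rf.

def breakeven_rate(rf: float, d: float, q: float) -> float:
    """Loan rate at which the lender just earns the safe return rf."""
    return (1 + rf) / ((1 - d) + d * q) - 1

rf = 0.04
# Modern-style market: impartial courts (same q), the poor default more,
# so the poor pay more -- the familiar negative wealth/credit-cost link.
print(f"rich:     {breakeven_rate(rf, d=0.02, q=0.9):.1%}")
print(f"poor:     {breakeven_rate(rf, d=0.25, q=0.9):.1%}")
# Ottoman-style market: the privileged default less often, but biased
# courts rarely enforce against them (low q), reversing the ranking.
print(f"elite:    {breakeven_rate(rf, d=0.05, q=0.2):.1%}")
print(f"commoner: {breakeven_rate(rf, d=0.10, q=0.9):.1%}")
```

With these numbers the partiality effect dominates the wealth effect, and the elite borrower pays the higher rate, matching the pattern in the Istanbul data.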
Islamic Ottoman courts served all Ottoman subjects through procedures that were manifestly biased in favour of clearly defined groups. These courts gave Muslims rights that they denied to Christians and Jews. They privileged men over women.
Moreover, because the courts lacked independence from the state, Ottoman subjects connected to the sultan enjoyed favourable treatment. The theory developed in the new study explains why weak legal power may translate into strong financial power: lenders could count on the courts to enforce contracts against the legally disadvantaged, who therefore borrowed more cheaply, while the privileged, harder to hold to their obligations, paid a premium.
More generally, this research suggests that in a free financial market, any hindrance to the enforcement of a credit contract will raise the borrower’s credit cost. Just as judicial biases in favour of the wealthy raise their interest rates on loans, institutions that allow the poor to escape loan repayment – bankruptcy options, shielding of assets from creditors, organisations that defend poor defaulters as victims of exploitation – raise interest rates charged to the poor.
Today, wealth and credit cost are negatively correlated for multiple reasons. The rich benefit both from a higher capacity to post collateral and from better enforcement of their credit obligations relative to those of the poor.
To contact the authors:
Timur Kuran (firstname.lastname@example.org); Jared Rubin (email@example.com)
Why is it so common for Muslims to marry their cousins (more than 30% of all marriages in the Middle East)? Why, despite explicit injunctions in the Quran to include women in inheritance, do women in the Middle East generally face unequal gender relations, and their labour force participation remain the lowest in the world (less than 20%)?
This study presents a theory, supported by empirical evidence, concerning the historical origins of such marriage and gender norms. It argues that in patrilineal societies that nevertheless mandate female inheritance, cousin marriage becomes a way to preserve property in the male line and prevent fragmentation of land.
In these societies, female inheritance also leads to the seclusion and veiling of women as well as restrictions on their sexual freedom in order to encourage cousin marriages and avoid out-of-wedlock children as potential heirs. The incompatibility of such restrictions with female participation in agriculture has further influenced the historical gender division of labour.
Analyses of data on pre-industrial societies, Italian provinces, and women in Indonesia show that, consistent with these hypotheses, female inheritance is associated with lower female labour participation, greater stress on female virginity before marriage and higher rates of endogamy, consanguinity and arranged marriages.
The study also uses the recent reform of inheritance regulations in India – which greatly enhanced Indian women’s right to inherit property – to provide further evidence of the causal impact of female inheritance. The analysis shows that among women affected by the reform, the rate of cousin marriage is significantly higher, and that of premarital sex significantly lower.
The implications of these findings are important. It is believed that cousin marriage helps create and maintain kinship groups such as tribes and clans, which impair the development of an individualistic social psychology, undermine social trust, large-scale cooperation and democratic institutions, and encourage corruption and conflict.
This study contributes to this literature by highlighting a historical origin of clannish social organisation. It also sheds light on the origins of gender inequality as both a human rights issue and a development issue.
The First World War brought about an upheaval in British investment, forcing savers to repatriate billions of pounds held abroad and attracting new investors among those living far from London, this research finds. The study also points to declining inequality between Britain’s wealthiest classes and the middle class, and rising purchasing power among the lower middle classes.
The research is based on samples from ledgers of investors in successive War Loans. These are lodged in archives at the Bank of England and have been closed for a century. The research covers roughly 6,000 samples from three separate sets of ledgers of investors between 1914 and 1932.
While the First World War is recalled as a period of national sacrifice and suffering, the reality is that war boosted Britain’s output. Sampling from the ledgers points to the extent to which war unleashed the industrial and engineering innovations of British industry, creating and spreading wealth.
Britain needed capital to ensure it could outlast its enemies. As the world's leading capital exporter in 1914, the nation imposed increasingly tight measures on investors to ensure capital was used exclusively for war.
While London was home to just over half the capital raised in the first War Loan in 1914, that had fallen to just under 10% of capital raised in the years after. In contrast, the North East, North West and Scotland – home to the mining, engineering and shipbuilding industries – provided 60% of the capital by 1932, up from a quarter of the total raised by the first War Loan.
The concentration of investor occupations also points to profound social changes fostered by war. Men describing themselves as ‘gentleman’ or ‘esquire’ – titles accorded to those wealthy enough to live on investment returns – accounted for 55% of retail investors in the first issue of War Loan. By the post-war years, they were just 37% of male investors.
In contrast, skilled labourers – blacksmiths, coal miners and railway signalmen among others – were 9.0% of male retail investors in the post-war years, up from 4.9% in the first sample.
Suppliers of war-related goods may not have been the main beneficiaries of newly-created wealth. The sample includes large investments by those supplying consumer goods sought by households made better off by higher wages, steady work and falling unemployment during the war.
During and after the war, these sectors were accused of ‘profiteering’, sparking national indignation. Nearly a quarter of investors in 5% War Loan who listed their occupation as ‘manufacturer’ were producing boots and leather goods, a sector singled out during the war for excess profits. Manufacturers in the final sample produced mineral water, worsteds, jam and bread.
My findings show that War Loan was widely held by households likely to have had relatively modest wealth; while the largest concentration of capital remained in the hands of relatively few, larger numbers had a small stake in the fate of the War Loans.
In the post-war years, over half of male retail investors held £500 or less. This may help to explain why efforts to pay for war by taxing wealth as well as income – a debate that echoes today – proved so politically challenging. The rentier class on whom additional taxation would have been levied may have been more of a political construct by 1932 than an actual presence.
The frontier of medieval warfare between Christian and Muslim armies in southern Spain provides a surprisingly powerful explanation of current low-density settlement patterns in those regions. This is the central finding of research by Daniel Oto-Peralías (University of St Andrews), presented at the Royal Economic Society’s annual conference in March 2018.
His study notes that southern Spain is one of the most deserted areas in Europe in terms of population density, surpassed only by parts of Iceland and northern Scandinavia. It turns out that this outcome has roots going back to medieval times, when Spain’s southern plateau was a battlefield between Christian and Muslim armies.
The study documents that Spain stands out in Europe with an anomalous settlement pattern characterised by a very low density in its southern half. Among the ten European regions with the lowest settlement density, six are from southern Spain (while the other four are from Iceland, Norway, Sweden and Finland).
On average, only 29.8% of 10km² grid cells are inhabited in southern Spain, a much lower percentage than in the rest of Europe (where the average is 74.4%). Extreme geographical and climatic conditions do not seem to be the reason for this low settlement density, which the author refers to as the ‘Spanish anomaly’.
After ruling out geography as the main explanatory factor for the ‘Spanish anomaly’, the research investigates its historical roots by focusing on the Middle Ages, when the territory was retaken by the Christian kingdoms from Muslim rule.
The hypothesis is that the region’s character as a militarily insecure frontier conditioned the colonisation of the territory, which is tested by taking advantage of the geographical discontinuity in military insecurity created by the Tagus River in central Spain. Historical ‘accidents’ made the colonisation of the area south of the Tagus River very different from colonisation north of it.
The invasions of North Africa’s Almoravid and Almohad empires converted the territory south of the Tagus into a battlefield for a century and a half, this river being a natural defensive border. Continuous warfare and insecurity heavily conditioned the nature of the colonisation process in this frontier region, which was characterised by the leading role of the military orders as agents of colonisation, scarcity of population and a livestock-oriented economy. It resulted in the prominence of castles and the absence of villages, and consequently, a spatial distribution of the population characterised by a very low density of settlements.
The empirical analysis reveals a large difference in settlement density across the River Tagus, whereas there are no differences in geographical and climatic variables across it. In addition, it is shown that the discontinuity in settlement density already existed in the 16th and 18th centuries, and is therefore not the result of recent migration movements and urban developments. Preliminary evidence also indicates that the territory exposed to the medieval ranching frontier is relatively poorer today.
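The discontinuity logic here can be sketched in a few lines. The toy example below is not the author's code and uses invented grid cells: it compares the share of inhabited cells in a narrow band on each side of the river, which is the basic comparison behind a geographical discontinuity design (the real analysis also checks that geographic and climatic covariates are balanced across the border).

```python
# Hedged sketch of a spatial-discontinuity comparison (not the study's
# actual analysis). Each cell is (distance in km from the Tagus,
# inhabited flag); negative distance = north bank. Data are invented.

cells = [(-40, 1), (-25, 1), (-10, 1), (-5, 1),   # north of the river
         (5, 0), (10, 1), (25, 0), (40, 0)]       # south of the river

def share_inhabited(cells, side, bandwidth=50):
    """Share of inhabited cells on one side of the river, restricted to
    cells within `bandwidth` km of it."""
    band = [inh for d, inh in cells
            if abs(d) <= bandwidth and (d > 0) == (side == "south")]
    return sum(band) / len(band)

gap = share_inhabited(cells, "north") - share_inhabited(cells, "south")
print(gap)  # prints 0.75: the north band is fully settled, the south sparse
```

Because geography varies smoothly across the river while the medieval frontier did not, a gap of this kind near the border is attributed to the historical treatment rather than to terrain or climate.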
Thus, the study shows that historical frontiers can decisively shape the economic geography of countries. Using medieval Spain as a case study, it illustrates how exposure to warfare and insecurity – typical of medieval frontiers – created incentives for a militarised colonisation based on a few fortified settlements and a livestock-oriented economy, conditioning the occupation of the territory to such an extent as to turn it into one of the most deserted areas in Europe. Given the ubiquity of frontiers in history, the mechanisms highlighted in the analysis are of general interest and may operate in other contexts.