Police as ploughmen in the First World War

by Mary Fraser (Associate, The Scottish Centre for Crime & Justice Research, University of Glasgow)

This blog is part of our EHS 2020 Annual Conference Blog Series.

 

Police group portrait Bury St Edmunds Suffolk. Available at Wikimedia Commons.

That policemen across Britain were released to plough the fields during the food shortages of 1917 is currently unrecognised, although soldiers, prisoners of war, women and schoolchildren have been widely acknowledged as helping agriculture. A national project is seeking to redress this imbalance in our understanding.

In March 1917, Britain faced starvation. The ships that brought around 80% of the population's requirements of grain, mainly from America and Canada, were being sunk in huge numbers by enemy U-boats. Added to this, the harsh and lengthy winter rotted the potato crop in the ground. These factors largely removed two staple items from the diet: bread and potatoes. With food prices also soaring, the poor faced starvation.

To overcome this threat, a campaign to shift the balance of farming from pasture to arable began in December 1916 (Ernle, 1961). The government took control of farming and demanded a huge increase in home-grown grain and potatoes, so that Britain could become self-sufficient in food.

But the land had been stripped of much of its skilled labour by the attraction of joining the army or navy, leaving farmers feeling helpless to respond. Equipment also lay idle for lack of maintenance, as mechanics had likewise enlisted or left for better-paid work in the munitions factories. The need to help farmers produce home-grown food was so great that every avenue was explored.

When the severe winter broke around mid-March, not only were many hundreds of soldiers deployed to farms, but also local authorities were asked to help. One of the first groups to come forward was the police. Many had been skilled farm workers in their previous employment and so were ideal to operate the manual ploughs, which needed skill and strength to turn over heavy soil, some of which had not been ploughed for many years.

A popular police journal of the time reported on ‘Police as Ploughmen’ and named many of the 18 locations across Britain where this happened (Fraser, 2019). Estimates are that between 500 and 600 policemen were released, some for around two months.

For example, Glasgow agreed to the release of 90 policemen, while Berwick, Roxburgh and Selkirk agreed to release 40. These two areas were often held up as examples of how other police forces across Britain could help farmers: Glasgow was an urban police force, while Berwick, Roxburgh and Selkirk was a rural one.

Releasing this number was a considerable contribution by police forces, as many of their young, fit policemen had also been recruited into the army, to be only partially replaced by older, part-time Special Constables.

This help to farmers paid huge dividends. It prevented the food riots seen in other combatant nations, such as Austria-Hungary, Germany, Russia and France (Ziemann, 2014). By the harvest of 1917, the substitution of ploughmen allowed Britain to claim an increase of 1,000,000 acres of arable land, producing over 4,000,000 more tons of wheat, barley, oats and potatoes (Ernle, 1961). Britain was also able to send food to troops in France and Italy, supplementing their failed local harvests.

It is now time that policemen were recognised for this display of social conscience in helping their local populations. The example of ‘Police as Ploughmen’ shows that, as well as carrying out investigations, cautions and arrests, the police in Britain also have a remit to help local people, particularly in times of dire need, such as the food crisis of the First World War.

 

References

Ernle, Lord (RE Prothero) (1961) English Farming, Past and Present, 6th edition, Heinemann Educational Books Ltd.

Fraser, M (2019) Policing the Home Front, 1914-1918: The control of the British population at war, Routledge.

Ziemann, B (2014) in Winter, J (ed.) The Cambridge History of the First World War, Volume 2: The State, Cambridge University Press.


 

Mary Fraser

https://writingpolicehistory.blogspot.com 

@drmaryfraser

Are university endowments really long-term investors?

by David Chambers, Charikleia Kaffe & Elroy Dimson (Cambridge Judge Business School)

This blog is part of our EHS 2020 Annual Conference Blog Series.

 

 

Flags of the Ivy League fly at Columbia’s Wien Stadium. Available at Wikimedia Commons.

 

Endowments are investment funds aiming to meet the needs of their beneficiaries over multiple generations and adhering to the principle of intergenerational equity. University endowments, such as those of Harvard, Yale and Princeton in particular, have been at the forefront of developments in long-horizon investing over the last three decades.

But little is known about how these funds invested before the recent past. While scholars have previously examined the history of insurance companies and investment trusts, very little historical analysis has been undertaken of such important and innovative long-horizon investors. This is despite the tremendous influence of the so-called ‘US endowment model’ of long-horizon investing – attributed to Yale University and its chief investment officer, David Swensen – on other investors.

Our study exploits a new long-run hand-collected data set of the investments belonging to the 12 wealthiest US university endowments from the early twentieth century up to the present: Brown University, Columbia University, Cornell University, Dartmouth College, Harvard University, Princeton University, the University of Pennsylvania, Yale University, the Massachusetts Institute of Technology, the University of Chicago, Johns Hopkins University and Stanford University.

All are large private doctoral institutions whose endowments were among the wealthiest in the early decades of the twentieth century and which made sufficient disclosures about how their funds were invested. From these disclosures, we estimate annual time series of allocations across major asset classes (stocks, bonds, real estate, alternative assets, etc.), endowment market values and investment returns.
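As an illustration of the kind of calculation involved, the sketch below (in Python, with hypothetical column names and made-up figures, not the underlying data or code) shows how annual allocation shares can be derived from hand-collected holdings records:

```python
# Minimal sketch: turn hand-collected holdings records into annual
# asset-class allocation shares. All names and numbers are illustrative.
import pandas as pd

# Hypothetical holdings data: one row per endowment, year and asset class.
holdings = pd.DataFrame({
    "endowment": ["Yale", "Yale", "Yale", "Harvard", "Harvard"],
    "year": [1930, 1930, 1930, 1930, 1930],
    "asset_class": ["bonds", "stocks", "real_estate", "bonds", "stocks"],
    "market_value": [60.0, 30.0, 10.0, 150.0, 80.0],  # in $ millions
})

# Allocation share = value in each asset class / total endowment value that year.
totals = holdings.groupby(["endowment", "year"])["market_value"].transform("sum")
holdings["allocation_share"] = holdings["market_value"] / totals

# Pivot into an endowment-year panel of allocations across asset classes.
allocations = holdings.pivot_table(
    index=["endowment", "year"],
    columns="asset_class",
    values="allocation_share",
    fill_value=0.0,
)
print(allocations)
```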

Our study has two main findings. First, we document two major shifts in the allocation of the institutions’ portfolios from predominantly bonds to predominantly stocks beginning in the 1930s and then again from stocks to alternative assets beginning in the 1980s. Moreover, the Ivy League schools (notably, Harvard, Yale and Princeton) led the way in these asset allocation moves in both eras.

Second, we examine whether these funds invest in a manner consistent with their mission as long-term investors, namely, behaving countercyclically – selling when prices are high and buying when low. Prior studies show that pension funds and mutual funds behave procyclically during crises – buying when prices are high and selling when low.

In contrast, our analysis finds that the leading university endowments on average behave countercyclically across the six ‘worst’ financial crises of the last 120 years in the United States: 1906-07, 1929, 1937, 1973-74, 2000 and 2008. That is, typically, they decrease their allocation to risky assets during the pre-crisis price run-up and increase it during the post-crisis price decline.

In addition, we find that this countercyclical behaviour became more pronounced in the two most recent crises – the Dot-Com Bubble and the 2008 Global Financial Crisis.
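To make the distinction concrete, here is a purely schematic sketch (invented numbers, not the study's estimates or method) of a simple rule for labelling an endowment's rebalancing around a crisis as counter- or procyclical:

```python
# Schematic classification of rebalancing behaviour around a crisis.
def classify_rebalancing(share_before_runup, share_at_peak, share_after_decline):
    """Each argument is the allocation to risky assets (a share between 0 and 1)."""
    runup_change = share_at_peak - share_before_runup      # change while prices rose
    decline_change = share_after_decline - share_at_peak   # change while prices fell
    if runup_change < 0 and decline_change > 0:
        return "countercyclical"  # trimmed risk into the boom, added after the crash
    if runup_change > 0 and decline_change < 0:
        return "procyclical"      # chased the boom, cut risk after the crash
    return "mixed"

# Example: an endowment that cut equities from 55% to 50% before a crash,
# then raised them to 60% afterwards, would be labelled countercyclical.
print(classify_rebalancing(0.55, 0.50, 0.60))
```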

Fascistville: Mussolini’s new towns and the persistence of neo-fascism

by Mario F. Carillo (CSEF and University of Naples Federico II)

This blog is part of our EHS 2020 Annual Conference Blog Series.


 

March on Rome, 1922. Available at Wikimedia Commons.

Differences in political attitudes are prevalent in our society. People with the same occupation, age, gender, marital status, city of residence and similar background may have very different, and sometimes even opposite, political views. At a time when the electorate is called on to make important decisions with long-term consequences, understanding the origins of political attitudes, and hence of voting choices, is key.

My research documents that current differences in political attitudes have historical roots. Public expenditure allocations made almost a century ago help to explain differences in political attitudes today.

During the Italian fascist regime (1922-43), Mussolini undertook enormous investments in infrastructure by building cities from scratch. Fascistville (Littoria) and Mussolinia are two of the 147 new towns (Città di Fondazione) built by the regime on the Italian peninsula.


Towers shaped like the emblem of fascism (Torri Littorie) and majestic buildings as headquarters of the fascist party (Case del Fascio) dominated the centres of the new towns. While they were modern centres, their layout was inspired by the cities of the Roman Empire.

Intended to encourage the masses to identify with the regime through the collective historical memory of the Roman Empire, the new towns were designed to instil the idea that fascism was building on, and improving, the imperial Roman past.

My study presents three main findings. First, the foundation of the new towns enhanced local electoral support for the fascist party, facilitating the emergence of the fascist regime.

Second, such an effect persisted through democratisation, favouring the emergence and persistence of the strongest neo-fascist party in the advanced industrial countries — the Movimento Sociale Italiano (MSI).

Finally, survey respondents near the fascist new towns are more likely today to have nationalistic views, prefer a stronger leader in politics and exhibit sympathy for the fascists. Direct experience of life under the regime strengthens this link, which appears to be transmitted across generations inside the family.


Thus, the fascist new towns explain differences in current political and cultural attitudes that can be traced back to the fascist ideology.

These findings suggest that public spending may have long-lasting effects on political and cultural attitudes, which persist across major institutional changes and affect the functioning of future institutions. This result may inspire future research into whether policy interventions can be effective in promoting the adoption of growth-enhancing cultural traits.

The Great Depression as a saving glut

by Victor Degorce (EHESS & European Business School) & Eric Monnet (EHESS, Paris School of Economics & CEPR).

This blog is part of our EHS 2020 Annual Conference Blog Series.


 

Crowd at New York’s American Union Bank during a bank run early in the Great Depression. Available at Wikimedia Commons.

Ben Bernanke, former Chair of the Federal Reserve, the central bank of the United States, once said ‘Understanding the Great Depression is the Holy Grail of macroeconomics’. Although much has been written on this topic, giving rise to much of modern macroeconomics and monetary theory, there remain several areas of unresolved controversy. In particular, the mechanisms by which banking distress led to a fall in economic activity are still disputed.

Our work provides a new explanation based on a comparison of the financial systems of 20 countries in the 1930s: banking panics led to a transfer of bank deposits to non-bank institutions that collected savings but did not lend (or lent less) to the economy. As a result, intermediation between savings and investment was disrupted, and the economy suffered from an excess of unproductive savings, despite a negative wealth effect caused by creditor losses and falling real wages.

This conclusion speaks directly to the current debate on excess savings after the Great Recession (from 2008 to today), the rise in the price of certain assets (housing, public debt) and the lack of investment.

An essential – but often overlooked – feature of the banking systems before the Second World War was the competition between unregulated commercial banks and savings institutions. The latter took very different forms in different countries, but in most cases they were backed by governments and subject to regulation that limited the composition of their assets.

Although the United States is the country where banking panics have been most studied, it was an exception. US banks had been regulated since the nineteenth century, and alternative forms of savings (postal savings in this case) were limited in scope.

By contrast, in Japan and most European countries, a large proportion of total savings was deposited in regulated specialised institutions. Outside the United States, central banks also accepted private deposits and competed with commercial banks in this area. There were therefore many alternatives for depositors.

Banks were generally preferred because they could offer additional payment services and loans. But in times of crisis, regulated savings institutions were a safe haven. The downside of this security was that they were obliged – often by law – to take little risk, investing in cash or government securities. As a result, they could replace banks as deposit-taking institutions, but not as lending institutions.

We establish this claim using a new dataset on deposits in commercial banks, different types of savings institutions and central banks in 20 countries. We also study how the macroeconomic effect of excess savings depended on the safety of government debt (since savings institutions mainly bought government securities) and on the exchange rate regime (since gold standard countries were much less likely to mobilise excess savings to finance countercyclical policies).
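As a purely illustrative sketch of the kind of comparison such a dataset allows (the figures below are invented, not drawn from the dataset), one can track the share of total deposits held by savings institutions around a panic:

```python
# Illustrative only: a rising savings-institution share of total deposits
# signals a flight from commercial banks to 'safe' non-lending institutions.
import pandas as pd

deposits = pd.DataFrame({
    "year": [1929, 1930, 1931, 1932],
    "commercial_banks": [100.0, 95.0, 80.0, 70.0],      # deposit index, made up
    "savings_institutions": [40.0, 43.0, 52.0, 58.0],   # deposit index, made up
})

total = deposits["commercial_banks"] + deposits["savings_institutions"]
deposits["savings_share"] = deposits["savings_institutions"] / total
print(deposits)
```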

Our argument is not inconsistent with earlier mechanisms, such as the monetary and non-monetary effects of bank failures documented, respectively, by Milton Friedman and Anna Schwartz and by Ben Bernanke, or the paradox of thrift explained by John Maynard Keynes.

But our argument is based on a separate mechanism that can only be taken into account when the dual nature of the financial system (unregulated deposit-taking institutions versus regulated institutions) is recognised. It raises important concerns for today about the danger of competition between a highly regulated banking system and a growing shadow banking system.

Business bankruptcies: learning from historical failures

by Philip Fliers (Queen’s University Belfast), Chris Colvin (Queen’s University Belfast), and Abe de Jong (Monash University).

This blog is part of our EHS 2020 Annual Conference Blog Series.


 

 

The door of a bankrupt business locked with a chain and padlock. Available at Flickr.

 

Business bankruptcies are rare events. But when they occur, they can prove catastrophic. Employees lose their jobs, shareholders lose their savings and loyal customers lose their trusted suppliers.

Essentially, bankruptcies are ‘black swan’ events in that they come as a surprise, have a major impact and are often inappropriately rationalised after the fact with the benefit of hindsight. While they may be extreme outliers, they are also extremely costly for those affected.

Because bankruptcies are so rare, they are very hard to study. This makes it difficult to understand the causes of bankruptcies, and to develop useful early warning systems.

What are the risk factors that shareholders should watch out for when evaluating their investments, or that pension regulators should consider when auditing the future sustainability of workplace pension schemes?

Our solution is to exploit the historical record. We collect a dataset of all bankruptcies of publicly listed corporations that occurred in the Netherlands over the past 100 years. And we look to see what we can learn from taking this long-run perspective.

In particular, we are interested in seeing whether these bankruptcies had common features. Are firms that are about to go out of business systematically different in terms of their financial performance, corporate financing or governance structures than those that are healthy and successful?

Our surprising result is that the features of bankrupt corporations vary considerably across the twentieth century.

During the 1920s and 1930s, small and risky firms were more likely to go bankrupt. In the wake of the Second World War, firms that did not pay dividends to their shareholders were more likely to fail. And since the 1980s, failure probabilities have been highest for over-leveraged firms.

Why does all this matter? What can we learn from our historical approach?

At first glance, it looks as though we can't learn anything: the drivers of corporate bankruptcies appear to change quite significantly across our economic past.

But we argue that this finding is itself a lesson from history.

The development of early warning failure systems needs to take account of context and allow for a healthy degree of flexibility.

What does this mean in practice?

Well, regulators and other policy-makers should not rely solely on ad hoc statistical models estimated on recent data. Rather, they should combine these statistical approaches with common-sense narrative analytics that incorporate the possibility of compensating mechanisms.
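To make the point concrete, here is a schematic example (with simulated placeholder data, not our Dutch sample or our actual model) of a statistical early-warning model of the kind that, used on its own, would miss such shifts unless it is allowed to vary by era:

```python
# Schematic era-by-era early-warning model: a logistic regression of failure on
# firm characteristics, fitted separately per period so the drivers can change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical firm-year data: log size, leverage ratio, dividend-payer flag.
X = np.column_stack([
    rng.normal(size=n),          # log firm size
    rng.uniform(0, 1, size=n),   # leverage ratio
    rng.integers(0, 2, size=n),  # pays a dividend (0/1)
])
era = rng.choice(["interwar", "postwar", "post1980"], size=n)
failed = rng.integers(0, 2, size=n)  # placeholder outcome: 1 = went bankrupt

# Fit one model per era instead of pooling, so coefficients can differ by period.
models = {}
for e in np.unique(era):
    mask = era == e
    models[e] = LogisticRegression().fit(X[mask], failed[mask])
    print(e, models[e].coef_.round(2))
```

Even this simple flexibility only goes part of the way; the narrative, context-specific judgement described above remains essential.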

There are clearly different ways in which businesses can go bankrupt. Taking a very recent perspective ignores many alternative routes to business failure. Broadening our scope has allowed us to identify not only the factors that can lead to business instability, but also how these factors can be mitigated.