
We need to forecast war



17 October 2012 10:12

Christopher Joye is a leading economist, policy advisor, fund manager and former director of the Menzies Research Centre.

In The Australian Financial Review today I have a column that responds to a question posed by Sam Roggeveen. Specifically, Sam asks, 'What's the best way to deal with strategic uncertainty?'

I was surprised by what I discovered when I started diving into the subject. In short, few defence planners or related researchers appear to engage in much scientific, or quantitative, forecasting of the likelihood of conflict based on historical data. Given the importance of defence as a form of national catastrophe insurance, and the amount of money we spend on it each year, I find the absence of this analysis rather startling.

During an initial conversation with ASPI's Mark Thomson, I proposed that for every conflict over the past 100 to 200 years, one could quantify the timing and severity of the battle and other germane variables (e.g. the participants). This would then supply one with a hard 'time-series' of data on both the incidence and impact of wars.

Trying to predict individual conflagrations decades down the track is near impossible. I suggested we could, however, use this time-series of data to project the 'probability distribution' of conflicts Australia prospectively faces over the next, say, 25 years.

In today's AFR column, I argue: 'Defence spending is about provisioning against adverse contingencies. Private insurers don't try to anticipate specific natural disasters. They take as much long-term data as they can access and, with conservative calibrations, employ it to describe the expected frequency and severity of crises.'

There are well-documented simulation methods defence researchers could use, which randomly 'draw' from a historical time-series to project future paths over any given period. These exercises are repeated thousands of times to get a multiplicity of potential trajectories, which allow one to build up a realistic distribution of outcomes.
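
The resampling exercise described above can be sketched in a few lines of Python. The inter-conflict gaps below are invented for illustration, not drawn from the Correlates of War data; the point is only to show how resampling a historical time-series builds up a distribution of outcomes over a horizon:

```python
import random

random.seed(42)

# Hypothetical gaps (in years) between conflicts involving a given country.
# Illustrative numbers only -- a real study would use Correlates of War data.
historical_gaps = [3, 7, 1, 12, 5, 2, 9, 4, 6, 15, 2, 8]

def simulate_conflicts(horizon_years, gaps, n_runs=10_000):
    """Bootstrap: repeatedly resample historical inter-conflict gaps and
    count how many conflicts fall inside the horizon on each run."""
    counts = []
    for _ in range(n_runs):
        elapsed, conflicts = 0, 0
        while True:
            elapsed += random.choice(gaps)  # draw a gap with replacement
            if elapsed > horizon_years:
                break
            conflicts += 1
        counts.append(conflicts)
    return counts

counts = simulate_conflicts(25, historical_gaps)

# The thousands of simulated paths yield an empirical distribution,
# from which probabilities and expectations can be read off directly.
prob_at_least_one = sum(c >= 1 for c in counts) / len(counts)
expected = sum(counts) / len(counts)
print(f"P(>=1 conflict in 25 yrs) ~ {prob_at_least_one:.2f}")
print(f"Expected conflicts in 25 yrs ~ {expected:.1f}")
```

The same skeleton extends naturally to severity: each drawn conflict could also draw a casualty or cost figure from the historical record, giving a joint distribution of frequency and impact.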

ASPI's Thomson explained to me that the University of Michigan had, in fact, done exactly what I proposed in the first part of the exercise outlined above, with its 200-year Correlates of War database. This contains granular information on every intra-, inter- and non-state conflict since 1816.

But to date, defence researchers have not done much, if any, forecasting using this data. Instead of quantitative analysis, we get subjective historical investigations of discrete conflicts and supposedly expert verbiage on what might unfold.

In a 2012 paper, Swiss academic Thomas Chadefaux concludes:

'Unfortunately, the prediction of war has been the subject of surprisingly little interest in the literature, in marked difference to a wide range of fields, from finance to geology, which devote much of their attention to the prediction of extraordinary – black swan – events such as financial crises or earthquakes.'

So I have three ideas for the defence community to consider.

First, more research needs to be done on forecasting the distribution of military risks Australians face. This is well within the capabilities of universities and defence. By disclosing the true empirical probability of war, policymakers might also attenuate the tendency of younger generations to under-value defence insurance by extrapolating out from their own peaceful, albeit brief, existence.

Think-tanks could sponsor competitions on the subject. They could, for instance, leverage off innovative yet inexpensive global research platforms such as Kaggle, which are increasingly used by top companies for research and development. A small prize, say $20,000, could be offered to the team that produces the best conflict forecasting model using historical data. Kaggle explains:

'Companies, governments and researchers present datasets and problems – the world's best data scientists then compete to produce the best [predictive] solutions … The competition host pays prize money in exchange for the intellectual property behind the winning model.'

A second idea is to evaluate 'aggregations' of expert opinions. That is, to capitalise on the 'wisdom of crowds'. Research has shown that the 'average' forecast is often more powerful than the individual estimates. In the US, the Intelligence Advanced Research Projects Activity has sponsored a four-year competition for statisticians and computer scientists to extract predictive political signals from public opinion.

Stakeholders could run quarterly surveys of hundreds of experts asking them to quantify the risk of wars and regime change. The explanatory power of these benchmarks could then be appraised over time and used to contextualise individual predictions.
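
A toy illustration of why the averaged forecast tends to win: score a hypothetical panel's probability estimates with the Brier rule (squared error against the eventual outcome). The estimates below are made up, but the arithmetic shows the general result:

```python
# Hypothetical expert probability estimates for a single question,
# e.g. "interstate war in region X within 12 months".
expert_estimates = [0.05, 0.20, 0.10, 0.40, 0.15, 0.08, 0.25, 0.12]

consensus = sum(expert_estimates) / len(expert_estimates)

# Suppose the event did not occur (outcome = 0).
# Brier score = (forecast - outcome)^2; lower is better.
outcome = 0
brier = lambda p: (p - outcome) ** 2

consensus_score = brier(consensus)
individual_scores = [brier(p) for p in expert_estimates]
avg_individual_score = sum(individual_scores) / len(individual_scores)

print(f"Consensus forecast: {consensus:.3f}, Brier: {consensus_score:.4f}")
print(f"Average individual Brier: {avg_individual_score:.4f}")
# Because the Brier rule is convex, the consensus score can never be
# worse than the average of the individual scores (Jensen's inequality).
```

Appraising these scores quarter by quarter, as suggested above, would reveal both how well the crowd performs and which individual experts consistently beat it.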

There are many precedents for this in financial markets. Surveys of manufacturing activity, business and consumer confidence, and economist forecasts all offer statistically useful insights.

A third idea is the construction of research-based 'risk indices' that provide early warning signals of dislocations emerging.

Dr Chadefaux created a real-time risk index via weekly analysis of online news media for language that has been historically associated with conflicts. He finds this measure is able to anticipate 70 per cent of large-scale wars with few false positives.
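
A stripped-down version of such a news-based index might simply count conflict-associated terms per thousand words of coverage each week. The lexicon and scoring below are illustrative assumptions, far cruder than Chadefaux's actual method, but they show the basic mechanics:

```python
# Assumed, illustrative lexicon of conflict-associated terms.
CONFLICT_TERMS = {"mobilisation", "ultimatum", "troops", "border",
                  "sanctions", "escalation", "hostilities"}

def weekly_risk_score(articles):
    """Score a week's articles: conflict-term hits per 1,000 words."""
    words = [w.strip(".,;:!?\"'()").lower()
             for text in articles for w in text.split()]
    if not words:
        return 0.0
    hits = sum(w in CONFLICT_TERMS for w in words)
    return 1000 * hits / len(words)

week = ["Troops massed at the border after the ultimatum expired.",
        "Markets shrugged off talk of sanctions and escalation."]
print(f"Risk score: {weekly_risk_score(week):.1f} per 1,000 words")
```

A real index would weight terms by their historical association with subsequent conflict, rather than counting them equally, and would be validated against the conflict record to estimate hit rates and false positives.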

It sounds like something we should be doing.

Photo of US Marines in 1943 by Flickr user Marine Corps Archives & Special Collections.
