Beware letting inexact science of modelling drive policy

Originally published in The Australian.

Economists don’t know much about epidemiology, but they know quite a bit about the pitfalls of statistics in general and modelling in particular — learned from years of unhappy experience.

Modelling is an inexact science, and is sometimes misused in the policy process. This experience is relevant to the policy response to COVID-19.

Three or four decades ago, smart young economists were building large-scale models to mimic the interactions of the macro-economy. The promise was that if enough detail and data could be incorporated, the model would give an accurate representation of how the real economy worked. The model would tell policymakers what to do.

In practice, these large-scale macro-models never played a central role in policymaking. They were, at most, debating tools in the policy argument.

It’s not just that reality is very complex. After all, weather forecasting has improved spectacularly thanks to large-scale computer modelling. Unlike the physics of weather, the economy is evolving over time and doesn’t obey fixed rules. Consumers and investors aren’t robots, always behaving completely rationally and consistently over time.

Large “black box” models were relegated to the backrooms. Attention shifted to smaller models where the key relationships are transparent and can be understood intuitively, even by non-experts. The models provide just one input into the broader policy debate.

Contrast this with the role that epidemiological modelling has played in the policy response to COVID-19. Politicians are never heard to say that they are closely following expert advice based on scientific economic modelling, yet the epidemiology models seem to be driving policy, with politicians reluctant to add their own overlay. Australia went into strong containment when the Imperial College model and the Doherty model foretold an overwhelming of intensive-care facilities. The models said that this could be averted only by the restrictions we still have in place. Now we are looking towards easing, based largely on the effective reproduction number R being clearly below 1, again according to the latest scientific modelling.

These models, however, have many of the same weaknesses as economic models, with the same temptations to use models as a debating tool to support a favoured argument.

As with economic modellers, epidemiologists can’t re-run a real-life episode multiple times to get a sound estimate of the parameters that determine how a disease develops or how the hospital system copes. The science of the disease may be less changeable than economic behaviour, but the critical value of the reproduction rate will depend on how the public behaves in response to the containment measures, and their own evolving perceptions of the risks.

The critical issue now is how R will change in response to eased restrictions: on this, there is little statistical guidance. The apparently low community transmission value of R, even before the restrictions were tightened in the second half of March, might suggest that R will still be below the critical value of 1, even with all restrictions lifted except border controls. But this figure is based on a tiny sample.

In short, epidemiology models and economic models are both subject to the same “radical uncertainty” that Mervyn King and John Kay describe in their recent book, where the mathematical probabilities are unknown, the input data are uncertain and the behavioural relationships are unstable. How will R behave as the restrictions are lifted? This depends mainly on how people behave, and the honest answer is “no one knows”. It depends on trial and error, not a model.

What are the lessons here? Use small models that focus on a few critical parameters, and run multiple simulations to test how sensitive the results are to changes in those parameters. The simulations should be intuitive enough that non-expert policymakers can make their own judgments about the validity of the parameters, and add their political overlay.
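To make the point concrete, here is a minimal sketch of that approach — a toy SIR model of my own construction, not any model cited above — re-run across a range of reproduction numbers to show how sharply the outcome swings on that single parameter:

```python
# A tiny discrete-time SIR model (illustrative only: the population size,
# recovery period and seed below are arbitrary assumptions, not figures
# from the article), swept across several values of R0.

def sir_peak(r0, recovery_days=10, population=1_000_000, seed=100, days=365):
    """Return the peak number of simultaneous infections for a given R0."""
    gamma = 1.0 / recovery_days   # daily recovery rate
    beta = r0 * gamma             # daily transmission rate implied by R0
    s, i, r = population - seed, seed, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

# Sweep the one critical parameter and compare outcomes.
for r0 in (0.9, 1.1, 1.5, 2.5):
    print(f"R0 = {r0}: peak simultaneous infections ≈ {sir_peak(r0):,.0f}")
```

With R0 below 1 the outbreak simply fades; nudge it modestly above 1 and the peak grows by orders of magnitude. A non-expert can read that sensitivity directly off the output, which is exactly the transparency the small-model approach is meant to deliver.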

Why not make this decision process more transparent, with all its uncertainties and competing views? Openness fosters trust. Earlier publication of the initial Doherty modelling, with its dramatic mortality predictions, could have drawn out how dependent these predictions were on the assumed parameter values.

Australia seems to be on a good path, especially compared with overseas experience, thanks largely to strong and timely epidemiology advice. This kind of success will leave the policymakers and their expert advisers open to the usual criticism of successful pre-emptive policies: the critics assert afterwards that the measures were too restrictive, even unnecessary. If the public feel informed during the decision process and can understand its logic, with all the uncertainties, they will more readily accept that it was better to err on the side of caution than to follow the early insouciance of Italy, Spain, the UK, Sweden and the US.

R is now well below 1; the number of new cases is tiny; and the low rate of positive tests, focused as testing is on the most likely cases, strongly suggests an even lower rate in the community at large. If effective eradication is achieved and maintained, Australia can quickly get back to a semblance of normality — probably with border controls, limits on crowds, and isolation of the elderly, with particular attention to aged care. Delay imposes a huge cost on the economy. We should start this easing process now.

Stephen Grenville is a non-resident fellow at the Lowy Institute and former deputy governor at the Reserve Bank
