When it comes to forecasts, politics fails more often than science
Politicians and the media struggle with predictions. The problem lies more in a poor understanding of uncertainty than in the reliability of the models, argues Reto Knutti.
“Taskforce confirms false prediction” was the headline in Blick.1 Meanwhile, Tagesanzeiger wrote: “Coronavirus taskforce admits to false prediction.” Below it was a picture of Covid-19 Task Force president Tanja Stadler, put in a virtual pillory.2 And with the reliability of a Swiss watch, politicians critical of science seized the opportunity to challenge forecasts in general.3
What at first glance appears to be editorial clickbait or political polemics is a symptom of a rift between science and politics. “The experts – whose opinions I hardly pay attention to any more – do not seem to me to be living in the real world,” commented Federal Councillor Ueli Maurer recently. In January, Alain Berset said: “We cannot work with forecasts.”
Apparently, some people do not see any added value in science-based forecasts. Why is that? Are these predictions really useless? Would anyone dismiss as useless the prediction that spring will follow this winter?
How forecasts are made
Climate change, weather forecasts or assessing a pandemic – regardless of the subject, scientific forecasts in all fields are nearly always based on four elements: first, a model; second, data used to estimate unknown parameters; third, assumptions based on scenarios; fourth, expert knowledge. The weight of these four elements depends greatly on the specific problem.
The model describes our understanding of the dynamics of a particular system. The complexity of different systems varies greatly. Celestial mechanics determine the seasons so precisely that any uncertainty in the forecast is negligible. Much more difficult are biological systems that cannot be described with a simple equation, or systems that exhibit chaotic behaviour, such as the weather. Some complex processes surpass our understanding or our computational ability to model them directly, so they are described statistically or with approximations.
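To make this first element concrete, the sketch below shows a textbook SIR epidemic model in a few lines of Python – a deliberately simplified illustration with invented parameter values, not the model actually used by the Covid-19 Task Force or in climate research.

    # Minimal sketch of a dynamic model: a textbook SIR epidemic model.
    # All parameter values are illustrative, not calibrated to any real outbreak.
    def sir_step(s, i, r, beta, gamma, dt=1.0):
        """Advance the susceptible/infected/recovered population fractions by one step."""
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

    s, i, r = 0.99, 0.01, 0.0    # initial population fractions (assumed)
    beta, gamma = 0.3, 0.1       # transmission and recovery rates (assumed)
    infected_over_time = []
    for day in range(150):
        s, i, r = sir_step(s, i, r, beta, gamma)
        infected_over_time.append(i)

    print(f"Peak share of the population infected: {max(infected_over_time):.1%}")

Even such a toy model already shows the characteristic rise, peak and decline of an epidemic wave; real models add age groups, immunity, reporting delays and much more.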
Models: only as good as the underlying data
Second, a model requires data for calibration and verification. This is where climate models differ from epidemiological models. Critical data and relationships in climate research have been collected systematically for decades, whereas for the mutating SARS-CoV-2 variants the available data was often limited or unrepresentative, and decisive factors shifted with testing strategies or advances in treatment.
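As a hedged illustration of this second element, the sketch below estimates a single unknown parameter – an exponential growth rate – from a short series of invented case counts and then checks the fit against the observations. Real calibration of climate or epidemiological models is far more involved, but the principle is the same.

    import numpy as np

    # Invented daily case counts, used only to illustrate parameter estimation.
    days = np.arange(10)
    cases = np.array([102, 131, 160, 208, 255, 330, 410, 520, 660, 840])

    # Fit log(cases) = log(c0) + r * day, i.e. a straight line in log space.
    r, log_c0 = np.polyfit(days, np.log(cases), 1)
    print(f"Estimated growth rate: {r:.2f} per day "
          f"(doubling time of roughly {np.log(2) / r:.1f} days)")

    # Verification: how well does the fitted curve reproduce the observations?
    predicted = np.exp(log_c0 + r * days)
    print(f"Mean relative error: {np.mean(np.abs(predicted - cases) / cases):.1%}")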
Third, where decisions are made, they must also be factored into forecasts. Forecasts – i.e. predictions about developments in the real world – thus become projections or “what-if” scenarios in technical terms. Examples include the expected epidemiological trend with a certain combination of measures, or the degree of warming that will accompany a specific course of CO2 emissions. If the worst-case scenario does not occur, this often does not mean the model is wrong but rather that measures have been taken to prevent it.
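To illustrate this third element, the sketch below turns a simple growth model into “what-if” projections: the projected case numbers depend on an assumed contact-reduction scenario that policymakers, not the model, choose. The scenario names and all numbers are invented for illustration.

    import numpy as np

    # Hypothetical scenarios: how strongly measures reduce transmission.
    scenarios = {"no measures": 1.0, "moderate measures": 0.7, "strict measures": 0.4}

    r0 = 0.10           # assumed baseline growth rate per day without measures
    cases_today = 1000  # assumed current daily case count
    horizon = 30        # projection horizon in days

    for name, contact_factor in scenarios.items():
        r = r0 * contact_factor
        projected = cases_today * np.exp(r * horizon)
        print(f"{name:>17}: roughly {projected:,.0f} daily cases in {horizon} days")

Which of these paths materialises is a choice made by society, not a property of the model.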
The nature of these two uncertainties is completely different: scenarios present us with choices and are therefore ultimately a matter for policymakers. A scenario is not a prediction of what will happen, but it helps us to understand the system and identify its vulnerabilities. The uncertainty within a given scenario, on the other hand, reflects an incomplete understanding of how the system behaves, or limited data. It is the task of science to reduce this.
Models are approximations of reality
Fourth and finally, expert knowledge is required to assess how far simplifications in the model or errors in the data limit the precision of a prediction, and to communicate this. A model is never entirely accurate – it is and remains a model, and at best provides a more or less faithful depiction of reality. As the British statistician George Box once put it so well: “All models are wrong – but some are useful.” So the question is not whether a model is correct – every model is a simplification of reality and therefore “wrong” in a strict sense – but whether it is suitable for addressing a specific question.
Were the Omicron predictions really wrong?
This brings us back to the issue of forecasting the Omicron wave.4 The progression of case numbers in January was predicted accurately, meaning the model was adequate. Hospitalisations, on the other hand, remained below even the most optimistic predictions. Because hospitalisations lag behind reported cases, they could not yet be predicted from Swiss data; it made sense to rely on laboratory data and data from other countries, but these clearly did not fully reflect the situation in Switzerland. In addition to medical aspects – such as lower virulence and the fact that many infections occurred in recovered or vaccinated people with existing basic immunity and therefore took a milder course – behavioural aspects that are difficult to quantify may also have played a role, for instance more cautious behaviour among high-risk groups even where this was not mandated. The experts will eventually present a final epidemiological assessment.
But one thing is clear: projections are not deliberately distorted by scientists. They reflect the data and state of knowledge available at the time, as well as these can be represented quantitatively. As new knowledge becomes available, the projections are adjusted.
“The most relevant question is ultimately whether we can make better decisions with or without quantitative forecasts: the answer is almost always with.” – Reto Knutti
However, forecasts can do more harm than good if they are systematically flawed or underestimate the uncertainties. In the case of SARS-CoV-2, forecasts were quite good over the entire duration of the pandemic given the often-sparse data available.5 The longer we work with models, the better we can determine the level of uncertainty and use the results to assess the risk. In the field of climate research, 100 years of data and 50 years of model calculations have impressively demonstrated their accuracy.6 Moreover, extreme weather forecasts have given us a wealth of experience in how society makes decisions in response, and in when and how warnings have to be issued.
Even daily decisions depend on forecasts
Without forecasting, it is practically impossible to make a decision. Even the gut feeling that nothing will change from today, or the expectation that the problem will magically disappear tomorrow, is, after all, a forecast. Simply ignoring models, scenarios, data and expertise is not an option: would you honestly prefer to navigate a pandemic that has claimed about six million lives without any professional forecast?
The most relevant question is ultimately whether we can make better decisions with or without quantitative forecasts: the answer is almost always with. When forecasts are scientifically sound, they provide good guidance and direction. The big challenge here is – as in every field – to systematically quantify the uncertainty and reliability. Ignoring the knowledge that forecasts generate shows mainly that they do not fit into one’s world view.
More dialogue and trust needed
Forecasts combine all our knowledge about a system. They are rarely perfect, but they are constantly being improved. Yet however good a forecast is, it is of little use if people do not understand it, cannot deal with the uncertainties or are unwilling to act on it. Looking ahead to future crises, more dialogue and trust between policymakers, society, the media and researchers is required.
This article by Reto Knutti was first published in the Klartext column of Higgs magazine.
References
1 Blick (13.02.2022): Taskforce bestätigt Omikron-Fehlprognose
2 Tagesanzeiger (12.02.2022): Corona-Taskforce gibt Fehlprognose zu
3 For example on Twitter
4 The Federal Council on YouTube: 11.01.2022 – Point de Presse
5 NZZ (29.01.2022): Warum die Wissenschafter manchmal falsch liegen
6 Geophysical Research Letters (2019): Evaluating the Performance of Past Climate Model Projections