Scientists, policymakers and the media should acknowledge inherent uncertainties in epidemiological models projecting the spread of COVID-19 and avoid “catastrophizing” worst-case scenarios, Cornell researchers say.
Threats about dire outcomes may mobilize more people to take public health precautions in the short term, but invite criticism and backlash if uncertainties in the models’ data and assumptions are not transparent and later prove flawed, the researchers found.
Among political elites, criticism from Democrats in particular may have the unintended consequence of eroding public trust in the use of models to guide pandemic policies and in science more broadly, their research shows.
“Acknowledging that models are grounded in uncertainty is not only the more accurate way to talk about scientific models, but political leaders and the media can do that without also having the effect of undermining confidence in science,” said Sarah Kreps, the John L. Wetherill Professor in the Department of Government in the College of Arts and Sciences (A&S).
Kreps is the author with Douglas Kriner, the Clinton Rossiter Professor in American Institutions in the Department of Government (A&S), of “Model Uncertainty, Political Contestation and Public Trust in Science: Evidence from the COVID-19 Pandemic,” published Sept. 25 in Science Advances.
Kreps and Kriner conducted five experiments – surveying more than 6,000 American adults in May and June – to examine how politicians’ rhetoric and media framing affected support for using COVID-19 models to guide policies about lockdowns or economic reopenings, and for science generally.
Models have been a flashpoint for debate since the start of the pandemic, as scientific understanding of the novel coronavirus has evolved rapidly and in public view. An early model from Imperial College London that was influential in guiding policy predicted more than 2 million U.S. fatalities, if no control measures were taken.
Criticism of such models became a mainstay on cable news, including Fox News host Tucker Carlson’s April characterization of them as “completely disconnected from reality.” In May, Nate Silver, a prominent data analyst, advocated humility in reporting about coronavirus data that he said is “highly imperfect and thus sometimes leads to misleading conclusions,” urging journalists to provide context about uncertainties.
Kreps and Kriner found that different presentations of scientific uncertainty – acknowledging it, contextualizing it or weaponizing it – can have important implications for public policy preferences and attitudes.
For example, they said, Republican elites have been more likely to attack or “weaponize” uncertainty in epidemiological models. But the survey experiments showed that their criticism, which the public apparently expected, didn’t shift confidence in models or in science. Support for COVID-19 science from several Republican governors who split with their party’s mainstream also did not affect confidence.
Criticism by Democrats, in contrast, registered as surprising and was influential. When shown a quote by New York Gov. Andrew Cuomo disparaging virus models, survey respondents’ support for using models to guide reopening policy dropped by 13%, and support for science in general decreased, too.
“It suggests that the onus is on Democrats to be particularly careful with how they communicate about COVID-19 science,” Kriner said. “Because of popular expectations about the alignments of the parties on science more broadly and on issues like COVID-19 and climate change, they can inadvertently erode confidence in science even when that isn’t their intent.”
Communication about models can ignore uncertainty by focusing on a single number or “point prediction” of infections or fatalities that is more susceptible to change or retraction over time. Alternatively, it could acknowledge uncertainty by providing a model’s ranges of estimates. But in experiments, that acknowledgment reduced support for science-based policymaking.
“Such an approach to communicating scientific uncertainty may be more intellectually honest,” the authors wrote, “but it nonetheless comes at a cost of eroding public confidence.”
Another way of ignoring or downplaying uncertainty is to present narratives that sensationalize or “catastrophize” the most alarming projections and potential consequences of inaction. An April article in The Atlantic about Georgia’s reopening strategy, for example, referred to the state’s “experiment in human sacrifice.”
The researchers’ experiments showed that type of COVID-19 communication significantly increased public support – by 21% – for using models to guide policy, with gains primarily attributed to people who were less scientifically literate.
But the results also indicated that public trust was diminished when models proved wrong or stark projections did not come to pass.
“Thus, in the long term,” Kreps and Kriner concluded, “acknowledging and contextualizing uncertainty may minimize public backlash should scientific projections and guidance change markedly.”
Support for this research came from the Cornell Atkinson Center for Sustainability, through its Rapid Response Fund.