COVID-19: How to Evaluate Pandemic Research
Expert advice on how to be a discriminating consumer of COVID-19 studies.
Since the first cases of COVID-19 were reported in December 2019, the amount of related research has been staggering: Over 110,000 articles have been published on COVID-19 or the SARS-CoV-2 virus.
“We’re seeing more publications on this virus than anything any of us in the field of infectious disease have seen in our lives,” says Justin Lessler, an associate professor in Epidemiology.
Keeping up with the latest science can be daunting even for health professionals, but awareness of a few basic research standards can help nonexperts become better-informed consumers of pandemic news.
Perhaps the most important consideration is that science is a continuous process, says Lessler. He notes that while preliminary research findings often make a big splash in top journals, more thorough follow-up studies that contradict earlier findings may surface in specialized publications—with less hype.
Some findings have made headlines even without publication in scientific journals. It's important for readers to recognize when such news originates from preprints, manuscripts posted to online repositories such as bioRxiv that have not yet undergone formal peer review. "Preprints are essentially a first draft," explains Heather McKay, PhD, assistant scientist in Epidemiology. Readers should therefore interpret their findings with extra caution.
Even with studies that have undergone expert vetting, Lessler notes that the urgency created by the pandemic—particularly in its initial months—sometimes resulted in a peer review process with “a little less rigor than normal.”
Many early clinical studies simply described what happened to small numbers of people with confirmed SARS-CoV-2 infection who sought medical care because they had clinical symptoms of COVID-19, McKay says. Such papers have value for establishing hypotheses about symptoms and outcomes, she notes. But large, well-designed studies, ideally with a placebo arm in the case of medical treatments, are essential to confirm results. Anyone evaluating a clinical study must consider whether it includes a sufficiently large number of participants: if a study is too small, it can be impossible to tell whether patients are showing a real and meaningful response to a treatment or whether any apparent benefit is due to chance.
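To make the sample-size point concrete, here is a rough, hypothetical sketch of a standard two-proportion power calculation, not drawn from any study discussed in this article. It estimates how many patients each arm of a trial would need to reliably detect a drop in a bad outcome from 20% under usual care to 10% with a treatment; the outcome rates, significance level, and power target are all assumptions chosen for illustration.

```python
# Hypothetical sample-size sketch: how many patients per arm a trial needs
# to detect a drop in a bad outcome from 20% (control) to 10% (treatment)
# with 80% power at a two-sided alpha of 0.05. All numbers are illustrative.
from math import ceil
from statistics import NormalDist


def n_per_arm(p_control: float, p_treated: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Standard two-proportion sample-size formula, returning patients per arm."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    variance = p_control * (1 - p_control) + p_treated * (1 - p_treated)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_control - p_treated) ** 2)


print(n_per_arm(0.20, 0.10))  # about 197 patients per arm, roughly 400 in total
```

Even halving a fairly common 20% complication rate calls for roughly 200 patients per arm, which is why a case series of a few dozen patients cannot, on its own, establish that a treatment works.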
Lessler, who specializes in epidemic modeling, says that the low technical bar to entry in his field can be a problem, contributing to flawed assertions that may appear reasonable to a nonexpert. Readers can therefore look to authors with established credentials in epidemiology and infectious disease modeling for more trustworthy conclusions.
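To illustrate how low that bar is, the toy sketch below implements a bare-bones SIR (susceptible-infectious-recovered) model in a few lines of Python. It is purely illustrative, not any published model, and the transmission and recovery rates are invented; the point is that writing the code is easy, while justifying the assumptions behind parameters like these is the hard, expert part.

```python
# Toy SIR (susceptible-infectious-recovered) model with invented parameters.
# beta: transmission rate per day; gamma: recovery rate per day (1/gamma is
# the average infectious period). Nothing here is calibrated to real data.
def sir(population=1_000_000, initial_infected=10,
        beta=0.3, gamma=0.1, days=180):
    """Run a simple discrete-time SIR simulation and return daily (S, I, R)."""
    s, i, r = population - initial_infected, initial_infected, 0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history


peak_infectious = max(i for _, i, _ in sir())
print(f"Peak infectious in the toy model: {peak_infectious:,.0f}")
```

A model this simple can produce impressive-looking curves, but its projections are only as good as the assumptions feeding it, which is why the authors' modeling expertise matters.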
Another essential research principle: Scientists—like the rest of us—are also learning about the pandemic as they go. “No one study will answer all the questions,” says McKay. “That’s just not how science works.”