It’s a simple question, and probably one many of us have asked each other, and ourselves, over the past 20 months. How could a world with advanced researchers, extensive public health protocols, and past pandemic experience have been so unprepared for the COVID-19 pandemic? People have blamed poor communication, public noncompliance, even the scientists themselves, but no single factor can fully explain why the pandemic worsened.
For one of my graduate classes this semester, Quantitative Evidence for Infectious Disease Research, I was tasked with answering this question. Rather than settling on any one explanation, I decided to focus on the scientific community, and specifically on how research is communicated to the public.
If you’re familiar with scientific research and the scientific method, you’ll know that the process begins with a hypothesis; often, there is more than one. The null hypothesis is the hypothesis being tested. It can take many forms, but in epidemiology it essentially states that there is no causal relationship between an exposure and an outcome. For example, a null hypothesis about smoking and lung cancer could posit that there is no causal association between the two. The data will then either be consistent with the null hypothesis or give scientists cause to reject it. This is where an alternative hypothesis comes in: it states that there is some causal association between the exposure and the outcome, for example, that smoking is a cause of lung cancer.
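To make this concrete, here is a minimal sketch in Python (with entirely hypothetical counts, not real data) of how one might test a null hypothesis of no association between smoking and lung cancer:

```python
# A minimal sketch (hypothetical counts, not real data) of testing the null
# hypothesis that smoking and lung cancer are unrelated.
from scipy import stats

# Rows: smokers, non-smokers; columns: lung cancer, no lung cancer
table = [[90, 910],
         [30, 970]]

# Chi-squared test of independence: a small p-value is evidence against the
# null hypothesis of "no association."
chi2, p_value, dof, expected = stats.chi2_contingency(table)
print(f"chi-squared = {chi2:.1f}, p-value = {p_value:.2e}")
```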
Epidemiologists and biostatisticians usually analyze the impact of a cause on an effect through something called “significance testing.” With this method, scientists select a significance level; any result with a p-value less than or equal to that level is considered statistically significant. The p-value is the probability of observing a difference at least as extreme as the one in the data if chance alone were responsible, that is, if the null hypothesis were true. Usually, the significance level is set to 0.05, meaning any result with a 5% or smaller probability of having occurred by chance alone is deemed statistically significant. This is why in some papers you might see researchers state they are 95% confident in their results. Sometimes, these p-values are accompanied by a confidence interval, the range of plausible values for the estimate, calculated at the 95% confidence level. In the truest interpretation, according to frequentist statistics, the 95% confidence interval will include the true effect in at least 95% of replications of the process of obtaining the data.
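As an illustration of these quantities, the sketch below simulates two groups and computes a p-value, a 95% confidence interval, and a significance verdict. The group labels, effect size, and noise level are arbitrary choices for the example, not values from any real study:

```python
# A sketch (simulated data, arbitrary effect size) of a significance test with
# a p-value and a 95% confidence interval for a difference in means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Two hypothetical groups of 50 measurements each (e.g., exposed vs. unexposed)
exposed = rng.normal(loc=11.0, scale=2.0, size=50)
unexposed = rng.normal(loc=10.0, scale=2.0, size=50)

# Two-sample t-test of the null hypothesis "no difference in means"
t_stat, p_value = stats.ttest_ind(exposed, unexposed)

# Pooled-variance 95% confidence interval for the difference in means
n1, n2 = len(exposed), len(unexposed)
diff = exposed.mean() - unexposed.mean()
pooled_var = ((n1 - 1) * exposed.var(ddof=1)
              + (n2 - 1) * unexposed.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)

print(f"p-value: {p_value:.4f}")
print(f"95% CI for the difference: ({diff - t_crit * se:.2f}, {diff + t_crit * se:.2f})")
print("significant at 0.05" if p_value <= 0.05 else "not significant at 0.05")
```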
These results come with caveats, however. Experiments need a large enough sample size to produce reliable estimates, and the model chosen to analyze the data needs to be appropriate for the experiment. Unfortunately, this is not always the case in published studies, and many epidemiologists have previously published reports cautioning authors and readers against common misinterpretations of significance testing (read more here, here, and here). Despite those warnings, the COVID-19 pandemic has only highlighted the issue further: many controversial topics of the past 20 months, including the impact of mask-wearing, the contagiousness (or R-naught) of the virus, and the effectiveness of medications like hydroxychloroquine and ivermectin, have fallen victim to these misinterpretations, exacerbating misunderstanding and mistrust of public health authorities.
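The sample-size caveat can also be seen directly in simulation. The sketch below (with a hypothetical effect of half a standard deviation, again not drawn from any real study) estimates how often a study of a given size would reach statistical significance when a real but modest effect exists:

```python
# A simulation sketch (hypothetical effect and noise) of statistical power:
# the fraction of studies of a given size that reach p <= 0.05 when a real
# effect of half a standard deviation exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def estimated_power(n, effect=0.5, sims=2000, alpha=0.05):
    """Estimate the share of simulated studies (n per group) reaching p <= alpha."""
    hits = 0
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(effect, 1.0, n)
        if stats.ttest_ind(control, treated).pvalue <= alpha:
            hits += 1
    return hits / sims

for n in (10, 30, 100):
    print(f"n = {n:3d} per group -> ~{estimated_power(n):.0%} of studies significant")
```

With an effect of this size, studies with only 10 participants per group detect it well under half the time, which is one reason underpowered studies can produce conflicting results on the same question.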
In the paper attached below, I explore how key publications studying these controversial features of the COVID-19 pandemic succeed or fail in conveying the accuracy and strength of their results to other scientists and, by extension, the public at large. To quote my conclusion:
In a time when scientific results can and should be shared as quickly as possible, researchers and publishers have an additional responsibility to ensure their methods are clearly stated and all results clearly indicate confidence interval ranges and limitations in addition to the basic “significance.”
While I am by no means an expert in epidemiology, I feel that this analysis illustrates why a healthy dose of skepticism is necessary even in quantitative scientific research. We often take for granted that science gives black-and-white answers, seeing research as logical and absolute, but there is plenty of room for uncertainty, as the COVID-19 pandemic has shown. I hope this article helps explain some of the confusion, and equips you with the knowledge to approach future papers with questions and to judge for yourself how reliable the results are.
Thank you.