Medical research has earned something of a bad reputation in recent years, the consequence of a growing awareness that many of its claims are “false positives,” promising results that don’t hold up under further scrutiny. But a new study (ironic, I know) hints that simple fixes, such as declaring a study’s goals and methods publicly before conducting the research, could help.
False positives are not hard to come by. Sometimes a researcher might commit actual fraud, making up data to support a drug’s efficacy. More often, studies simply aren’t constructed properly. For example, a drug trial might fail to keep researchers from knowing who’s taken the drug and who’s taken a placebo, in which case they might inadvertently influence patients’ responses to either one. And sometimes a result is just a fluke, a statistical aberration and nothing more.
Yet, according to several recent studies, negative results (that is, findings that a drug or treatment has no effect) are quite common. One study of treatments for heart ailments found that a majority reached negative results: most of the treatments appeared to be ineffective. A study of cancer treatments reached a similar conclusion.
Researchers Robert Kaplan and Veronica Irvin wanted to know how, and why, the prevalence of negative results had changed over time. One prominent candidate was a 1997 law, implemented in 2000 as ClinicalTrials.gov, requiring researchers to register drug trials with the National Institutes of Health within three weeks of beginning a study. In particular, drug researchers had to declare their research methods up front and state which medical outcomes they would study. Congress meant business, too: in 2007, it added $10,000-a-day fines for failing to register drug trials on time.
To investigate the law’s impact, Kaplan and Irvin focused on 55 large studies funded by the National Heart, Lung, and Blood Institute and conducted between 1970 and 2012. For each study, the researchers categorized the results as beneficial, harmful, or showing no effect (i.e., a negative result).
Thirty of the studies were published prior to 2000, and 17 of them found beneficial effects of the drugs under study. In sharp contrast, just two of the 25 studies published after 2000 found that a medication had a beneficial effect on the outcome of interest, that is, the one registered with the NIH. Perhaps more interesting: Twelve of those 25 studies reported their drugs had benefits for medical outcomes they hadn’t declared they would study, suggesting that pre-registration of a study’s design and aims had a substantial impact on how often researchers reported positive results.
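Just how stark that shift is becomes clearer with the raw counts in hand. As a quick back-of-the-envelope check (my own illustration using the published counts; the paper’s actual statistical analysis isn’t detailed here), one can compare the two positive-result rates and ask how likely a gap that size would be by chance alone:

```python
# Compare positive-result rates before and after mandatory registration
# on ClinicalTrials.gov. The counts come from the figures reported above;
# the choice of test is an illustrative assumption, not Kaplan and
# Irvin's published analysis.
from scipy.stats import fisher_exact

pre_positive, pre_total = 17, 30    # studies published before 2000
post_positive, post_total = 2, 25   # studies published after 2000

print(f"Pre-2000 positive rate:  {pre_positive / pre_total:.0%}")    # 57%
print(f"Post-2000 positive rate: {post_positive / post_total:.0%}")  # 8%

# 2x2 contingency table: rows are eras, columns are positive/non-positive.
table = [
    [pre_positive, pre_total - pre_positive],
    [post_positive, post_total - post_positive],
]
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact test: p = {p_value:.4f}")
```

Run as written, the test puts the odds of a gap that large arising by chance far below the conventional 0.05 threshold, consistent with the authors’ reading that something about the post-2000 regime, not luck, changed the results.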
Befitting their subject, Kaplan and Irvin point out a number of limitations and alternative explanations. Yet it’s hard to escape the implication that forcing researchers to declare the details of their studies beforehand led to fewer reports of beneficial drugs. Before 2000, for example, researchers could dredge their data for some benefit of a drug or therapy, something, anything, whether or not that was what they set out to study. “Preregistration in ClinicalTrials.gov essentially eliminated this possibility,” Kaplan and Irvin write.
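The mechanics of that kind of data dredging are easy to demonstrate. Here’s a minimal simulation (entirely my illustration; the group sizes and outcome count are assumptions, not figures from the study): give a drug no real effect at all, measure 20 different outcomes, and test each at the conventional p < 0.05 threshold.

```python
# Simulate outcome-switching: a drug with zero true effect, tested
# against 20 independent outcomes per trial. Parameters are illustrative
# assumptions, not values from Kaplan and Irvin's study.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
N_TRIALS, N_OUTCOMES, N_PATIENTS = 1000, 20, 50

dredged = 0
for _ in range(N_TRIALS):
    # Drug and placebo groups drawn from the same distribution: no real effect.
    drug = rng.normal(size=(N_OUTCOMES, N_PATIENTS))
    placebo = rng.normal(size=(N_OUTCOMES, N_PATIENTS))
    p_values = ttest_ind(drug, placebo, axis=1).pvalue
    if (p_values < 0.05).any():  # some outcome, any outcome, clears the bar
        dredged += 1

print(f"Trials with at least one 'significant' outcome: {dredged / N_TRIALS:.0%}")
# Expect roughly 1 - 0.95**20, i.e. about 64%, despite a useless drug.
```

Nearly two-thirds of these do-nothing trials produce at least one outcome that looks statistically significant, which is exactly the loophole that declaring outcomes in advance closes.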