When Science Finds Nothing, It Often Publishes Nothing

Researchers are much less likely to report null results, and that’s not a good thing.

Science’s failures are sometimes just as important as its successes, especially when “failure” means failing to replicate a well-known result or failing to find evidence for an important hypothesis. But many of these so-called null results are going unreported, largely because researchers never bother to write them up and get them published.

The situation is a bit like finding Snoopy in the clouds or Jesus in a watermelon. In reality, those sightings are the product of random chance and our remarkable ability to find patterns where there are none. But imagine if most of us never saw plain old fluffy clouds or secular melons. We'd be forgiven for thinking that nature was dominated by comic strip characters and prophets. Likewise, if 20 teams test a hypothesis, 19 find null results, and only the one positive result gets published while the rest go unnoticed, a statistical fluke starts looking like a real effect.
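To make that arithmetic concrete, here is a minimal simulation sketch, assuming 20 hypothetical teams each run a two-group experiment on an effect that is actually zero and test it at the conventional 5% significance level. The team count, sample size, and choice of test are illustrative assumptions, not details from the TESS study.

```python
# Illustrative sketch: 20 teams test a hypothesis that is actually false.
# Roughly 1 team in 20 will cross the 5% threshold by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_teams = 20          # hypothetical independent research teams
n_per_group = 50      # hypothetical participants per condition
alpha = 0.05          # conventional significance threshold

significant = 0
for _ in range(n_teams):
    # Both groups come from the same distribution: the true effect is zero.
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < alpha:
        significant += 1

print(f"{significant} of {n_teams} teams found a 'significant' effect by chance")
# If only the lucky team's result is written up and published, the literature
# records a positive finding without the 19 nulls that put it in context.
```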

“It’s important that the community is aware of the nulls,” says Neil Malhotra, co-author of a new study on publication bias and a professor at Stanford’s Graduate School of Business. Even if the results aren’t published in top journals, science benefits from the context that null results provide, he says.


So how big a problem is this? In the past, the evidence for a bias against publishing null results was largely indirect. For example, the published literature contains a suspicious overabundance of papers that just barely clear conventional statistical thresholds, with p-values sitting just below the 0.05 cutoff. That pattern is consistent with publication bias, but other explanations are possible.

To search for direct evidence, Stanford political science graduate students Annie Franco and Gabor Simonovits worked with Malhotra to analyze projects run through the National Science Foundation’s Time-sharing Experiments for the Social Sciences program, or TESS. Because TESS keeps a record of every experiment it runs, the team could determine whether each study yielded positive or null results and whether it had been published, written up but never published, or never written up at all.

Publication bias is real, the researchers found, but more importantly, the source of that bias seems to be the researchers themselves. Of 47 null results in the TESS data, 29 were never written up at all, let alone submitted to a journal. Only 12 out of 170 positive results met the same fate. Yet when researchers did submit null results for review, they were just as likely to be published as statistically significant ones.

“The published literature is much more likely to contain significant findings,” the authors wrote last week in Science. “Yet null findings are not being summarily rejected, even at top-tier outlets.” Researchers simply aren’t submitting them, they write.

Malhotra says he hopes the study leads more researchers to at least write up their null results so that other scientists are aware of them. Still, he says, it will probably take institutional changes, such as new journals that encourage publishing null findings, before scientists are comfortable reporting those results.
