There’s a fear that the science that gets published—and remember it’s still “publish or perish,” even as “fundraise or perish” moves up on the outside—is the science that shows the experiment succeeding. While at first blush that might not seem so awful, if you think about it for a moment there’s a daisy chain of unwanted consequences (assuming, with proper scientific reserve, that this initial hypothesis is in fact accurate).
This publishing regime’s consequences are so unwanted that some people term them “toxic to science”:
Recent studies have shown how intense career pressures encourage life scientists to engage in a range of questionable practices to generate publications—behaviors such as cherry-picking data or analyses that allow clear narratives to be presented, reinventing the aims of a study after it has finished to "predict" unexpected findings, and failing to ensure adequate statistical power. These are not the actions of a small minority; they are common, and result from the environment and incentive structures that most scientists work within.
At the same time, journals incentivize bad practice by favoring the publication of results that are considered to be positive, novel, neat, and eye-catching. In many life sciences, negative results, complicated results, or attempts to replicate previous studies never make it into the scientific record. Instead they occupy a vast unpublished file drawer.
The preceding is a big chunk of an open letter that The Guardian posted today in which 73 major-league researchers, mostly British and mostly from the life sciences, offer one structural reform they believe will, if not solve the problem, at least make it less of an issue. They want scientists to approach academic journals with their papers before the scientists have done the experiments or committed their results to paper. The journals would then vet the proposal, and if the researchers’ peers determine the idea has merit, the journal would accept the future manuscript “in principle” then and there, which the letter’s cosigners suggest “virtually guarantees publication regardless of how the results turn out.” Oh, and the final paper also gets peer-reviewed.
If you’re a science civilian, reading a lot of stuff about experiments that didn’t show anything, that didn’t produce the cool thing the researchers theorized about at the outset, or that merely confirmed something already established, isn’t going to be all that scintillating. And thanks to journalism, where “novel, neat, and eye-catching” are virtues to be extolled, as are both extremes of positive and negative, you probably won’t be bothered with reading these boring results. Yes, we joyfully embrace publication bias. But in science, knowing that something isn’t so, or that it reliably is, is genuinely valuable (even if it’s no more scintillating to that audience). So by guaranteeing placement, even of boring stuff, a journal does the community a service, even as it likely sacrifices some of the sizzle that these bone-dry journals feel they must maintain to retain their status in a marketplace habitat.
The idea of registration isn’t brand new, since it’s the basis of clinical trials, but it is of recent vintage for publishing. Chris Chambers, a Guardian columnist and one of the letter’s signers, last fall published his own, solitary open letter urging the journal Cortex, where he’d just been named an associate editor, to adopt this model alongside its traditional methods of accepting and publishing manuscripts. He had formulated the plea—which was accepted in March—after a year of talking “with scientists in multiple disciplines (including journal editors), science policy makers, science journalists and writers, and the Science Media Centre, as well as key blog articles (e.g. here, here and here).” Three journals—Cortex, Attention, Perception & Psychophysics, and Perspectives on Psychological Science—are now on board, and a few more have one-off projects.
Those pushing this concept are actually pretty modest in their expectations, but they’re still getting flak. Some of that is reactionary, such as fears that a journal’s ranking (known as its “impact factor”) might suffer. Then again, it might not, as Selena Koleva at the University of Southern California speculated in response to Psychological Science’s suggestion that it was considering accepting pre-registered replication studies:
I suspect that even overall successful replications will find a caveat or two about the effect, which is precisely the kind of thing I am likely to cite in explaining why my conceptual replication/"follow up"/"building on such and such" study doesn't conform in every single respect to the original findings or to every single one of my hypotheses. Isn't that the more likely case anyway?
Other criticisms are fairly nuanced, such as the idea that locking down the parameters of research before beginning narrows the chance that serendipity might walk in unannounced.
I think pre-registering could be beneficial, because it enforces transparency on a crowd that could certainly use some, but it should not impose the same restrictions that are applied to clinical trials, where the stakes are so high that extreme rigor is warranted. Science is often exploratory, and "exploration" is not a dirty word. Not as long as you don’t pretend to have predicted something that you did not. Furthermore, it’s silly to throw away perfectly good evidence just because it is by itself insufficiently strong. When in doubt, testing additional participants is a sensible thing to do, as long as you don’t pretend to have fixed your number of participants in advance.
There’s another school of opposition, one identified by the blogger Neuroskeptic at Discover Magazine:
Pre-registration puts you at a disadvantage—insofar as it limits your ability to use bad practice to fish for positive results. It means you can’t cheat, essentially, which is a handicap if everyone else can.
All in all, registration will be an intriguing experiment in its own right.