Is Social Media Saving Science? - Pacific Standard


Online discussions and post-publication analyses are catching mistakes that sneak past editorial review.
(Photo: VLADGRIN/Shutterstock)


Last week, two big papers that described a seemingly revolutionary method to make stem cells were retracted. The retractions were no surprise to stem cell researchers. Authored by a team of Japanese and American scientists and published in Nature in January, the papers fell under suspicion within days, thanks to discussions on blogs, Twitter, and the online science forum PubPeer. It's a storyline that is frustratingly familiar: A paper is published in one of the world's leading peer-reviewed journals and widely reported in the press, only to have major flaws exposed almost immediately. Last year, readers quickly found flaws in another high-profile stem cell paper. (Those turned out to be innocent, but substantial, typographical errors.) And just last week, the editors of the journal that published the recent Facebook social contagion study put out an official Expression of Concern, acknowledging that the study violated the journal's standards for research on human subjects.

It's surprising how often this happens. Why do editors and expert reviewers, whose primary job is to vet manuscripts, miss major flaws that are so obvious to readers after the papers are published?

IN THE FALLOUT FROM the retracted stem cell papers, one factor invoked repeatedly was trust. "There are some people you 100% trust," stem cell scientist Hans Schöler told Science. For him, those people included three of the senior authors on the now-discredited work. Schöler reviewed an earlier, rejected version of these papers for a different journal, and he noted that reviewers are unlikely to go out of their way to check for misconduct in manuscripts that come from labs with a strong reputation for trustworthy work.

And the heads of those labs trust the junior scientists who carry out the actual experiments and data analysis. The lead junior scientist on the stem cell project, Haruko Obokata, was found guilty of misconduct, which included improperly splicing together different images, presenting different microscope pictures of one embryo as images of two distinct embryos, and some minor plagiarism. As a result, the research team's claim to have produced true stem cells with their new method fell apart. Several members of the team initially said that they had independently verified Obokata's work before publishing, but it turns out that nobody actually repeated the entire experiment. They all trusted Obokata's results.

Trust will always be indispensable in science, but perhaps scientists rely on it too much. The chief editor of The EMBO Journal told Nature News that roughly 20 percent of the manuscripts the journal receives are flagged for potentially problematic image manipulations, such as splicing two images together to make them look like one. In most cases, the changes weren't intended to be misleading, but the executive editor of another journal reported that its editors reject one out of every 100 initially accepted manuscripts for improperly manipulated images.

These numbers are shockingly high. They show that improper data handling is common in science. For anyone who has published a scientific paper, perhaps this shouldn't be too surprising. Selecting and preparing data collected over several months or even years, and fitting it into a clear, concise manuscript, is challenging, especially when a large team of researchers is involved. Mistakes happen and poor decisions are made, sometimes by the less-experienced scientists who actually plot the graphs and compose the figures. Editors and reviewers shouldn't simply take it on trust that their colleagues have done the right thing.

Most journals have implemented routine checks for image problems and plagiarism, but these checks have their limits. In an editorial accompanying last week's retractions, Nature's editors argued that, in spite of some errors in the vetting process, "we and the referees could not have detected the problems that fatally undermined the papers. The referees’ rigorous reports quite rightly took on trust what was presented in the papers." But if that's true, how could online commenters spot the flaws so quickly?

THIS PART OF THE retraction story is actually good news. Here, post-publication review, catalyzed by social media, worked as it should. Evaluating research after it's been published has, of course, always been a crucial element of science. Scientists will challenge published results in letters to journals and arguments at conferences. But those are typically solo efforts by established scientists. Social media and online discussion forums are changing that: they make it easier for junior scientists to participate, let readers compare notes, and, most importantly, provide a public space that is not under the control of journal editors and conference organizers. Surprisingly high-level critiques of big papers happen on Twitter each week. Discussions on PubPeer have played a big role in several high-profile corrections and retractions. And the National Institutes of Health sees enough promise in the social Web to invest in its own forum. These forums are a boon to science, because the crowd-sourced reviews of published research papers catch flaws that even the most careful editors and pre-publication reviewers miss.

Unfortunately, much of the scientific community is still very skeptical about the value of people criticizing research on the Internet. The most common complaints I hear are about time and quality control. Busy, serious scientists don't have time to waste on Twitter or message boards, where any unhinged idiot with an Internet connection can rage away against a highly technical paper that he doesn't get. The National Institutes of Health clearly shares some of this concern, and it has established eligibility requirements for participation in its forum. These concerns are understandable, but to those of us who have gone ahead and joined these online communities, it's clear that they work. And as more scientists figure out how to integrate them into their professional lives, post-publication review will only get better.

Peer-review is based on trust, but as the international scientific community grows, scientists won't spend their careers in the small, trusted networks of known colleagues that earlier generations of researchers were used to. Journals and reviewers need to step up their efforts to check for misconduct, but inevitably, papers with major problems will get through. Crowd-sourced, post-publication review through social media is an effective, publicly open way for science to stay trustworthy.
