How Do We Fix Bad Science? - Pacific Standard

A whistleblower's new study shows publicly calling out fraudulent research may lead to more corrections.
(Photo: Michal Durinik/Shutterstock)


In July 2012, Paul Brookes, an associate professor of anesthesiology at the University of Rochester, launched Science Fraud, a website where whistleblowing life scientists could anonymously submit and discuss suspicious research in their field. The site lasted just six months before Brookes was outed and the site was shut down under a bombardment of legal threats.

Because of the high volume of submissions, Brookes was left with 223 articles whose alleged problems he never had time to publish, close to the 274 papers the site actually managed to review. He realized this gave him a unique opportunity: while subjecting his peers to public scrutiny had angered many people, he could now test whether it also made for better science.


In its short run, Science Fraud contributed to a burgeoning enthusiasm for self-policing in the sciences, which has become increasingly easy thanks to blogging and social media. Little data exists, though, on whether self-policing actually leads to more corrections. Does airing science's dirty laundry compel journals and researchers to acknowledge bad data more than they would if contacted only in private?

That’s exactly what Brookes was able to find evidence for. Twenty-three percent of the problematic papers Science Fraud publicly discussed were later retracted or corrected by journals, he reported yesterday in PeerJ. But only 3.1 percent of the papers the website never had time to critique were similarly addressed, even though the journals, funders, and authors' institutions involved with those papers were still notified privately of the potential errors. Public exposure appears to have made flawed research roughly seven times more likely to be fixed.

Where is the scientific integrity and self-correction we idealize? Sadly, Brookes' report includes a handful of anonymous testimonials that highlight the politics that often lie behind scientific corrections, such as:

I reviewed a paper and found fabricated data. The journal rejected the paper, and subsequently it was published in a different journal with some problem data still present. The editor at the new journal knows about the previous rejection for reasons of data fabrication, but refuses to take up the matter with the authors unless I am willing to have my real name revealed as an accuser. I refused, because the lead author is on a panel that reviews my grant proposals.

New forums encouraging data sharing and post-publication scrutiny offer hope for improvement, according to Brookes, but many issues remain to be addressed as science's corrective system evolves. "The jury is still out on exactly what the best system is," he says in a press release, "who should be allowed to comment, will they be afforded anonymity, and of course who will pay for and police all this activity."