Science and scientists are often stereotyped as rigorous skeptics, but the truth is that blind trust plays a big role in scientific research as it works its way from data collection to publication. And when you base an increasingly competitive profession on blind trust, you have conditions that are ripe for cheating scientists.
In science, skepticism is based on a presumption that new claims are probably wrong, but not dishonest. However, opportunities for dishonesty abound. The professor running the lab trusts that the graduate student collecting the data didn't fake the results. Peer reviewers for scientific journals trust that authors really did perform the methods described in the manuscript. And working scientists confronted with a weekly deluge of new published work trust that what they read is not intended to deliberately deceive.
It's shockingly easy to cheat in science, actually. On January 12, 2006, the journal Science retracted two high-profile papers from the lab of noted Korean stem cell biologist Woo Suk Hwang. Less than two years before, Hwang claimed to have made stem cells from cloned human embryos, a difficult feat that had previously only been achieved with livestock. He faked his data, lied about his methods, and managed to get the results published in one of the world's hottest journals, twice. Yet Hwang was actually a small-timer, as far as cheating scientists go. In 2002 and 2003, the world's most popular journals, Nature and Science, retracted a combined total of 16 fraudulent papers co-authored by the award-winning German physicist Jan Hendrik Schön. Neither Schön's co-authors nor the peer reviewers caught the problems. In 2011, 69 papers by the German clinician Joachim Boldt and 28 papers by the Japanese virologist Naoki Mori were retracted for fraud. Fabricating your data and lying about your methods don't require sophisticated skills, and until you are actually accused of fraud, nobody is going to come into your lab and test your equipment or inspect your lab notebooks.
While fraudulent papers appear to be rare in science (slightly more than 1,000 out of more than 21 million published papers have been retracted for fraud since 1973), the existence of cheating scientists is a puzzling phenomenon because the payoff is so limited compared to other species of fraud, and the odds of being successful over even the medium-term are low. Cheating scientists are not stashing loot in offshore bank accounts, and a recent study found that most papers retracted for fraud are caught in less than three years.
So what motivates scientists to cheat? One commonly cited reason is that the incentive structure of modern science is out of whack. The scientists Arturo Casadevall and Ferric Fang have criticized the winner-take-all arrangement of today's science community. Curiosity may drive people into science, they write, but then reality quickly sinks in. "To be successful, today's scientists must often be self-promoting entrepreneurs whose work is driven not only by curiosity but by personal ambition, political concerns, and quests for funding." When the competition becomes too fierce, the result is "a compelling incentive for scientific misconduct ranging from egregious fraud to more subtle transgressions, such as selective reporting of results." It doesn't help that the science profession shows an increasing resemblance to a pyramid scheme.
The pressures of career survival are certainly real, but I believe there are deeper forces at work when scientists cheat. Scientists are required to carry out a delicate balancing act between imagination and reality. On the one hand, all of our scientific theories, principles, and concepts are, like great art, imaginative mental constructs: creative human efforts to make sense of our world. On the other hand, as the physicist Richard Feynman put it, the scientific imagination needs to work in a straitjacket. Scientists need to test the products of their imagination against the data, and then choose how to modify their ideas.
The trouble happens when good ideas turn out to be wrong. Based on a compelling idea, a scientist may have invested months or years of work, given presentations to colleagues or even the public, and published promising results in a high-profile journal. But further experiments don't turn out as expected, and it becomes clear that the cherished idea that was too good not to be true now needs to be abandoned. At this point, as physicist Bob Park described it in Voodoo Science: The Road From Foolishness to Fraud, scientists reach a fork in the road. "In one direction lies the admission that they may have been mistaken. ... In the other direction is denial. ... Few if any scientists are so clever or so lucky that they will not come to such a fork in their career." Admitting mistakes is one of the most important skills a scientist learns.
Park claims that "the scientific process transcends the human failings of individual scientists," and in the long run that's true. Many believe that this is because science is based on an ethic of "trust but verify." The Economist recently argued that modern science is running into trouble because these days "scientists are doing too much trusting and not enough verifying." But in fact scientists have never been eager to spend much of their time and resources verifying each other's work. Self-policing is not what makes science succeed; the scientific process ultimately transcends human failings because one person's new idea or experimental results become the foundation of successful work in other labs, or it just proves to be a dead end. The mechanism of science can handle people being wrong, both honest mistakes and outright fraud, although mistakes and fraud can waste time and resources, put patients at risk, and, in high-profile cases, damage the public's trust in scientists. The scientific process will take care of itself; scientists just need to take care of the resources and trust they've been given.