Scientists, Not Politicians, Are the Biggest Threat to Science’s Credibility

Political activism for science is fine as long as science itself remains trustworthy.

Twice last month, thousands of people descended on Washington, D.C., to show their support for science. They came to stand up to the threat posed by the Trump administration, which has proposed draconian cuts to federal science programs, appointed an Environmental Protection Agency head who denies the scientific consensus on climate change, disbanded a commission to fix forensic science, floated a vaccine safety commission headed by a notorious vaccine “skeptic,” and threatened to withdraw the United States from the Paris Agreement on climate change. This new level of political activism on behalf of science has set off a debate over whether it might backfire by turning scientists into just another partisan group.

The concern that scientists will lose the public's trust if they engage in politics is almost certainly overblown. In an era when trust in almost all of our society's institutions has declined, trust in science has held steady. A recent Pew survey found that "public confidence in scientists stands out among the most stable of about 13 institutions" rated regularly since the 1970s. When asked which institutions are likely to act in the best interests of the public, respondents had the most confidence in the military, followed by scientists and medical scientists. Elected officials and business leaders ranked at the bottom.

Community members from Harvard University and the Massachusetts Institute of Technology rally before marching to the Boston Common for the Science March on April 22nd, 2017.

(Photo: Scott Eisen/Getty Images)

The public trusts science because it is obvious that science works. You don't have to work in a lab to see the results of science; they are apparent in our cars, our computers, and our cancer treatments.

But the corollary is that, to keep the public’s trust, scientists must continue to deliver on their promise to generate the kinds of reliable knowledge that lead to new technologies, better health, and more economic growth.

The question of whether scientists are publishing reliable research has been raised in recent years, not so much by politicians as by scientists themselves. A persistent narrative that science is failing to self-correct has been widely covered in the media and threatens to become entrenched. This narrative of trouble within science itself is the real threat to science's credibility. If the public stops believing that scientists are reliable, the result will be more damaging to our society's support for research than any of the Trump administration's alternative facts.

There is a lot more that scientists could do to ensure that research is reliable, according to a recent report by the National Academies of Sciences, Engineering, and Medicine. The report lays out what it considers "significant" threats to the integrity of science. Outright fraud is one of these threats, though it is relatively rare. More insidious and widespread are "detrimental research practices": behaviors that don't quite rise to the level of faking data but that, in the aggregate, corrode the credibility of science far more. Many of these practices have been widely covered in stories about science's so-called reproducibility crisis: Studies are often poorly designed; statistics are misused; trainees aren't adequately supervised; and, at many journals, peer review is poorly implemented.

Perhaps the most serious of these detrimental practices is ghostwriting, which really should be classified as outright fraud. A scientist, usually one who works for a company, makes significant contributions to a study but isn't listed as one of the authors. This hides the role of industry scientists in what is presented to the world as purely academic research. A notorious example is a 2001 study of the GlaxoSmithKline antidepressant Paxil, which reported that the drug was safe for use in teenagers and was published under the names of 22 prominent academic scientists. Not only did its conclusion turn out to be wrong, but a lawsuit showed that the company had hired other scientists to write the paper. It's unclear how common this practice is, but researchers who sign their names to ghostwritten papers are rarely punished by the academic institutions that employ them, the scientific journals that publish them, or the federal agencies that fund them.

A large crowd of protesters takes part in the March for Science in Los Angeles, California, on April 22nd, 2017.

(Photo: Mark Ralston/AFP/Getty Images)

For most scientists, signing their name to a ghostwritten paper would gnaw at their conscience, but that's not true of other, seemingly innocuous research practices listed in the National Academies report. For example, scientists often fail to save all of their data. Academic research labs, with their loose, non-hierarchical authority structures, can be surprisingly disorganized, and it's easy for old data to disappear. Once a study is published, the raw data often sits on a local server in a cryptically named folder, interpretable only by the graduate student who first generated it. The student eventually leaves, the server gets replaced, and the data is gone forever. Journals should require, as a condition of publication, that researchers deposit their data in freely accessible public repositories.

As the National Academies report argues, certain beneficial trends in science actually make poor research practices more of a threat than ever. Science is more collaborative, more global, and more reliant on big data. This means that work is often done by larger research teams, spread across multiple countries and composed of researchers from different specialties. They produce large data sets that are more difficult to store effectively. Often, no single person on a team can fully understand all of the analyses or methods of a study. Cultural differences among scientists from different countries can make it challenging to settle on a common understanding of what constitutes proper research ethics. And large data sets pose statistical and information-technology challenges that not all scientists are equipped to handle.

This means that, to preserve the integrity of science, researchers and their organizations need to police themselves more aggressively. They should embrace higher standards for statistical practice and data sharing, and they should press the institutions that employ them to investigate accusations of fraud thoroughly and transparently. Collaborations between academic and industry scientists are crucial for bringing the benefits of science to society, but these collaborations should be openly disclosed and carefully monitored by the scientists' employers and sponsors.

We’ve already seen thousands of people take to the streets to defend the integrity of science against political threats. Scientists, off the street and in the lab, need to be just as willing to defend their work from threats within their own community.
