How Journalists Can Help Hold Scientists Accountable

Amid the so-called replication crisis, could better investigative reporting be the answer? Maybe it’s time for journalists and scientists to work more closely together.

Last May, when This American Life acknowledged that it had run a 23-minute-long segment premised on a fraudulent scientific study, America’s most respected radio journalists did something strange: They declined to apologize for the error. “Our original story was based on what was known at the time,” host Ira Glass explained in a blog post. “Obviously the facts have changed.”

It was a funny admission. Journalists typically don’t say that “facts change”; it is a journalist’s job to define and publicize facts. When a reporter gets hoodwinked by a source, she does not imply that something in the fabric of reality has shifted. She explains that she was tricked.

With science coverage, though, the situation seems to be different—which is why Glass’ remark, while unusually blunt, wasn’t actually wrong.

This American Life had been deceived by a political science researcher at the University of California–Los Angeles, Michael LaCour. His paper, based on falsified data, had slipped past peer review and landed in the pages of Science, the country’s most prestigious scientific journal. This American Life declined to comment for this article, explaining that they might return to the incident in a future episode. But it’s not hard to read the implicit what-could-we-do? shrug in Glass’ statement. Science had spoken. Science had changed its mind. “Obviously the facts have changed.”

The LaCour study, which focused on canvassers’ ability to change voters’ minds, was an especially subtle piece of fraud. It was hard to catch. LaCour had produced a result that was unusual, dramatic, optimistic, and, as Glass noted during the episode, different from 900 other similar papers that LaCour’s colleagues had reviewed. No journalists—as far as I can tell—went looking for aberrations; in the end, a couple of graduate students caught him after they tried to replicate his methods.


As various commentators have observed, there’s probably no field of journalism that’s less skeptical, less critical, less given to investigative work, and less independent of its sources than science reporting. At even the most respected publications, science journalists tend to position themselves as translators, rendering the technical language of scientific papers into summaries accessible to the public. The assumption is that the source text they’re translating—the original scientific research—comes to them as unimpeachable fact.

There is little about the scientific method that supports these broadly accepted journalistic assumptions. Science is messy, and scientists are imperfect. And as scientific communities deal with rising retraction rates, a reproducibility crisis, continued industry influence, new pressures on graduate students and postdoctoral fellows, a shaky peer review system, and a changing publication landscape, the need for watchdogs is greater than ever.

So why is most science journalism so uncritical? And what would it take to build an effective, responsible culture of investigative science journalism?

As a genre, science writing has an egalitarian, progressive flavor. All people, the thinking goes, should have access to the technical, elite world of the laboratory. The dominant narrative is one of progress—pioneers moving into a new world. A magazine like Scientific American does not sell practical information to its subscribers, who are unlikely to use intelligence about gluons and orangutan behavior in their day-to-day lives. What it sells is a kind of hope in the future.

That progressive spirit has been there from the early days of science journalism. In the United States, science reporting took its shape shortly after World War I, when the newspaper magnate Edward Scripps founded Science Service, a non-profit wire service that would deliver science coverage to American newspapers.

From the start, Science Service struggled to define itself: Was it in the business of public relations (PR) or journalism? “Scripps pondered whether [Science Service] should act as a press agent for the scientific associations or as an independent news service,” writes the historian Dorothy Nelkin. “While hoping to avoid simply disseminating propaganda, he chose the former role.” In its first iteration, the organization was called the American Society for the Dissemination of Science.

Science Service helped drive the growth of science journalism around the country. Today, most science coverage still follows a press-release model. Articles focus on individual studies as they come out. Reporters rarely return to research years down the line. Articles provide little context or criticism, and they usually frame the story within a larger narrative of human progress.

Deepening the resemblance to PR, scientific journals often provide embargoed versions of papers to select journalists ahead of publication. Among other things, that means that the journals control which outlets cover their research first.

Today, science journalists’ motivations “align very nicely with what the scientists themselves want, which is publicity for their work,” says Charles Seife, a veteran science reporter and a professor in New York University’s Science, Health, and Environmental Reporting Program. “This alignment creates this—almost collusion, that might even be unethical in other branches of journalism.” In short, more than reporters in other fields, science journalists see themselves as working in partnership with their sources.


Is that a bad thing? You might think of a top-notch science reporter as a great science teacher—one given a national platform to help people understand the natural world. As the prominent science writer Ed Yong put it, “I am a scientist first and a journalist second and my concern is far less for the prevalence of investigative journalism than it is for giving the public more and more opportunities to hear about science.” Yong wrote that in 2009, but he was responding to the same questions that bothered Scripps during the creation of Science Service 90 years earlier.

The publicity-journalism culture has vulnerabilities. In 2015, John Bohannon, a writer for Science magazine, and Gunter Frank, a German clinician, set out to demonstrate the low standards of science reporting by running a flashy study with awful methods, and then waiting to see who would cover it. Their study purported to show that chocolate aided weight loss. The methods failed to meet even the most basic standards of good nutrition research—they had a sample size of 15—but Bohannon and Frank found a for-profit journal willing to publish their findings. Then they sent out a press release. A number of publications, including Shape magazine and Europe’s largest daily newspaper, Bild, published the results. A year after Bohannon revealed that the study was a hoax, only one of those publications had retracted its coverage, he says.
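The statistical trick at the hoax’s core is easy to demonstrate. Measure enough outcomes in a tiny study, and chance alone will usually hand you at least one “significant” result. Here is a minimal simulation of that effect; the 18-outcome count and the group sizes are illustrative assumptions, not the hoax’s actual protocol:

```python
# Minimal sketch: two groups drawn from the SAME distribution (no real
# effect exists), tested on many outcomes at once. Parameters are assumed
# for illustration, not taken from Bohannon and Frank's study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n_trials, n_outcomes = 10_000, 18   # 18 outcomes is an illustrative guess

hits = 0
for _ in range(n_trials):
    group_a = rng.normal(size=(8, n_outcomes))        # "chocolate" group
    group_b = rng.normal(size=(7, n_outcomes))        # control group
    pvals = stats.ttest_ind(group_a, group_b).pvalue  # one t-test per outcome
    if (pvals < 0.05).any():   # does ANY outcome look "significant"?
        hits += 1

print(f"Studies with at least one p < 0.05: {hits / n_trials:.0%}")
```

Run it, and roughly 60 percent of the simulated studies “find” an effect even though none exists, which is why measuring many things in a 15-person trial all but guarantees a publishable headline.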

It’s true that, while some outlets did pick up the story, no mainstream science magazines or sections did. Bohannon, though, sees the stunt’s success as indicative of a larger trend. “Science reporters don’t approach their subject the way that most reporters do,” he says. “It just doesn’t occur to them that there’s a deeper truth that the polite veneer of their sources isn’t revealing.” Many of the scientists he interviews, Bohannon says, have “this expectation that I’m essentially an extension of their university’s press office.”

Covering science isn’t the same as covering, say, politics, of course. Politicians are competing for a limited pool of resources. They’re playing zero-sum games, and they have strong incentives to conceal and deceive. Scientists, at least in theory, have incentives that are aligned with those of journalists, and of the public. We all want to learn more about the universe. It’s a beautiful goal. Often, it’s the reality.

But approaching science as an exercise in purity, divorced from other incentives, Seife says, “ignores the fact that science doesn’t work perfectly, and people are humans. Science has politics. Science has money. Science has scandals. As with every other human endeavor where people gain power, prestige, or status through what they do, there’s going to be cheating, and there are going to be distortions, and there are going to be failures.”

Here’s the uncomfortable side of this story: A substantial portion—maybe the majority—of published scientific assertions are false.

In rare cases, that’s because of fraud or a serious error. The number of scientific papers retracted each year has increased by a factor of 10 since 2001.

But even accepted research methods, performed correctly, can yield false results. By nature, science is difficult, messy, and slow. The ways that researchers frame questions, process data, and choose which findings to publish can all favor results that are statistical aberrations—not reflections of physical reality. “There is increasing concern that most current published research findings are false,” wrote Stanford Medical School professor John Ioannidis in a widely cited 2005 paper.
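Ioannidis’ claim rests on simple arithmetic. In the paper’s framing, if R is the prior odds that a tested relationship is real, α the false-positive rate, and 1 − β the statistical power, then the chance that a “positive” finding is actually true, the positive predictive value, is:

$$\mathrm{PPV} = \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha}$$

Plug in numbers typical of exploratory research, say α = 0.05, power of 0.5, and one-in-ten prior odds (R = 0.1), and the PPV is 0.05/(0.05 + 0.05) = 0.5: a coin flip, before any bias or selective reporting is even counted.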

Social psychology is in the midst of a reproducibility crisis. Many landmark experiments don’t hold up under replication. In general, the peer review process is supposed to ensure rigor, but it’s an imperfect tool. “Scientists understand that peer review per se provides only a minimal assurance of quality, and that the public perception of peer review as a stamp of authentication is far from the truth,” wrote a former Nature editor in that journal in 2006.

At the same time, the conditions in which research takes place can incentivize scientists to cheat, to do sloppy research, or to exaggerate the significance of results. Simply put, science does have politics. There’s intense competition for funding, for faculty jobs, and for less tangible kinds of prestige.

Meanwhile, corporations spend small fortunes trying to influence academic researchers. In recent years, there has been a profusion of for-profit journals that either don’t use peer review at all or run a sham peer-review process, making it harder to establish the reliability of sources.

That’s an uncomfortable image of the scientific process—uncomfortable because it’s so out of step with popular presentations of scientific authority. Science magazines and sections rarely cover these issues.

Science reporters don’t usually look at research funding, nor do they critically evaluate the quality of the studies that they cover. Often, they lack the time or technical knowledge to dig into stories. In other cases, they may just be worried about challenging expert authority.

All communities require watchdogs, though. And while they are rare, promising models of investigative science journalism do exist.

These models can even involve collaborations between scientists and journalists. One distinctive approach has emerged at the BMJ. After an investigation that cast doubt on the effectiveness of the antiviral drug Tamiflu, the BMJ appointed its first investigations editor, Deborah Cohen, who had trained as a doctor before moving to journalism. In her role at the BMJ, she dug into the research-backed claims of sports drink makers, and she investigated the safety of a popular diabetes drug. To demonstrate how flimsy the British government’s regulation of new surgical implants had become, Cohen created a fake company, with a fake hip implant, and got it approved for medical use in the European Union.

Cohen’s pieces, which ran in the journal, were extensively sourced, and they were often accompanied by custom-designed, peer-reviewed research from scientists that tested her subjects’ claims. “I always, always work with statisticians, epidemiologists, clinicians that I trust,” Cohen says. “I really think there’s scope for journalists and academics to work closely together.” When Cohen dug into the shaky research behind sports drinks, an Oxford University team wrote a peer-reviewed analysis paper to accompany her reporting.

Cohen’s work is in the tradition of print magazine reporting. But blogging opens up new opportunities too. There’s a rich world of science blogs, written by and for scientists, that provide unusually public glimpses into intra-community debates. Blogs can also be tools for journalistic work. At Retraction Watch, started in 2010, Ivan Oransky and Adam Marcus track retracted papers across disciplines. Oransky describes “this ecosystem that … likes to paint the shiny, happy, new, novel, amazing breakthrough narrative onto everything. And retractions don’t fit into that narrative, because they’re about when things go wrong. And nobody likes admitting it.”

With the flexibility of the blog format, Oransky and Marcus have helped draw public attention to retractions. Taken as a whole, Retraction Watch functions as an investigative project into the nature and scope of scientific breakdowns. It’s also a service to scientists, offering a running database of withdrawn and corrected papers.

More traditional investigative reporting techniques apply here too. Take the use of anonymous sources. In order to check the quality of new studies, diligent science reporters will call up other people in the field and ask for their opinions on the research. In small, close-knit scientific communities, though, people have strong incentives to speak positively about their colleagues’ work. After all, the person you criticize in the New York Times today may be peer-reviewing a submission of yours tomorrow.

John Bohannon, the Science reporter, sometimes offers sources anonymity. The rarity of that practice, Bohannon says, “is an indicator of how different science journalism is from the rest of journalism.” In an essay for Nature, former BBC science correspondent Toby Murcott argued that journalists should be allowed to access anonymous comments from peer reviewers, which can give a more rounded view of the paper’s strengths, weaknesses, and significance in the field.

And then there’s the money. Science journalism usually focuses on the end result, and almost never refers to finances. But funding decisions affect everything from study design to the shape of entire research programs. Often, troubling data is sitting out in the open. Charles Seife, the NYU professor, has uncovered malfeasance by cross-referencing lists of federal grant recipients with lists of doctors who receive money from drug companies.
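That kind of cross-referencing is mechanically simple, which is part of the point. The sketch below shows the basic join; the file and column names are hypothetical stand-ins for the bulk exports that grant agencies and payment-disclosure databases publish, and a real investigation would need far more careful name matching:

```python
# Hypothetical sketch of the cross-referencing technique Seife describes.
# File names and columns are invented for illustration; real datasets
# (e.g., NIH RePORTER or CMS Open Payments exports) use other schemas.
import pandas as pd

grants = pd.read_csv("federal_grants.csv")    # pi_name, institution, project_id
payments = pd.read_csv("drug_payments.csv")   # physician_name, company, amount

# Crude normalization; trustworthy matching needs fuzzier logic
# (middle initials, suffixes, institution cross-checks) to avoid false hits.
grants["key"] = grants["pi_name"].str.strip().str.lower()
payments["key"] = payments["physician_name"].str.strip().str.lower()

# Inner join keeps only the people who appear on both lists.
overlap = grants.merge(payments, on="key", how="inner")
print(overlap.sort_values("amount", ascending=False)
             [["pi_name", "institution", "company", "amount"]]
             .head(20))
```

Each name on the resulting list is a lead, not a finding; the reporting starts, rather than ends, with the join.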

What’s at stake here, of course, isn’t just journalistic technique. It’s a particular stance toward scientific authority. In an era when science denial poses genuine risks to public health and the environment, it’s fair to ask whether challenging that authority so publicly is really a wise move.

“Scientists don’t like talking about [retraction]. Journals don’t like talking about it,” Oransky says. “They’re afraid it’ll lead to cutbacks in funding and people will use it as a political weapon—which, to be fair, they do.” But that’s “short-term thinking,” Oransky says. “The thing that’s worst for trust in a particular public enterprise, or particular human endeavor, is when people who do that thing, the people in the profession, keep saying that everything is fine, everything is great, and it turns out that they were either lying or not telling the whole truth.”

Does an institution’s strength come from a sense of omniscience? Or does it come from acknowledging its faults, and showing that it can address them, even as it produces useful results?

“I think that science is robust enough of a worldview and a method for truth-finding that you can beat it up as much as you want. It’s going to come out just fine,” Bohannon says. “You don’t have to tiptoe around that thing.”
