Many people, whether they realize it or not, make personal health decisions based on the studies that appear on their social media feeds. But social media can’t always be trusted: the research it surfaces is rarely vetted properly, and that can lead to dangerous results.
In 2013 and 2014, following reports of two academic studies linking statins (drugs that lower cholesterol) to various adverse effects, about 200,000 patients in the United Kingdom stopped taking their statin-based medications. But that research turned out to be flawed, and the researchers behind the studies eventually retracted their conclusions. Their causal inferences were, upon further investigation, overstated. But the damage endured: rates of patients quitting the medication did not level off for another six months.
One might assume this example is an isolated case. But a systematic review recently published in PLoS One by a team of scientists suggests otherwise. The researchers studied how accurately the 50 most-shared health-related academic studies of 2015 were characterized in articles that circulated on Facebook and Twitter.
Throughout the information chain, they found consistent overstatement of causation. Thirty-four percent of academic studies and 48 percent of the media articles about those studies “used language that reviewers considered too strong for the strength of causal inference,” thereby overstating the connection between, for example, statins and adverse effects. Fifty-eight percent of media articles, moreover, “inaccurately reported the question, results, intervention, or population of the academic study.”
Acknowledging that their sample, all of it having to do with health claims, may not reflect academic and media articles generally, the authors nonetheless found “a large disparity between the strength of language as presented to the research consumer and the underlying strength of causal inference among the studies most widely shared on social media.”
What to make of these findings? Well, they are not evidence of either “fake news” or “bad science”—the twin bogeymen that conspire to create the bombast infecting today’s political culture. Appropriately, the study’s authors explicitly urge readers to avoid both conclusions. Instead, the researchers encourage a closer look into the structural factors that foster the “selection and spin at each step on the pathway from research generation to research consumption.”
When it comes to the scientists, the authors suggest that, “because producing high-impact work is crucial for career continuity and advancement,” there may be incentives to overstate causal claims in academic write-ups. Noting that there are several factors “that can render studies inaccurate,” they cite “use of cherry-picked data and methods to achieve statistically-significant results” as high among them. (To avoid being accused of doing my own cherry-picking, I should also note that other factors include improper statistical methods and uncontrolled variables.) Given that 70 percent of the academic studies the reviewers examined had “low or very low strength of causal inference,” and only 6 percent had “high or very high strength of causal inference,” the motivation to gin up results might indeed be significant. Still, the authors stress, the academic articles they evaluated only “tended to slightly overstate” their causal case.
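The cherry-picking mechanism is easy to demonstrate. Below is a minimal sketch, entirely my own hypothetical simulation rather than anything from the study: a researcher tests 20 subgroups of pure statistical noise and reports a finding whenever any one of them comes out “significant.” The subgroup count, sample sizes, and crude z-test are all invented for illustration.

```python
import random
from statistics import mean, stdev

random.seed(1)

def significant(a, b):
    """Two-sample z-test at alpha = 0.05 (normal approximation)."""
    n = len(a)
    se = ((stdev(a) ** 2 + stdev(b) ** 2) / n) ** 0.5
    return abs(mean(a) - mean(b)) / se > 1.96

studies, subgroups, n = 2000, 20, 30  # hypothetical parameters
hits = 0
for _ in range(studies):
    # Pure noise: no subgroup has any real effect.
    if any(
        significant([random.gauss(0, 1) for _ in range(n)],
                    [random.gauss(0, 1) for _ in range(n)])
        for _ in range(subgroups)
    ):
        hits += 1

print(f"Noise-only studies with a 'significant' finding: {hits / studies:.0%}")
# Roughly 1 - 0.95**20, i.e. about two-thirds of studies, versus the
# nominal 5 percent false-positive rate of a single honest test.
```

Take enough looks at noise, in other words, and “significance” is all but guaranteed, which is precisely what makes cherry-picked data and methods so effective at producing publishable results.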
When it comes to journalists, the authors present more damning evidence of overstatement. They write that “media outlets may spin and encourage dissemination of eye-catching, potentially overstated, inaccurate, and/or misleading headlines in order to gain larger audience size”: basically, health-related clickbait. They question the compatibility of the media’s reliance on advertising revenue with “media rigor and objectivity,” and note how revenue sources “may incentivize the production of potentially inaccurate health news.” They cite two other relevant studies on the media’s distortion of health research, including one in which 50 percent of the studies made a causal claim that a meta-analysis of related research later refuted. So, overall, not the prettiest picture, with the journos bearing the brunt of the bad news.
What to do about this situation? While the article presents an admirably thorough overview of the problem of overstated links between health-related causes and effects, the authors have little to say about possible solutions. Although an email to the lead author asking about such solutions has yet to be answered, an obvious starting point is the communication nexus between scientists and journalists. In this study, the researchers do something unusual: They explicitly reach out to journalists with a table that helps them understand what conclusions a study’s data does and does not support. This seems like a reasonable form of outreach, and a good start.
But, generally speaking, scientists have traditionally seen journalists and the general public as part of the problem. According to a 2014 survey undertaken by the Pew Research Center, 84 percent of scientists surveyed agreed that the “public does not know much about science,” 79 percent thought “news reports don’t distinguish well-founded findings,” 52 percent believed “news media oversimplify findings,” and 49 percent thought “the public expects solutions too quickly.”
Addressing these numbers in 2015, Sense About Science, a U.K. non-profit that aims to enhance popular understanding of scientific research, asked, “whose responsibility is it to increase the accuracy of science reporting?”
The answer it gave was interesting. While granting that it is “clearly important for scientists to develop good lines of communication with journalists,” the organization went on to suggest that scientists should be going directly to the general public. “Sense About Science believes that scientists have a responsibility to communicate and share their research with the public,” writes Victoria Murphy, a spokesperson for the organization. It is a goal, moreover, consistent with the PLoS One review’s premise that “the pathway from evidence generation to consumption contains many steps which can lead to overstatement or misinformation.” The idea here, bluntly put, is to cut out one of those steps: journalism.
There are certainly pre-existing models for what this expert-public interaction could look like. Many consider the work of Neil deGrasse Tyson and Brian Greene to be exemplary cases of experts connecting directly with a lay audience. Likewise, Sabine Hossenfelder’s foray into online consulting for physics aficionados has enhanced her reputation as a scientist willing to connect with the public (if only a self-selecting group of physics geeks) without the assistance of journalism.
Resources such as Understanding Health Research, which allows consumers to be walked through studies by working scientists, demonstrate one way that scientists who do health-related research are trying to help a general audience evaluate the implications of academic studies. Whether these sorts of services will grow more popular over time is anyone’s guess, but the PLoS One review indicates that we need them. Meanwhile, as we go about getting our health news from articles posted to Facebook and Twitter, we’d be wise to take them with a dash of responsible skepticism.