Most of us who take a medication expect our doctor to prescribe it based on evidence. But it turns out that basic assumption is often incorrect.
In fact, many clinical trials of medical treatments—particularly negative ones—never make it to publication in the academic journals that doctors consult to make medical decisions and that the media draw on in their health reporting. According to a 2014 systematic review in PLoS, more than half of trial results are never published, and those that are published are three times more likely to report positive results than negative or null ones.
“I think your average consumer thinks that their treatment is based on data or research—that it’s odd that it is not,” says Dr. Kay Dickersin, the director of the Center for Clinical Trials at Johns Hopkins Bloomberg School of Public Health.
Even studies that are published may over-emphasize positive results—a kind of spin we are conditioned to expect from politicians but not from clinical researchers. All of these practices, along with many other variations on misleading reporting, are known as publication bias, and they can seriously skew the evidence doctors and patients use to make health decisions.
Beyond misleading the public and doctors, publication bias has a whole cascade of other negative effects. It betrays the trust of the patients who participate in clinical trials. And it biases the findings of systematic reviews, which analyze all available research on a medical treatment. Indeed, systematic reviews are conducted precisely to do what publication bias prevents: provide the most accurate information about how effective and safe a treatment is based on all of the available evidence.
“We cannot know the true effects of the medicines we prescribe if we do not have access to all the information,” Dr. Ben Goldacre, a physician and science writer, said in a 2012 TEDMED talk, which became an opening salvo for AllTrials, a science-transparency group he founded to go after this problem.
In recent years, a variety of governmental and non-governmental groups have formed or stepped up efforts to bring transparency to medical research. What remains to be seen is whether these efforts can crack a problem that has persisted for decades.
TAMIFLU AND ANTIDEPRESSANTS
The story of Tamiflu is perhaps one of the most headline-grabbing cases of publication bias. The anti-influenza drug, also known as oseltamivir—along with a similar drug called zanamivir, marketed as Relenza—came under scrutiny by the Cochrane Collaboration, an independent NGO that works to acquire data to conduct accurate systematic reviews. After a drawn-out battle for the regulatory documents that formed the basis for approval of the two drugs, Cochrane “came to the conclusion that there were substantial problems with the design, conduct, reporting and availability of information from many of the trials,” according to a statement published last year. Cochrane concluded from its analysis of the trials that the drugs did little beyond shortening the duration of flu symptoms by half a day. The report called into question the billions of dollars that governments, including the United States, have spent stockpiling Tamiflu in preparation for a flu outbreak, and highlighted the lack of easy access to important regulatory data.
In another widely covered story, a 2008 New England Journal of Medicine study of 74 Food and Drug Administration-registered trials of a dozen popular antidepressants, like Prozac, Zoloft, and Paxil, found that 94 percent of those published in the medical literature were positive. But when the researchers filed Freedom of Information Act requests for FDA review documents, they found that a good chunk of those trials had never been published. Of 33 trials the FDA had judged to have negative or questionable results, 22 were not published, and 11 were published in a way that conveyed positive results. (One unpublished trial was positive.) The study was led by Dr. Erick Turner, a former FDA medical officer who had begun to question the veracity of the medical literature while working on the drug approval process at the agency.
In 2012, GlaxoSmithKline, the maker of Paxil, even pleaded guilty and paid $3 billion to settle fraud charges, in part for concealing negative information about the drug’s effects on children and teens.
Both the Tamiflu and the antidepressant studies have faced criticism, as studies on hot-button issues often do. But they have drawn attention to the problem of withheld studies—whether they are withheld intentionally, out of ignorance, or because of an inability to get them published. Meanwhile, new revelations continue to document cases in which potentially negative information about medical treatments was withheld from the public at various stages of the clinical trial process.
“If you start to dig down ... you sort of wonder what this is like for every drug. Is this really a problem across all classes of all drugs? What can we really believe? And very quickly you’re sort of down a rabbit hole,” says Dr. Joseph Ross, an associate professor of internal medicine at Yale University.
As the problem has become more blatant, the U.S. government has tried to catch up by passing regulations requiring more transparency. In 1997, Congress passed a law requiring that all trials file public information at their outset. In 2000, the National Institutes of Health launched a website called ClinicalTrials.gov where this information would be made available. “Before clinical trial registration, no editor could have known what data was being collected as part of a trial,” Ross says.
In theory, academic journals could now use the registry to confirm the claims of an article they were planning to publish—to make sure a trial reported on what it originally set out to measure.
In 2007, Congress went even further, passing the FDA Amendments Act, which mandated reporting final study results of a drug, biological product, or device to ClinicalTrials.gov within a year after the trial concluded. The rules apply to drugs that are being studied, manufactured, or seeking new drug status in the U.S.
Today, over 178,000 clinical trials are registered in ClinicalTrials.gov—the largest registry in the world—and 15,000 report results, according to the National Institutes of Health. To the credit of the 2007 law, the number of registered trials increased significantly in the three years after its passage compared with the three years before, according to a 2012 study in JAMA, and the number of missing data elements declined overall.
Dickersin, an early advocate for trial registries, believes ClinicalTrials.gov has provided a good picture of where the problems are. “It’s clear that there is failure to report,” she says. For instance, fewer than half of registered studies made it to journal publication, according to a study published in 2012 by Ross and colleagues in the British Medical Journal. Another BMJ study, from 2013, found that nearly 30 percent of trials with at least 500 participants registered in ClinicalTrials.gov remained unpublished three years after they were completed.
ClinicalTrials.gov may also be providing a check on spin. A study published last year in JAMA found that cardiovascular trials that had registered were less likely to report positive findings than those that had not. And according to a 2013 study in PLoS, “serious adverse events” were reported only 63 percent of the time in journal articles, compared to 99 percent in the registry.
And studies published last year in Annals of Internal Medicine and JAMA found more accurate information in ClinicalTrials.gov than published papers. “This is a problem,” says Dr. Philippe Ravaud, director of the Centre of Epidemiology at the Hotel-Dieu in Paris, adjunct professor of epidemiology at Columbia University’s Mailman School of Public Health, and the senior author of the PLoS study. “It questions the narrative form of published articles.” (More on that later.)
Still, many trials delay reporting on ClinicalTrials.gov. Nearly 80 percent of trials had not reported their results within a year of concluding, according to a 2011 study in the British Medical Journal. And only 11 percent of obstetric studies completed more than two years earlier had reported their results, according to a 2014 study.
The lateness points to the lack of enforcement of the rules for reporting to ClinicalTrials.gov. Several sources interviewed for this article say they are not aware of the FDA ever fining an organization for failing to report trial results, even though the agency is authorized to collect civil penalties for violation of the regulations.
One deterrent to posting on ClinicalTrials.gov may be that the website is challenging to use. A 2011 study found it takes about 38 hours to submit basic results on the site, and an additional 22 hours to collect the data and information required to register. Still, one wonders whether the studies that go unreported skew negative.
Even a little enforcement—such as an email reminder—could improve reporting. Ravaud and his team put this to the test by sending emails to investigators in 190 studies that had not posted results on ClinicalTrials.gov. The researchers framed the reminders as surveys, notifying recipients of their lateness and asking why they had not posted their results, and compared them with a control group that received no emails. Three months after the emails went out, there was little difference in the number of studies posted by the two groups, but by six months, more investigators in the intervention group had posted their results. The authors noted that the message might be more powerful if it came from regulators and threatened a fine or other sanction.
TARGETING THE JOURNALS
Because journals are the gateway through which medical research is publicized, some experts believe they are the best hope for cracking down on publication bias. “The best enforcement is really going to be the journals refusing to publish,” Dickersin says.
The goal of the international EQUATOR network, short for “enhancing quality and transparency of health research,” is to improve the standards of what is published in medical journals. EQUATOR helps journals and medical researchers use what are called reporting guidelines in the writing and editorial process. The guidelines, created for many different kinds of studies, are written by experts in study design.
“We try to improve the quality of reporting after the submission of the papers. We ask editors to check if the papers follow the reporting guidelines,” Ravaud says.
While most journals endorse the guidelines, only a few require authors to submit a checklist confirming they have met them. “If a journal is going to be really tough, they’ve actually got to pay a technical editor to do that. Some people say, can peer reviewers do that? But peer reviewers are not paid,” says Dr. Elizabeth Wager, who consults for editors, scientists, and writers on medical publishing and is a visiting professor at the University of Split School of Medicine in Croatia.
Wager suggests that journals require articles to follow a highly structured template, similar to trial registry requirements, though she acknowledges it would not be popular with academics. “I think [journals] know authors don’t like that. [Authors have] got a funny idea that academic writing should be like creative writing,” she says.
TARGETING THE INVESTIGATORS
Dr. Iveta Simera, who heads program development for EQUATOR based at the Centre for Statistics in Medicine at Oxford University, believes the push for accuracy and transparency should take place at research institutions. “At the end it’s researchers, scientists who are ultimately responsible for what they produce. You can say: editors, peer reviewers, they should spot the mistake. But it’s the manuscript that should be already good enough that things are not missing,” she says.
Ravaud, director of the French EQUATOR Centre, notes that to ensure better publications, “we have to move to be able to intervene during the process of writing the first draft of the manuscript.”
The realities of the current incentive structure to publish positive results in top journals make spending time improving manuscripts a tall order. Academics may be at work on multiple studies as well as trying to write new grants. It may not make sense for them to spend their time trying to publish a null or negative study when positive studies are more likely to get published in journals. “People are rewarded for publishing in well known, high impact journals, not for producing well designed, well reported, well conducted papers,” Simera acknowledges. “Competitiveness in research is rising. People are rushing a lot more.”
Sometimes, Wager points out, medical researchers are also ignorant, especially those who may not have been trained in a discipline like epidemiology that emphasizes study design. “I do a lot of training with doctors and it surprises me how unaware they are of reporting guidelines,” she says.
Universities lack a single compliance office that can guide medical academics, some of whom may not be trained in study design, Ross says. “No one has the resources to do it, but academics are worse off.”
INCENTIVIZING DATA SHARING
Clinical trial registries require reporting of study results, but a large portion of the data from clinical trials is never published or made available. Concerns about publication bias, among other things, have driven a movement toward data sharing. “Almost all other professions share a lot more data under much more liberal circumstances than we do,” said Dr. Andrew Vickers, an attending research methodologist at Memorial Sloan Kettering Cancer Center, during a Columbia University Epidemiology Scientific Symposium about health research outcomes in February.
There are many barriers to sharing clinical trial data, such as issues surrounding the privacy of patient health information and the extensive technology infrastructure it may require. But as suggested by an Institute of Medicine report released in January, clinical trial data sharing is the future. The report outlines a framework for developing “a culture, infrastructure, and policies” to foster data sharing among the multiple stakeholders involved. “We think responsible sharing of clinical trial data will advance the science that serves as the foundation of good clinical care,” said Dr. Bernard Lo, president of the Greenwall Foundation and the chair of the committee that published the report, at a press briefing.
The movement toward sharing data will be a big undertaking for many and is not going to solve the publication bias issue overnight. “Moving from a sharing-optional to a sharing-required environment is a fundamental change that requires modifying a complex ecosystem of incentives and controls involving a network of industry, academia, regulators, journals, funders, providers, and patients. Changing any element affects all of them,” Dr. Steven N. Goodman, a professor of medicine and health research at Stanford who was on the report committee, wrote in the Annals of Internal Medicine.
Goldacre of AllTrials refers to journal guidelines and trial registries as “fake fixes,” a term he used in his 2012 TEDMED talk and still stands by today.
“There’s still no routine audit of whether registration and reporting are enforced, so there’s no accountability, and no way of knowing the levels of compliance,” Goldacre wrote in an email in January. He also points out that ClinicalTrials.gov only requires registration of trials that were ongoing during or after the 2007 FDA Amendments Act was passed.
Goldacre doesn’t have much faith that stepped-up enforcement of registry requirements will happen anytime soon. So AllTrials has tried to “take the bull by the horns” and directly audit companies’ public statements and actions. The group plans to publish the results.
“That way,” he writes, “doctors, patients, researchers, journalists and policymakers can all see for themselves who are the worst offenders, but also, crucially, who is showing leadership.”