Speaking fees and other financial ties to pharmaceutical companies—not just direct funding—could bias a drug trial’s results.
By Nathan Collins
(Photo: Joe Raedle/Getty Images)
It’s well-known that food and drug companies put pressure on researchers to produce work that acts to the industries’ advantage. In one of the more egregious examples in recent memory, sugar industry representatives paid Harvard University researchers the equivalent of $50,000 to write a review paper that minimized the effects of sugar on coronary heart disease. But industry representatives don’t have to directly fund a study tailored to their needs to get results they like, according to a new study—speaking fees, travel costs, and the like are probably enough.
“We found that more than half of principal investigators of RCTs [randomized controlled trials] of drugs had financial ties to the pharmaceutical industry and that financial ties were independently associated with positive clinical trial results even after we accounted for industry funding” of those investigators’ research, Oregon Health and Science University graduate student Rosa Ahn, San Francisco Veterans Affairs Medical Center researcher Salomeh Keyhani, and their colleagues write today in The BMJ.
“These findings may raise concerns about potential bias in the evidence base,” they add.
A number of studies in recent years have looked at the effect industry funding has on research, but, Ahn, Keyhani, and their team point out, there are other ways for drug companies to potentially influence researchers—by paying them to speak at events or to provide advice on their drugs, for example. Unlike research funding, those payments go directly into a researcher’s pocket, yet relatively few studies have looked at how such payments influence results.
To find out, the researchers first chose 195 drug studies at random from a search of the National Institutes of Health’s Medline database. From each study, they identified the principal investigators (PIs) who led it and determined whether its results were good news or bad for the drug’s manufacturer. Finally, they searched the publications themselves, along with Medline, Google, ProPublica’s Dollars for Doctors project, and Patent Office records, for financial ties between researchers and industry. Specifically, they focused on advising and consulting fees, honoraria, employee relationships, speaking fees, stock ownership, and travel and meal expenses.
Three in five studies declared some kind of financial tie somewhere in the report itself, the team found, while 68 percent actually had one—suggesting that at least 8 percent had a connection they chose not to disclose. But what’s really interesting is that, among studies that reached an industry-favorable conclusion, 76 percent had at least one PI with financial ties to the manufacturer of the drug under investigation. Among studies with unfavorable results, that number was just 49 percent. That result held up even when the manufacturer didn’t fund the studies of its drugs—PIs’ financial ties alone were enough.
The results don’t mean drug companies are bribing researchers to sway studies’ outcomes, Ahn, Keyhani, and their colleagues point out. For one thing, their study’s design can only establish associations, not causes. It’s also hard to disentangle the underlying purpose of, say, a speaking engagement—it could be an attempt to influence someone, or it could be one friend (the researcher) doing a favor for another (the industry figure).
Either way, it’s a problem. “Given the importance of industry and academic collaboration in advancing the development of new treatments, more thought needs to be given to the roles that investigators, policy makers, and journal editors can play in ensuring the credibility of the evidence base,” the team writes.