Animal Research Falls Short on Experimental Procedures

A new study suggests two in three animal studies don’t report taking even the most basic precautions against biased results, potentially rendering many of their findings unreliable.

For scientists to take their peers’ experimental research seriously, said research must meet some basic requirements—say, randomizing who’s in the treatment and control groups. But despite such ideals, a study out today in PLoS Biology reports, a surprisingly small number of experiments on animals actually follow those guidelines, calling into question a swath of research that often marks the first steps toward designing the latest medical treatments for use in humans.

Over the years, researchers have taken a number of measures to try to ensure their research is unbiased. In a drug trial, for example, doctors should choose at random who gets the drug and who gets a placebo, lest they bias the results; if the treatment group comprised healthier individuals to begin with, then they’ll most likely be healthier after treatment too. In that case, researchers could erroneously conclude a drug worked when in fact it had no effect. For similar reasons, the researchers assessing the outcomes should be blind to who got the treatment and who didn’t, and authors should always disclose any potential conflicts of interest in published research.

Those are the ideals, but researchers led by Malcolm Macleod, a professor of neurology and translational neuroscience at the University of Edinburgh, wondered how many studies actually measured up to such standards. Appropriately, the team started its investigation with a random sample of 2,000 papers from PubMed, the prominent medical and biological research database. Of the 146 reported experiments involving animals, only 20 percent of those that should have used random assignment actually reported doing so. Just three percent reported using blind assessments, and only 15 percent included conflict of interest statements.

Those rates have increased over time, but they still haven’t achieved satisfactory levels, the researchers report. For example, prior to 1978, fewer than one in 10 studies that should have been randomized actually were. That number is better now, but still too small: Between 2008 and 2012, just one in three studies that ought to have used random assignment actually reported doing so.

Perhaps more alarming, studies that met high reporting standards weren’t consistently more likely to land in top journals. Using a (non-random) sample of 2,671 papers indexed by the CAMARADES project, which collects medically focused experimental animal studies, Macleod and his colleagues found that papers reporting potential conflicts of interest were more likely to appear in the most-cited academic journals. Those that reported proper randomization protocols, meanwhile, were slightly less likely to show up in the same journals.

Still, there is hope. “A number of leading journals have taken steps that should over time improve the quality of the work they publish,” Macleod and his team write. “Such measures do not have to be expensive—Yordanov and colleagues have estimated that half of 142 clinical trials at high risk of bias could be improved at low or no cost, and the same may well be true for animal experiments.”

Quick Studies is an award-winning series that sheds light on new research and discoveries that change the way we look at the world.
