What Was Said
“Berries, Apples May Aid Weight Loss,” “Vitamin E Could Protect Against Liver Cancer,” “Eating Chocolate ‘Improves Brain Function’” — you’ve seen the headlines; maybe you’ve clicked on some of them. These stories often cite observational epidemiology studies that draw on one of a handful of large-scale surveys tracking people’s diets alongside outcomes like weight change.
The Problem With That
These data sets, such as the National Health and Nutrition Examination Survey and the Nurses’ Health Study, are routinely mined by hundreds of research teams. John Ioannidis, a physician who studies the reliability of research at Stanford University, thinks this approach produces too much fool’s gold. “You’re practically doomed to get false positives, if you run enough analyses,” Ioannidis says of so many scientists quarrying the same data. If 100 teams analyze the same large survey for associations between a single alleged superfood (Kale! Açai berries!) and weight loss, for example, the standard 5 percent significance threshold used in many scientific studies means that about 5 teams can be expected to find a positive result by chance alone, even if the food has no real effect.
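To make that arithmetic concrete, here is a minimal simulation sketch, not anyone’s actual analysis: it invents a survey of 1,000 people and 100 “food” variables that have no real link to weight change, then counts how many clear the conventional p < 0.05 bar anyway. The sample size, the variable names, and the use of Python with NumPy and SciPy are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: one made-up survey, 100 "food" variables with
# no true effect on weight change, each tested at the usual 0.05 level.
rng = np.random.default_rng(42)
n_people, n_foods, alpha = 1_000, 100, 0.05

weight_change = rng.normal(size=n_people)            # outcome: pure noise
food_intakes = rng.normal(size=(n_foods, n_people))  # 100 unrelated predictors

false_positives = 0
for food in food_intakes:
    r, p_value = stats.pearsonr(food, weight_change)
    if p_value < alpha:                               # "significant" by chance alone
        false_positives += 1

print(f"Chance 'discoveries': {false_positives} of {n_foods} foods")
# On average n_foods * alpha = 5 tests come out positive, and the chance of
# at least one false positive is 1 - (1 - alpha)**n_foods, about 99.4 percent.
```

Run it a few times and the count hovers around five, which is exactly the fool’s-gold rate Ioannidis is warning about.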
What to Do About It
Ioannidis proposes a bold solution. Instead of rewarding survey projects with funding for spawning hundreds of studies, stop funding one-nutrient observational epidemiology altogether. We’d be better off devoting more research dollars and time to fewer, better-designed studies, he argued in an op-ed published in the journal Obesity in April. It’s hard enough to figure out what works to lose weight and keep it off. Trimming away barely helpful, potentially flawed studies could make that journey a little easier.