Science needs to diversify and take more risks to achieve its maximum potential, researchers argue.
By Nathan Collins
A scientist examining cells in a 96-well plate. (Photo: Dan Kitwood/Getty Images/Cancer Research UK)
In an era awash in data, scientists have begun to analyze something they’ve never really looked at before: science itself. Abstract though that may sound, the science of science could have an oddly practical application, at least in theory—namely, providing funding agencies like the National Science Foundation with a better idea of which research proposals will work and which won’t. That objective takes on special significance, what with the future of science in the United States decidedly uncertain—but it probably won’t work, a new essay argues. Indeed, insisting otherwise could hinder the progress of scientific research.
“If we want to make science more productive for society … then we should be using science itself to see how to achieve that goal,” says Aaron Clauset. Unfortunately, he and co-authors Daniel Larremore and Roberta Sinatra argue, our hopes for predicting the future of science do not necessarily match our abilities.
“Science is very unpredictable,” Clauset says, at least in some respects. The distribution of citations, for example, turns out to be quite predictable: It’s a power law, meaning only a handful of papers get cited thousands (or even hundreds of thousands) of times, while the vast majority are cited only a few times, if at all. To be blunt: Most published research has little to no influence on the future of science.
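To make that concrete, here's a minimal sketch of what a power-law-like citation distribution looks like in practice. The exponent and sample size are assumptions chosen purely for illustration, not values fitted to real citation data:

```python
import random

random.seed(42)

# Draw hypothetical "citation counts" from a heavy-tailed Pareto
# distribution, a common stand-in for a power law. The exponent
# alpha = 1.5 is an assumed value for illustration only.
citations = [int(random.paretovariate(1.5)) for _ in range(10_000)]
citations.sort(reverse=True)

top_1_percent = sum(citations[:100])  # citations held by the 100 most-cited papers
total = sum(citations)

print("most-cited paper:", citations[0])
print("median paper:", citations[len(citations) // 2])
print(f"share of all citations held by the top 1%: {top_1_percent / total:.0%}")
```

Running this, the median simulated paper collects only a citation or two while a handful of papers dominate the totals, mirroring the essay's point that most published work has little measurable influence.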
Another predictable—maybe deceptively predictable—feature of science: “Chances are, your best-cited paper happens early in your career,” Clauset says. The usual (but rarely tested) explanation is that young people are just smarter. That's wrong; really, they just publish more than older researchers do. While that means more researchers will hit upon their most influential ideas when they’re young, some won’t do so until later in their careers, and there’s no way to know who’s the next wunderkind or even which scientific endeavor will pan out.
Not that there aren’t people trying to sort it out. With the explosion of data on scientific research, there are those who try to predict which projects will prove successful. There’s even a service now that aims to predict a study’s importance before it’s published—with the idea that reviewers could incorporate that knowledge into their reviews.
But a service like that could be dangerous, Clauset says. Such predictions are based on what a scientist has done in the past, especially the recent past, meaning that whether a paper gets published or whom the National Science Foundation decides to fund depends in part on whether a researcher has already published something influential. The assumption underlying those choices, however, is that people who’ve already done influential work will continue to do so—and that flies in the face of what we actually know about scientific progress.
Worse, algorithms of that sort could create feedback loops that hurt young scientists, researchers from underrepresented groups, and those with high-risk but potentially high-reward ideas. In fact, there’s already been a trend toward funding lower-risk projects, which Clauset likens to an overly safe investment portfolio. Just as in investing, the better alternative is to diversify: Weed out the proposals that don’t meet certain basic requirements, then pick at random which of the rest to fund.
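That screen-then-randomize idea is sometimes called a modified lottery, and it is simple enough to sketch in a few lines. Everything here is hypothetical—the function name, the "soundness" score, and the threshold are illustrative assumptions, not any agency's actual procedure:

```python
import random

def fund_by_modified_lottery(proposals, meets_bar, budget, seed=None):
    """Sketch of a modified funding lottery: screen proposals against
    basic requirements, then fund a random subset of those that pass.
    All names and parameters here are hypothetical."""
    rng = random.Random(seed)
    qualified = [p for p in proposals if meets_bar(p)]  # the basic screen
    rng.shuffle(qualified)                              # randomize, not rank
    return qualified[:budget]

# Hypothetical usage: fund 2 proposals scoring at least 7/10 on basic
# soundness, chosen at random rather than by predicted future impact.
proposals = [{"id": i, "soundness": s} for i, s in enumerate([9, 4, 8, 7, 6, 10])]
funded = fund_by_modified_lottery(
    proposals, lambda p: p["soundness"] >= 7, budget=2, seed=1
)
print([p["id"] for p in funded])
```

The design choice is the point: once a proposal clears the bar, the lottery deliberately throws away any further ranking, which is exactly what breaks the feedback loop of rewarding past influence.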
“If we accept the fact that science is inherently unpredictable, it will open up room for individuals to be different from each other,” Clauset says. “If we buy into this, it suggests an entirely different approach to funding science”—one that might be better not just for scientists, but for all of us.