Crowds are excellent at finding answers to problems, but we still need scientists to work at defining the questions.
By Michael White
We tend to believe that progress in science is achieved by brilliant minds working on their own. Think of the popular image of Albert Einstein: a disheveled, iconoclastic genius who overturned centuries of physics with the sheer power of his exceptional brain. The history of science, as it is most often told, is a history of standouts: Galileo Galilei, Isaac Newton, Charles Darwin, and James Watson and Francis Crick. Our most famous scientific prizes, the Nobels, reinforce this by singling out exceptional individuals who make major discoveries.
This popular image of science is a caricature, of course. Science has always been a community activity, and even geniuses like Einstein exchange ideas and criticism within a circle of colleagues. Yet science is often practiced as if the myth of the lone genius were true. Scientists at universities and research institutes often work as individuals rather than teams, with an enormous amount of autonomy to set their own intellectual agenda. They choose what research questions to pursue, and how to pursue them, without being dictated to by their department chairs. In other words, science is organized on the premise that progress happens when smart individuals are left alone to chart their own path toward discovery.
But recently, an alternate way of organizing science has become increasingly popular: crowdsourcing. Inspired by trends in social media, many institutions that sponsor science are turning to the wisdom of crowds, rather than relying on the brilliance of autonomous individuals. According to a recent paper by a group of computational biologists, led by Julio Saez-Rodriguez, of RWTH Aachen University in Germany, and Gustavo Stolovitzky, of IBM’s Watson Research Center, crowdsourcing “leverages the prompt feedback, ease of access, communication and a participatory culture that is fuelled [sic] by the Internet,” to make communities — and not just individuals — the engines of innovation.
Crowdsourcing in science works much as it does elsewhere in society, from Wikipedia to election-betting markets. As Saez-Rodriguez and his colleagues put it, “crowdsourcing combines the bottom-up creative intelligence of a community that volunteers solutions with the top-down management of an organization that poses the problem.” The premise behind crowdsourcing scientific and technological problems is that a community of volunteers can explore a much larger range of possible solutions than a handful of individuals, no matter how brilliant. Organizing such large volunteer communities wasn’t feasible before the modern Internet. But now that it is easy to share large amounts of data online, crowdsourcing is a viable way to tackle challenging scientific problems.
Scientific data today is not only easier to share — there is also a lot more of it. Saez-Rodriguez and his colleagues argue that this means crowdsourcing will become increasingly important in science, since “it is highly likely that the methods and breakthroughs that get the most useful signal from big data may reside with groups other than the data generators or the most famous and best published groups in the field.” This is one reason why crowdsourcing can be successful: Rather than make assumptions about which experts are best suited to solve a problem, crowdsourcing brings in “a wide range of sources without a priori expectations as to who may be best positioned to solve the problem.”
In fact, sometimes non-experts are the best option. Some researchers have “gamified” their work, turning their research problems into a game and using the brainpower of thousands of non-scientist volunteers to find solutions. To understand why certain critical biological molecules fold up the way they do, for example, Stanford University’s Rhiju Das created Eterna, an online puzzle-solving game that essentially outsources a complex problem in computational biology to 150,000 non-scientist players. The two million hours of human brain power contributed by Eterna’s players have turned out to be more effective at finding principles that explain molecular folding than the best expert-designed computer algorithms.
More often, science is crowdsourced to expert communities in the form of so-called challenges, in which an organization poses a problem and offers a prize for the best solution. Netflix famously took this approach to improve its film recommendations: In 2009, it awarded $1 million to a team of professional researchers who came up with the best algorithm to predict how users would rate the films they watched.
Challenge prizes are, in a sense, the mirror image of the Nobel prizes. The Nobels reward individual excellence by recognizing discoveries that, in hindsight, turn out to be pathbreaking, and almost always go to eminent researchers who are leaders in their field. Challenge prizes, on the other hand, reward solutions to specific problems that are defined not by scientists themselves, but by the organization offering the prize. Anyone who comes up with the best solution can win, whether or not they are leading experts.
For the past five years, the federal government has turned to challenges as a way to efficiently find solutions to important scientific and technological problems. Under the mantra of “open innovation,” the government established an online platform in 2010 called challenge.gov, where any government agency can pose a problem and offer a reward for the best solution. The site has hosted over 700 challenges from more than 80 different federal agencies, and more than $220 million in prize money has been awarded since the site went online. Currently, the Bureau of Reclamation is looking for ways to prevent rodents from burrowing into levees, the Internal Revenue Service would like someone to “reimagine the taxpayer experience of the future,” and the National Aeronautics and Space Administration is offering $500,000 to anyone who can make lab-grown human organ tissue that survives for “30 calendar days.”
Crowdsourcing science offers important advantages: It encourages sharing, builds communities, and provides incentives to solve well-defined problems. But crowdsourcing also poses risks. While crowdsourced challenges may make science more open and participatory, they also define scientific problems from the top down. This means that the scientific agenda reflects the needs of the businesses, government agencies, and philanthropic foundations that sponsor the challenges, rather than what scientists themselves see as the most important scientific problems.
There is some truth to the myth that science is driven by brilliant individuals: The most successful scientists make discoveries not only by finding answers, but by posing new questions, or reframing old ones in innovative ways. This is why Einstein succeeded where others failed. The wisdom of crowds can solve important problems, but it often takes individual scientists to define those problems in the first place.