In The Signal and the Noise, Nate Silver argued that, while statistical analysis was a game-changer in sports and politics, it was ultimately no substitute for wisdom gained through experience—mainly because we don’t actually understand things like baseball and presidential elections well enough to model them accurately. A new study, however, reaches the exact opposite conclusion: People making environmental management decisions typically make worse choices than statistical models would—even when the models are wrong.
The study “was born out of an existential crisis of mine,” Matthew Holden, a postdoctoral research fellow at the University of Queensland’s ARC Centre of Excellence for Environmental Decisions, writes in an email. He had been working on mathematical models aimed at helping others make better decisions about invasive species, organic agricultural pest control, and fisheries management. But, Holden writes, “there was always this little devil in the back of my mind saying, ‘Is what you’re doing really providing useful advice to managers, given that we know that your models are simplifications of the real world and hence wrong?’”
In other words, could intuition trump scientific models when it comes to complex ecological systems? To find out, Holden designed two commercial-fishery management games, one for herring and one for salmon, built on standard mathematical models and typical estimates of parameters such as the average number of offspring a salmon produces.
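The article doesn't reproduce the games' exact equations, but games like these are often built on a discrete-time logistic ("Schaefer-type") production model, in which the player sets a harvest each season and the remaining stock regrows toward a carrying capacity. The sketch below is a minimal illustration under that assumption; the growth rate, carrying capacity, and price are placeholder values, not the study's parameters.

```python
# A minimal fishery-harvest game loop, assuming discrete-time logistic
# (Schaefer-type) stock dynamics. All parameter values are illustrative
# placeholders, not the values used in Holden's games.

def step(stock, harvest, r=0.8, K=1000.0):
    """Advance the stock one season: remove the harvest, then regrow."""
    escapement = max(stock - harvest, 0.0)             # fish left after harvest
    growth = r * escapement * (1.0 - escapement / K)   # logistic surplus production
    return escapement + growth

def play(policy, seasons=20, stock=600.0, price=1.0):
    """Run one game, applying a harvest policy each season; return total profit."""
    profit = 0.0
    for _ in range(seasons):
        harvest = min(policy(stock), stock)  # can't catch more fish than exist
        profit += price * harvest
        stock = step(stock, harvest)
    return profit

# A naive fixed-quota player versus a player who always leaves 500 fish behind.
fixed_quota = lambda stock: 150.0
constant_escapement = lambda stock: max(stock - 500.0, 0.0)
print(play(fixed_quota), play(constant_escapement))
```

In this toy version the constant-escapement player out-earns the fixed-quota player, echoing the kind of rule that optimal-harvest theory tends to recommend for models of this form.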
Then, he pitted 198 undergraduate and graduate students taking biology courses with environmental management components at Cornell University and Ithaca College against a computer that knew the underlying mathematical models but not their parameter values, which it had to estimate from observations made during the game.
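The article doesn't say how the computer did its estimation, only that it learned from observations made during play. Continuing the illustrative model above, one simple approach is a least-squares search for the growth rate and carrying capacity that best explain the observed stock transitions:

```python
# One way a model-based player could fit unknown parameters from observed
# (stock, harvest, next_stock) transitions: least squares over a grid of
# candidate values. An illustration only, not Holden's actual estimator.

def fit(transitions):
    """Return the (r, K) pair whose one-step predictions best match the data."""
    best, best_err = None, float("inf")
    for r in [0.05 * i for i in range(1, 40)]:       # candidate growth rates
        for K in [100.0 * j for j in range(2, 30)]:  # candidate capacities
            err = 0.0
            for stock, harvest, next_stock in transitions:
                esc = max(stock - harvest, 0.0)
                predicted = esc + r * esc * (1.0 - esc / K)
                err += (predicted - next_stock) ** 2
            if err < best_err:
                best, best_err = (r, K), err
    return best
```

As seasons accumulate, the transition list grows, the fit tightens, and the player can re-derive its harvest rule from the updated estimates.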
Importantly, Holden knew what the best decisions were, and not just because he’d designed the game. He had calculated the best decisions for the salmon game himself in a paper published last year, and the best herring choices have been known since the 1970s, so he had two benchmarks for judging students’ performance: the computer’s decisions and the best possible decisions.
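To make "best possible" concrete for the illustrative logistic model above (again, an assumption about the game's form, not its actual equations): surplus production peaks when the stock is held at half its carrying capacity, so the sustainable-yield-maximizing rule harvests down to K/2 and takes rK/4 every season thereafter.

```python
# Maximum sustainable yield for the illustrative logistic model above:
# surplus production r * N * (1 - N / K) peaks at N = K / 2, yielding
# r * K / 4 per season. With the placeholder values r = 0.8, K = 1000,
# that's 200 fish per season from a stock held at 500.
r, K = 0.8, 1000.0
msy_stock = K / 2
msy = r * msy_stock * (1 - msy_stock / K)  # equals r * K / 4 == 200.0
print(msy_stock, msy)
```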
By either measure, students did not do well. In the salmon game, for example, students earned 63.6 percent of the maximum possible profits, while the computer earned 98.4 percent of the maximum.
But it gets worse. Even when Holden programmed the computer to make decisions based on an incorrect model—a simplified model that didn’t match the game’s underlying dynamics—the computer still earned 78.9 percent of the maximum possible profits, 15.3 percentage points more than students had earned.
“We pitted University students in introductory environmental science courses against the models and found that even when the models were completely wrong they still on average made better decisions than the humans did,” Holden writes, perhaps because the models force environmental managers to think more clearly about their assumptions. “Without the aid of modeling, decision maker assumptions become less transparent, and it is easier for them to make biased decisions.”