What Happens When Social Scientists Critique Their Peers’ Work?

Polite entrenchment, at least in one recent case.

In January, a team of researchers published a study that found that countries with greater gender equality also won more Olympic medals in both men’s and women’s sports. Gender equality isn’t a zero-sum game, the scientists concluded in their research, published in the Journal of Experimental Social Psychology. Maybe you remember the study; it received attention from several national news outlets, including Pacific Standard.

“As a research team, we’re not really interested in the Olympic medals, per se. We’re interested in the social consequences of gender equality,” says Feng Bai, a doctoral student in organizational behavior at the University of British Columbia, who worked on the study. Olympic medals were just one, easy-to-tally way of measuring those consequences, he tells me.

To Toon Kuppens, however, Bai and his team overstepped in their conclusions. After re-analyzing the original data and concluding that gender equality wasn’t associated with more Olympics wins, Kuppens and a colleague published a paper in June critiquing Bai’s work. Kuppens claims that Bai and his team made mistakes in their statistical analysis. (Bai says Kuppens made mistakes in his re-analysis.)*

“As a person who values equality, I would welcome any positive effects of gender equality, but sometimes, it’s there; sometimes, it’s not,” says Kuppens, a behavioral science professor at the University of Groningen in the Netherlands. “In this case, we think maybe it’s not there.”

At first glance, the debate seems a bit silly. No country is going to enact gender equality reforms on the chance that it will earn a few extra tenths of a point in figure skating. But the study is exactly the sort of evidence commonly marshaled in debates over the effects of gender equality. And the exchange between Bai’s and Kuppens’ teams illustrates the very kind of scientific critique commentators have called for in the social sciences. It shows us what can happen when that debate actually occurs.

In recent years, psychology and other social sciences have come under intense criticism from researchers as well as folks outside of academia. Journalists and whistle-blowing researchers have uncovered instances of social scientists making up data, as Jerry Adler reported in our May/June 2014 issue. Even studies performed with good intentions can seem shaky. In fact, one recent project that tried to reproduce the results of 100 noteworthy psychology studies was able to replicate only about half. Other fields of science deal with similar uncertainty, but social science has garnered a lot of attention because it chips away at questions that many find fascinating: Does this social welfare program work, or that one? How do you get people to care about climate change? Does giving more rights to women hurt men?

As a fix, experts often suggest that scientists publish their data publicly and analyze each other’s results after publication. That’s what happened with Bai and Kuppens. It’s important to note that, in this case, there’s no evidence of fraud. The exchange between the researchers has been civil. Along the way, both teams suggested improvements to the other’s analyses—a great service. In the end, however, there were no easy answers.

Both camps are certain they’re right, even as they acknowledge that other studies may carry real uncertainty. Bai and a separate team of social scientists recently ran a fascinating experiment, inviting researchers from various fields and universities to analyze a single data set on which soccer players are more likely to receive red cards from referees. Some researchers found that darker-skinned players received more red cards. Others found no difference between fairer- and darker-skinned players. “The data are ambiguous here,” Kuppens concedes. Yet when it comes to the Olympic-medal data, he doesn’t think there’s any ambiguity at all, despite his disagreement with Bai.

Although it may not always seem that way in school, science is a way of thinking more than it is a single subject, like biology or chemistry. It depends on the scientific method, which strives to give people an objective means for discovering truths about the natural world. Nevertheless, the hand of humanity weighs heavily upon it. In the case of the Olympic-medal analyses, one of the major differences is that Bai compares countries directly with one another, while Kuppens groups countries into regions such as “South and Southeast Asia” and “Northern and Western Europe.” Each team thinks the other’s method is invalid. Let’s reiterate: this single decision about how to analyze the data is what gave the two teams their differing conclusions.
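Why would grouping countries into regions change the answer at all? One possibility, sketched here with entirely synthetic numbers (not the data from either study), is that a correlation that looks strong when all countries are pooled together can vanish once countries are compared only against their regional neighbors:

```python
import numpy as np

# Illustrative made-up data: two "regions" whose countries differ in both
# a gender-equality score and medal counts. Within each region, equality
# and medals are unrelated by construction.
region_a_equality = np.array([0.70, 0.75, 0.80, 0.85, 0.90])
region_a_medals   = np.array([51,   48,   50,   52,   49])

region_b_equality = np.array([0.20, 0.25, 0.30, 0.35, 0.40])
region_b_medals   = np.array([11,    8,   10,   12,    9])

# Pooled analysis (compare all countries directly, as one sample):
equality = np.concatenate([region_a_equality, region_b_equality])
medals   = np.concatenate([region_a_medals, region_b_medals])
pooled_r = np.corrcoef(equality, medals)[0, 1]

# Grouped analysis (look within each region separately):
r_a = np.corrcoef(region_a_equality, region_a_medals)[0, 1]
r_b = np.corrcoef(region_b_equality, region_b_medals)[0, 1]

print(f"pooled r = {pooled_r:.2f}")                       # about 0.96
print(f"within-region r: A = {r_a:.2f}, B = {r_b:.2f}")   # both 0.00
```

In this toy example the pooled correlation is strongly positive while the within-region correlations are zero. Neither approach is automatically right: it depends on whether regional differences are confounds to be controlled away or part of the very effect being studied, which is exactly the sort of judgment call the two teams are arguing over.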

Doesn’t it all sound like we can never really know if a scientific conclusion is true? “I guess, for individual studies, that may be the case,” Bai says. “But if you look at the scientific literature as a whole, then I think we are more confident.” A recent investigation by FiveThirtyEight into uncertainty in science came to the same conclusion. As for Bai’s research into a win-win effect of gender equality in particular: it’s a new and largely untested idea, so it’s not surprising that different research teams hold different opinions about it. (If you’re looking for a reason beyond the moral imperative to improve gender equality, numerous studies have found that gender equality drives economic growth.)

One thing both Kuppens and Bai agree on is that a potential solution could include more projects like the red-card one. Just don’t expect them to give you clean answers every time.

Since We Last Spoke examines the latest policy and research updates to past Pacific Standard news coverage.

*UPDATE — September 29, 2015: This article has been updated to more accurately reflect Bai’s response to Kuppens’ analysis.
