Social Networks Can Boost Knowledge, Even on Polarizing Issues

But only if they are structured in a way that deemphasizes users’ partisan identities.

As we are reminded by this week’s contentious congressional hearings, online social networks such as Facebook have been credibly accused of contributing to political polarization, spreading disinformation, and dumbing down our national discourse.

But a newly published study suggests interacting on such sites can—under the right circumstances—actually increase factual understanding of a divisive issue.

Participants were more likely to accurately predict a key consequence of climate change—the continuing loss of Arctic sea ice—if they were exposed to varied viewpoints on a non-partisan online platform. However, even subtle reminders of politics or ideology were enough to negate this positive effect.

“New scientific information does not change people’s minds. They can always interpret it to match their beliefs,” lead author Damon Centola, a University of Pennsylvania sociologist, said in announcing the results. “But if you eliminate the symbols that drive people into their political camps and let them talk to each other, people have a natural instinct to learn from each other.”

The study, published in the Proceedings of the National Academy of Sciences, featured 2,400 Americans recruited online. After revealing their political ideology, all were presented with an official NASA chart that tracked the average monthly amount of Arctic sea ice over a 34-year period. While the level varied from year to year, the trend was clearly downward, although there was an upward spike in the final recorded year (2013).

After examining the chart, participants were asked to forecast the amount of ice in 2025. Researchers noted whether their estimates “corresponded to the correct (downward) trend identified by NASA.” They found 74 percent of liberals, but only 61 percent of conservatives, accurately predicted this continuing loss.
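To make the forecasting task concrete: a forecast "corresponds to the correct (downward) trend" if it extends the long-run decline rather than the final year's uptick. The sketch below uses invented ice-extent numbers, not NASA's actual series, purely to illustrate the extrapolation.

```python
# Illustrative only: synthetic sea-ice values standing in for NASA's data.
# Fits a least-squares line to yearly extent and extrapolates to 2025.
import numpy as np

years = np.arange(1980, 2014)          # a 34-year period, as in the study
rng = np.random.default_rng(0)
extent = 7.5 - 0.05 * (years - 1980) + rng.normal(0, 0.3, years.size)

slope, intercept = np.polyfit(years, extent, 1)   # least-squares linear fit
forecast_2025 = slope * 2025 + intercept

print(f"fitted slope: {slope:.3f} (negative, so continuing loss)")
print(f"extrapolated 2025 extent: {forecast_2025:.2f}")
```

Because the fitted slope is negative, the extrapolated 2025 value falls below even the final spike; that downward answer is the pattern the researchers counted as correct.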

Participants were then divided into four groups. Those in three of them saw the estimates of four peers, and were then invited to revise their initial responses. Members of the fourth “were permitted to revise their answers using only independent reflection.”

For those in the first group, the political ideology of each peer was displayed alongside that peer's answer. Those in the second group were given only the answers. That was also true for members of the third group, but they received a more subtle reminder of politics: the logos of the Democratic and Republican parties were shown at the bottom of the Web page.

The researchers report that those who interacted anonymously, with no knowledge of their peers' politics, used the exchange to clear up their misunderstandings and came to something close to consensus. Among members of that group, 86 percent of Democrats and 88 percent of Republicans correctly identified the trend.
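The article does not spell out the revision mechanics, but a common way to model this kind of anonymous social learning is DeGroot-style averaging, in which each participant repeatedly moves partway toward the mean of his or her peers' estimates. The following toy simulation, with invented numbers and a hypothetical revise() helper, is an assumption-laden sketch rather than the study's actual model:

```python
# Toy model of anonymous peer revision (an assumption, not the study's code):
# each participant repeatedly blends their estimate with the mean of four peers.
import random

def revise(estimates, neighbors, weight=0.5, rounds=3):
    """DeGroot-style updating: move partway toward the peer average each round."""
    est = list(estimates)
    for _ in range(rounds):
        peer_means = [sum(est[j] for j in nbrs) / len(nbrs) for nbrs in neighbors]
        est = [(1 - weight) * e + weight * m for e, m in zip(est, peer_means)]
    return est

random.seed(1)
n = 40
truth = -0.05                                   # the real (downward) slope
# Half the group starts with a partisan upward bias, half does not.
estimates = [truth + (0.07 if i < n // 2 else 0.0) + random.gauss(0, 0.03)
             for i in range(n)]
neighbors = [random.sample([j for j in range(n) if j != i], 4) for i in range(n)]

before = sum(e < 0 for e in estimates)
after = sum(e < 0 for e in revise(estimates, neighbors))
print(f"predicting continued loss: {before}/{n} before, {after}/{n} after revision")
```

Because the unbiased half anchors the group average below zero, repeated blending pulls the biased half across it, a rough analogue of the convergence seen in the anonymous condition.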

This impressive shift did not occur in any of the other groups, including the one that was merely exposed to party logos. It appears that even a subtle reminder of partisanship was enough to provoke adherence to ideology-based misinformation.

Of course, in the real world, social networks tend to heighten our partisan identities, as our “friends” forward material that confirms our shared beliefs. But this research suggests that, if we can set those identities aside for a few minutes, we can actually absorb data that doesn’t conform to our prejudices.

Perhaps Mark Zuckerberg and his colleagues should give serious thought to how such a non-partisan network might be created and sustained.
