It's now clear that social media played a significant role in spreading disinformation during last year's election campaign. As many post-mortems have noted, it can be difficult to discern whether a "news" item that arrives on your Facebook feed comes from a reputable source.
Readers would be wise to approach such information with skepticism, and check out questionable claims. But the structure of social media inspires the opposite attitude, effectively lowering our level of vigilance.
That's the disheartening conclusion of newly published research. In a series of experiments, a research team led by Columbia University's Youjung Jun found that people given cues that they were part of an online group—including a social media network—were less likely to fact-check questionable news headlines.
This held true even though participants had a monetary incentive to accurately distinguish real news from fake.
"Social contexts may impede fact-checking by, at least in part, lowering people's guards in an almost instinctual fashion," Jun and her colleagues write in the Proceedings of the National Academy of Sciences. Apparently we assume that if a lot of people are reading or sharing a story, it has probably been verified somewhere along the line.
The study featured eight separate experiments, all conducted online. In the first, 175 participants "logged onto a simulated news website, where they evaluated 36 statements described as headlines published by a U.S. media organization."
They were told that factually false statements were embedded among the accurate ones, and asked to mark each as either true, false, or questionable. They were awarded points for correct answers, which were converted into small monetary rewards at the end of the session.
Approximately half of the participants "saw their user name displayed by itself on each screen," the researchers note, "whereas others saw the names of 102 other 'currently logged on' participants beneath their own."
That piece of presumably irrelevant information had an impact on readers' reactions. Those who saw multiple names "flagged, or fact-checked, fewer statements" than those who believed they were reading alone.
"People did not seem to have much insight into their own behavior," the researchers add, "with those in the group condition steeply discounting how much they would be swayed by others' presence."
This pattern was repeated throughout the follow-up experiments, including one in which half of the 371 participants replicated the above experiment while the other half took part in a modified version, viewing the same headlines "designed to appear as Facebook posts."
Among those who were looking at a purported news website, the now-familiar dynamic emerged: Participants who saw the names of others engaged in less fact-checking. But so did those who received their news in the Facebook format—whether or not the names of others appeared on their screens.
In other words, the mere sense that they were on Facebook was enough to implant the idea that the information had been widely read and implicitly validated. This notion of "safety in numbers" leads us to trust more when we should be trusting less.
"Even if we cannot exercise constant vigilance," the researchers conclude, "we would do well to question our wisdom in crowds." That very much includes the motley crew you call your online friends.