Falsehoods spread faster and deeper into the Twittersphere—and humans, not bots, are to blame.

It's been a banner couple of years for fake news. With social media ascendant, the notion that a lie can travel halfway around the world while the truth is still lacing up its boots turns out to be, well, true. The intelligentsia once wrung its hands about the death of old-fashioned print media because it meant losing a Sunday morning ritual. Now, it seems as though our very democracy is at stake.

A study published last week in Science offers some insight into the virality of fake news—and suggests that, more than social media itself, we humans are to blame for the rise of this new Fifth Estate.

According to researchers at the Massachusetts Institute of Technology, fake news not only spreads farther on Twitter than true news but penetrates users' feeds "faster, deeper, and more broadly." The effect was observable across a range of topics, but was "more pronounced for false political news than for false news about terrorism, natural disasters, science, urban legends, or financial information."

In the first study of its kind and scope, the trio of researchers, led by Soroush Vosoughi, analyzed every verifiably true or false news story to hit Twitter between 2006 and 2017, some 126,000 distinct stories tweeted by three million users. (Stories were classified as "true" or "false" when six independent fact-checking organizations agreed on their veracity.)

The authors then mapped the dynamics of each story's virality, including how many "cascades"—separate threads by separate users—it caused, how frequently it was retweeted, and how quickly it reached viral milestones, such as engaging 1,000 users.
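The cascade metrics described above, a cascade's size, its depth, and its breadth at each level, can be sketched with a simple breadth-first traversal of a retweet tree. This is a hypothetical illustration of the general idea, not the authors' actual code; the edge list, user names, and metric names are all assumptions made for the example.

```python
from collections import defaultdict, deque

def cascade_metrics(retweets, root):
    """Compute size, depth, and max breadth of one retweet cascade.

    retweets: list of (parent_user, child_user) edges, meaning child_user
    retweeted parent_user's post. All identifiers here are illustrative.
    """
    children = defaultdict(list)
    for parent, child in retweets:
        children[parent].append(child)

    size, depth = 1, 0                 # the root tweet counts as one node
    breadth_at = defaultdict(int)      # number of users at each depth level
    queue = deque([(root, 0)])
    while queue:
        node, d = queue.popleft()
        breadth_at[d] += 1
        depth = max(depth, d)
        for c in children[node]:
            size += 1
            queue.append((c, d + 1))

    return {"size": size, "depth": depth,
            "max_breadth": max(breadth_at.values())}

# A toy cascade: "alice" posts, "bob" and "carol" retweet her,
# then "dave" retweets bob.
edges = [("alice", "bob"), ("alice", "carol"), ("bob", "dave")]
print(cascade_metrics(edges, "alice"))
# → {'size': 4, 'depth': 2, 'max_breadth': 2}
```

A single story can spawn many such cascades, one per independent thread; the study's "time to 1,000 users" milestone would additionally require timestamps on each retweet edge.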

When the team gauged the "diffusion dynamics" of the 126,000 stories, they found that "falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information."

False stories didn't just attract more interest but also prompted more retweets. "Whereas the truth rarely diffused to more than 1,000 people," the authors write, "the top 1% of false-news cascades routinely diffused to between 1,000 and 100,000 people."

What's more, when it comes to virality, all fake news isn't created equal. False political stories traveled deeper into the Twittersphere and reached more people than any other kind of false story. According to the researchers, "False political news also diffused deeper more quickly and reached more than 20,000 people nearly three times faster than all other types of false news reached 10,000 people."

Much has been made about the role of fake news in the outcome of the 2016 presidential election. Last fall, the titans of social media were hauled across the country to appear before Congress, where their testimony did little to appease lawmakers' righteous, camera-ready anger. Mark Zuckerberg, who initially dismissed as "crazy" the idea that Russians had used Facebook to meddle in the election, was forced to admit that 126 million Americans were exposed to incendiary posts, while Twitter announced that 36,000 automated accounts linked to Russia had churned out 1.4 million election-related tweets in the three months leading up to voting.

But according to Vosoughi and his colleagues, when it comes to the spread of fake news, social media itself isn't to blame. We humans are.

After mapping the viral trajectories of false news stories, the researchers turned their attention to the humans behind their spread. Contrary to popular belief, the team found, the users spreading false news were not mass influencers or power users. They didn't have more followers, tweet more often, or hold "verified" status more frequently. Just the opposite, in fact: Fake news was spread by users who had fewer followers, tweeted less frequently, and were less likely to be verified. "Falsehood diffused farther and faster than the truth despite these differences, not because of them," the authors write.

Just as remarkably, the researchers argue, bots are not to blame for our current fake news epidemic. When the team used state-of-the-art bot-detection algorithms to scrub bot-related tweets from their data set, the results changed little. It wasn't that bots had no effect on the virality of tweets whatsoever—it was that they amplified fake and true news equally.

"Although the inclusion of bots, as measured by the two state-of-the-art bot-detection algorithms we used in our analysis, accelerated the spread of both true and false news, it affected their spread roughly equally," the authors write. "This suggests that false news spreads farther, faster, deeper, and more broadly than the truth because humans, not robots, are more likely to spread it."

Ultimately, our fake news problem is us. Falsehoods are 70 percent more likely to be retweeted than truths, the researchers conclude, and although the reason isn't precisely clear, they offer a few ideas.

Fake news tends to be more novel than true news, and novel information attracts human attention because it "updates our understanding of the world." Novelty, then, is not just surprising but useful. "Although we cannot claim that novelty causes retweets or that novelty is the only reason why false news is retweeted more often," they write, "we do find that false news is more novel and that novel information is more likely to be retweeted."

Big-data research of this kind can help us see the basic flaws in the human operating system, but, unfortunately, it can't help us patch them. As any student of American history knows, fake news plagued our democracy long before Facebook and Twitter came along. And if it ever disappears, it won't be because we stopped telling lies—it will be because we stopped believing them.