Memes divide and replicate on the Internet in a way they never did through old-fashioned media or word of mouth. In a matter of mouse clicks, the government is planning death panels. The president is a Muslim. There are headless bodies in the desert, medical microchips under our skin and IRS agents are coming for our guns.
None of these claims is true. The question is who floated these ideas in the first place, presumably knowing full well they were false. And would it make you feel any better if you knew?
“Right now, from your cell phone, pretty much anywhere in the world, you can just push a button and send a message to potentially thousands of people,” said Filippo Menczer, an associate professor of computer science and informatics at Indiana University. “That’s amazingly easy. The cost is very, very low. But that also means that the system is more vulnerable because the cost is lower, so abuse can also happen more easily.”
Menczer and several of his Indiana colleagues are not particularly interested in politics, but they know that these days, in the final stretch of a national election, some of the worst such abuse in cyberspace is coming in the form of political smears and “Astroturf” campaigns. And so they have set out to help the public uncover the original misinformers, all in service of studying disingenuous ideas that spread a lot like disease.
Last week, Menczer and his colleagues (including Alessandro Vespignani, who studies epidemics of the informational and biological kind) unveiled the website truthy.indiana.edu.
“Swiftboaters beware!” they announce on the landing page.
The “truthy” tool is designed to allow us to track the spread of ambiguously honest revelations migrating around Twitter. The site mines the data collected — much of it through Twitter’s publicly available application programming interface — for several “truthiness factors,” including the number of Tweets about a meme and the identity of its “top broadcaster.” This information is then modeled in timelines and animated diffusion networks.
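The two "truthiness factors" named here, a meme's tweet volume and its top broadcaster, reduce to straightforward counting. A minimal sketch in Python, assuming tweets arrive as simplified dicts; the `user` and `memes` field names are invented for illustration, not Twitter's actual API schema:

```python
from collections import Counter

def meme_stats(tweets):
    """Count tweets per meme and find each meme's top broadcaster.

    Each tweet is a simplified dict, e.g. {"user": "alice",
    "memes": ["#gop"]} -- an illustrative stand-in for records
    pulled from Twitter's API, not its real schema.
    """
    volume = Counter()   # meme -> number of tweets mentioning it
    broadcasters = {}    # meme -> Counter of users tweeting it
    for tweet in tweets:
        for meme in tweet["memes"]:
            volume[meme] += 1
            broadcasters.setdefault(meme, Counter())[tweet["user"]] += 1
    top = {meme: users.most_common(1)[0][0]
           for meme, users in broadcasters.items()}
    return volume, top

tweets = [
    {"user": "alice", "memes": ["#gop"]},
    {"user": "bob",   "memes": ["#gop", "#dem"]},
    {"user": "alice", "memes": ["#gop"]},
]
volume, top = meme_stats(tweets)
print(volume["#gop"], top["#gop"])  # 3 alice
```

The same tallies, aggregated over time windows, are what feed the site's timelines and diffusion diagrams.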
The researchers have set the system to filter for political content, drawing on URLs, hash tags and mentions related to politically active groups and candidates for every office up for grabs this fall. From there, they’re looking for the suspicious: memes that suddenly surge in popularity or those drawing a significant fraction of Twitter’s traffic.
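One way to picture the surge filter: compare a meme's latest-window tweet count against its recent baseline, and against total observed traffic. The thresholds below are illustrative guesses, not the values the Truthy system actually uses:

```python
def is_suspicious(history, total_traffic, surge_factor=5.0, traffic_share=0.05):
    """Flag a meme whose latest-window tweet count surges past its
    baseline, or which draws a large share of all observed traffic.

    `history` is a list of per-window tweet counts, oldest first.
    Both thresholds are illustrative assumptions.
    """
    *baseline, latest = history
    avg = sum(baseline) / len(baseline) if baseline else 0.0
    surged = avg > 0 and latest >= surge_factor * avg
    dominant = total_traffic > 0 and latest / total_traffic >= traffic_share
    return surged or dominant

# A meme idling at a couple of tweets per window, then jumping to 40:
print(is_suspicious([2, 1, 3, 40], total_traffic=10_000))  # True
print(is_suspicious([2, 1, 3, 2], total_traffic=10_000))   # False
```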
“When we do a history backward in time, we can see early in the epidemic who is the user who has produced the most Tweets in the first phase, who is the user who has done the most re-tweets,” Menczer said.
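That backward trace amounts to sorting a meme's tweets by timestamp and tallying its earliest window. A sketch under invented assumptions: the record fields (`user`, `time`, `is_retweet`) and the default cutoff of the first 10% of tweets are illustrative, not the researchers' actual definitions:

```python
from collections import Counter

def early_phase_leaders(tweets, phase_fraction=0.1):
    """Return the most active original tweeter and the most active
    re-tweeter in a meme's earliest phase (by default, the first
    10% of its tweets in time order). Field names and the cutoff
    are illustrative assumptions.
    """
    ordered = sorted(tweets, key=lambda t: t["time"])
    cutoff = max(1, int(len(ordered) * phase_fraction))
    originals, retweets = Counter(), Counter()
    for t in ordered[:cutoff]:
        (retweets if t["is_retweet"] else originals)[t["user"]] += 1
    def leader(counts):
        return counts.most_common(1)[0][0] if counts else None
    return leader(originals), leader(retweets)

tweets = [
    {"user": "seed", "time": 1, "is_retweet": False},
    {"user": "echo", "time": 2, "is_retweet": True},
    {"user": "late", "time": 3, "is_retweet": False},
    {"user": "late", "time": 4, "is_retweet": True},
]
print(early_phase_leaders(tweets, phase_fraction=0.5))  # ('seed', 'echo')
```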
They can uncover a surprising amount of information that way.
“In fact, we find cases where there’s a lot of traffic between a small number of users, and sometimes when we look at them, we find out these are fake accounts — you can guess these are maybe computer-generated names. Those are clearly suspicious types of patterns,” he said. “Or you can take the 10 users producing the most traffic, and look at how long ago these accounts were created. If they were all created in the past six hours, then that’s a very suspicious type of behavior, right?”
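The account-age red flag Menczer describes is simple to express in code. The six-hour window comes from his example; the account records and their field name are hypothetical:

```python
from datetime import datetime, timedelta

def all_recently_created(accounts, now, window_hours=6):
    """True if every account was created within the last
    `window_hours` -- the suspicious pattern Menczer describes for
    a meme's top traffic producers. Record fields are illustrative.
    """
    cutoff = now - timedelta(hours=window_hours)
    return all(acct["created_at"] >= cutoff for acct in accounts)

now = datetime(2010, 10, 1, 12, 0)
fresh = [{"created_at": datetime(2010, 10, 1, 9, 30)},
         {"created_at": datetime(2010, 10, 1, 11, 45)}]
mixed = fresh + [{"created_at": datetime(2009, 3, 14)}]
print(all_recently_created(fresh, now))  # True
print(all_recently_created(mixed, now))  # False
```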
(Investigating the origins of the meme “truthy” itself is not so difficult — Menczer and company happily acknowledge that they’ve borrowed the term from comedian Stephen Colbert, a pioneer of the nonfact stated as truth with real conviction.)
The Truthy algorithm additionally runs each meme through a “sentiment analysis,” placing it on scales anchored by pairs of contrasting emotions: hostile-kind, anxious-calm, depressed-happy. Eventually, the researchers hope to study whether certain types of memes — say, the ones that really tick us off — spread faster or differently than others.
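A toy version of scoring a tweet along one such axis, where the word list and its scores are invented for illustration (the article does not describe the researchers' actual sentiment lexicon):

```python
# Tiny made-up lexicon scoring words on a hostile(-1)..kind(+1) axis.
HOSTILE_KIND = {"attack": -0.9, "lie": -0.7, "thanks": 0.6, "help": 0.8}

def axis_score(text, lexicon=HOSTILE_KIND):
    """Average the axis scores of the words in `text` that appear
    in the lexicon; 0.0 if none do. A crude stand-in for a real
    sentiment analyzer."""
    hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

print(round(axis_score("they attack and lie"), 2))  # -0.8 (hostile end)
print(round(axis_score("thanks for the help"), 2))  # 0.7 (kind end)
```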
Twitter users are invited to help with the effort by flagging suspicious memes themselves through the Truthy website. The idea is to create a kind of nonpartisan public service, a tool for people who want to better understand where their information comes from.
As for whether Truthy could also have another impact — shaming the misinformers until they stop misinforming — Menczer said, “Of course that’s very desirable, but it’s way too early to say if we will be able to do that.”