If you want to strike a blow at the core of a society, there may be no better way to do it than to hack social media, a place where even the most dubious information spreads like wildfire. Unfortunately, countering such attacks is a largely manual and potentially time-consuming process. Now, computer scientists have devised an automatic defense system capable of rapidly detecting attacks—a method, they argue, that could have averted high-profile assaults on the Associated Press and other news outlets.
While awareness of Internet security issues has grown over the years, so too have successful electronic attacks, as illustrated by the Syrian Electronic Army's successful efforts to take control of websites belonging to the Washington Post and the United States Army, among others. In one particularly amusing instance, SEA took over the Onion's Twitter account with nothing more than old-school phishing.
But the amusement ends there. In late 2013, SEA hijacked Skype's social media accounts and used them to denounce Microsoft, which owns the video chatting platform. Earlier that year, a compromised AP account announced President Obama had been injured in an explosion at the White House, sending Wall Street traders into a minor panic.
"A wealth of research was proposed in the last years to detect malicious activity on online social networks," write a team of computer scientists led by Manuel Egele, an assistant professor of electrical and computer engineering at Boston University, in a new paper. But most of those proposed methods for detecting nefarious online activity wouldn't have noticed the AP attack, because they focus on identifying patterns associated with sustained attack campaigns, malicious accounts, and suspicious Web links, according to the researchers. In contrast, the AP hack, and similar attacks on Fox News and Yahoo! News, involved just one tweet with no embedded links and came from genuine accounts—meaning there was no pattern of malicious behavior to detect.
The trick, then, isn't to find patterns of malicious behavior, the researchers write, but rather patterns of genuine behavior. Their system, dubbed Compa, builds a profile of authentic posts based on features including the timing and topics of posts, as well as the account's history of interactions with other users. Compa then compares each new post against that profile and flags any post that deviates too sharply from it.
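The idea can be sketched in a few lines of code. The snippet below is an illustrative toy, not the researchers' actual implementation: the feature names, the per-feature rarity threshold, and the scoring rule are all assumptions chosen to show the shape of the approach (profile genuine posts, then score a new post by how many of its features look out of character).

```python
from collections import Counter

def build_profile(history):
    """Summarize an account's genuine posts as per-feature value counts.

    Each post is a dict of feature -> value (features here are
    hypothetical stand-ins for the kinds Compa uses, such as posting
    client, link presence, and time of day)."""
    profile = {}
    for post in history:
        for feature, value in post.items():
            profile.setdefault(feature, Counter())[value] += 1
    return profile

def anomaly_score(profile, post, rarity=0.1):
    """Fraction of the post's features whose value is rare (or unseen)
    in the account's history. 0.0 = entirely in character, 1.0 = every
    feature deviates. The 10% rarity cutoff is an illustrative choice."""
    mismatches = 0
    for feature, value in post.items():
        counts = profile.get(feature, Counter())
        total = sum(counts.values()) or 1
        if counts[value] / total < rarity:
            mismatches += 1
    return mismatches / len(post)

# Genuine history: always posted via a management tool, always with a link.
history = [
    {"client": "management-tool", "has_link": True, "hour": 14},
    {"client": "management-tool", "has_link": True, "hour": 15},
    {"client": "management-tool", "has_link": True, "hour": 9},
]
profile = build_profile(history)

# A hijacked-style post: different client, no link, odd hour.
hijacked = {"client": "web", "has_link": False, "hour": 3}
print(anomaly_score(profile, hijacked))  # 1.0 — every feature is out of character
```

A real system would weight features by how consistent the account's history is on each one—which is exactly why, as described below, accounts with highly variable posting habits are hard to protect this way.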
Egele and his colleagues tested Compa using the tweets leading up to and including attacks and found it had little trouble identifying malicious use—at least when genuine posts followed a consistent pattern. The faked AP tweet, for example, stood out because it was posted directly through Twitter and didn't include any links, while genuine AP tweets always came through the social-media management service SocialFlow and always included links to news stories. On the other hand, the Guardian's genuine tweets are sufficiently variable that Compa raised the alarm on nearly half of them—meaning there's still a ways to go before the approach is reliable enough to keep social media safe.
Quick Studies is an award-winning series that sheds light on new research and discoveries that change the way we look at the world.