How Social Media Helped Organize and Radicalize America's White Supremacists

The white nationalist organizers in Charlottesville used the same social media tools as everyone else. One professor argues that means we need to rethink how we approach the First Amendment.
White nationalists, neo-Nazis, and members of the alt-right exchange insults with counter-protesters during the Unite the Right rally on August 12th, 2017, in Charlottesville, Virginia.

White supremacy groups have a long history in the United States, yet some of the news around the groups' most recent activity has featured a decidedly Millennial flavor. The leaders of this weekend's demonstration in Charlottesville, Virginia—during which a man drove a car toward counter-protesters, setting off a chain reaction that injured at least 19 people and killed one—had organized using a Facebook event.

Meanwhile, the Daily Stormer, a neo-Nazi news site, used memes to drum up enthusiasm for the demonstration. (The Daily Stormer's domain name was originally provided by GoDaddy, before moving to Google; both companies eventually revoked the website's registration.) And, just as the violence was unfolding in Charlottesville, the Guardian published a story about how young, white men are becoming radicalized through YouTube.

In other words, the tools of the Internet Age have helped white supremacists and other bigots share ideas and organize. That's the dark side of the rise of the Internet and social media, as University of California–Irvine political scientist Richard Hasen argued in a paper he posted publicly last week. The paper, which has not yet been peer-reviewed, contends that free communication over the Internet has undermined democracy in the U.S., in part by empowering extremist groups. Pacific Standard talked with Hasen about how violent racists—like the Ku Klux Klan and neo-Nazis—use online tools, and how we might curb those extremist groups without hindering free speech.


How does the Internet, specifically social media, assist hate groups?

There certainly were hate groups before the Internet and social media. [But with social media] it just becomes easier to organize, to spread the word, for people to know where to go. It could be to raise money, or it could be to engage in attacks on social media. Some of the activity is virtual. Some of it is in a physical place. Social media has lowered the collective-action barriers that individuals who might want to be in a hate group would face. You can see that there are people out there like you. That's the dark side of social media.

On the other hand, social media has also lowered the cost for people to organize for civil protests. Social media has been really important as a check on repressive governments, which in the past have been able to shut off communication with the outside world. Now, with satellite phones and other technology, even in the most oppressive countries, it is often possible for news to get through.

What did groups like the Ku Klux Klan and neo-Nazis do in the 1990s, before Internet access was widespread in America? Were there large demonstrations then?

There have been Nazi marches. There was a famous Nazi march planned in Skokie, Illinois, in the late 1970s. There have been newsletters. There have been radio programs. There have been other ways of communicating. What's changed is that the cost of communicating has fallen to nearly zero for anyone with an Internet connection.

How else do you think easy communication has undermined democracy in America?

The Internet and social media have killed the old business model for newspapers. Without news sources that people trust, a number of social problems can arise, one of them being the difficulty people have discerning the truth from so-called fake news, from propaganda that is distributed either for political reasons or for profit.

Richard Hasen.

The decline of local news has raised the risk of government corruption. Local newspapers have done the best job in tracking city-hall kinds of scandals: pay-to-play problems, contractors bribing elected officials. We know that, as newspapers disappear, or as journalists are further away, the amount of corruption goes up.

Does the fake news phenomenon intersect with hate groups' ability to organize over the Internet?

I would say the main intersection is that some people's world views may be warped by propaganda. False information can help people form incorrect beliefs about the state of the world, and that can reinforce whatever prejudices they might have. I don't think they're the same phenomenon at all, but they could build on each other.

What do you think America should do about fake news and hate groups organizing over the Internet? How can the country stem these problems without curtailing free speech?

I think that we might need some government regulation of Facebook, Google, and Twitter to make sure they're doing more to separate out false advertising for readers. This is where the First Amendment comes in. It's very dangerous when you have the government deciding directly what is true and what is not true. We wouldn't want there to be a government law that bans whatever the government deems to be false.

That's why most of the solutions I propose don't rely upon government action, but instead rely on private action, such as subsidies for local news and consumer pressure on entities like Facebook and Google to self-police the fake news problem. Facebook and Google have been taking some steps in that direction since the 2016 election, but I think much more is going to need to be done, and there's going to be some pushback from people who don't want any kind of channeling of the firehose of speech, even in this private area.

I'm concerned that some of the First Amendment theories that have been accepted by a conservative libertarian majority at the Supreme Court might stand in the way of some laws that would help deal with the fake-news problem. For example, we're starting to see some people raise arguments that there might be some First Amendment right of individuals to receive foreign information and propaganda that would be, say, sent by the Russian government to interfere with our election.

It hasn't happened yet, but we are seeing some signs that the same judges are expressing concern about laws that tell us who is spending money on elections. There could come a time, five or ten years from now, when a Supreme Court majority says that the First Amendment protects the right to anonymous speech in the campaign finance context, where somebody might be spending millions of dollars to try to influence how people vote in elections, and we wouldn't know who is behind that. That would make the fake news problem and related problems much worse. Part of the way that voters are able to decide what's true and what's not is knowing who the speaker is and whether the speaker is credible.

You actually posted this paper just one day before the violence in Charlottesville. What did you think once you saw the news out of Virginia?

It's just another example of the dark side of what cheap speech has done to the United States. When you see dark moments like this, you realize that the lack of trusted media institutions and the lowering of collective-action costs for hate groups are real problems for our democracy.

This interview has been edited for length and clarity.