Understanding Facebook’s Failure to Deal With Hate Speech

A conversation with Siva Vaidhyanathan about why social media platforms aren’t doing a better job at removing bigotry and misinformation.
[Photo: Facebook's Mark Zuckerberg departing from the Tech for Good Summit at the Élysée Palace in Paris on May 23rd, 2018.]

Last week, Alex Jones of Infowars went on Facebook and YouTube, claimed that former Federal Bureau of Investigation director and current special counsel Robert Mueller was a pedophile, and fantasized about shooting Mueller at "high noon." Unsurprisingly, neither platform initially took any steps to remove the video. As we've covered at Pacific Standard, our major social media platforms are still failing to address hateful and violent speech. Facebook did eventually give Jones a 30-day ban, but his media company, Infowars, is still streaming as though nothing has changed. Facebook chief Mark Zuckerberg also recently gave an interview in which he said he wouldn't consider banning pages that promote Holocaust denial because, while Facebook does restrict users who make direct calls for violence, it does not restrict users who merely spread falsehoods.

On Twitter, I recently encountered two accounts spreading dangerous misinformation: a white supremacist account spreading Holocaust denial and various white supremacist myths about North America, and an anti-vaccination account promoting bleach enemas as a cure for autism. Twitter refused to take any action against either.

How do we make sense of corporate inaction in the face of the mainstreaming of online hate?

I spoke over the phone with Siva Vaidhyanathan, a professor of media studies at the University of Virginia. Vaidhyanathan is the author of the new book Antisocial Media: How Facebook Disconnects Us and Undermines Democracy, in which he looks at how these platforms are shaping society—often for the worse. He’s read every post and statement that Zuckerberg has ever made (compiled at the Zuckerberg Files) and has been writing and reporting about the societal consequences of the Internet for decades. Vaidhyanathan spoke to Pacific Standard about the current challenges we’re facing. He’s not optimistic.

What can we do about misinformation and lies and hate speech on our social media platforms?

There is nothing "we" can do, as in you and me, and nothing that Twitter or Facebook would do to fix the situation, for two reasons. One, anything that would effectively cleanse Facebook and Twitter of harmful material would put a heavy burden on what are fairly small staffs [relative to the companies' size]. Two, it would violate their core founding principles, and if they violate their core founding principles they are no longer themselves.

What are those principles as you see them?

One, to provide everyone a voice, and two, to remain neutral between contested positions.

This problem [of hate speech online]—and that’s too light a word—reveals that social media was a bad idea in the first place, and there really is no fixing it! If we want social media as it’s been imagined and constructed, we have to live with this garbage. And we have to accept that Facebook and Twitter will be vulnerable to sabotage and pollution as long as we have it.

So what about the reforms that Facebook keeps touting?

What we’ve been seeing in the last year or two are these feeble attempts to create and enforce standards and generate public confidence in [the company’s] ability to filter. But neither the humans who work for these companies nor the artificial intelligence systems they build can anticipate the varieties of human craziness and cruelty.

Let's say you build an algorithm that reliably predicts where white supremacist content comes from. You can't build moral and ethical judgment into that algorithm or ask it to critically weigh whether something is merely goofy or potentially harmful. And we all know that! Anyone who spends any time thinking about manners, let alone community safety, knows that these are hard choices, so we make them at the smallest possible scale. We have become adept at policing appropriate speech within churches, classrooms, and workplaces. But imagine doing that for 2.2 billion Facebook users or 300 million Twitter users in more than 100 languages. It's absurd to expect these companies to do anything but a set of cosmetic reactions.

Is there value in cosmetic reactions?

Good question—value to Facebook? Or to humanity?

Facebook doesn't see a distinction. Facebook's leaders believe that what's good for Facebook is good for humanity. The original sin of Facebook is the hubris that Facebook speaks for, and should be in the business of engineering, our social experiences.

So what is Facebook actually doing about hate, about groups like Infowars, on its platform?

Facebook has a social engineering solution [to the problem], and people at Facebook are baffled by the fact that the rest of us don't see it as a solution. Facebook has said that when posts are troublesome, it will decrease how frequently those posts appear in people's feeds. It will make sure those posts don't travel as far. That doesn't make them invisible—they can still go to the Infowars page—but you are less likely to come across [them] just by looking at your newsfeed. There's no accountability in that system: Nobody knows what everyone else is seeing, and we have to take Facebook's word for it.

So if Facebook can’t fix it, what are we to do?

The only effective protest is to shame its advertisers and limit its distribution. Shaming advertisers has been really effective [with media companies].

I don’t understand why social media platforms can’t just ban Holocaust denial, for example. I keep hearing that, in Europe, especially in Germany where it’s illegal, the filters do just fine. Why can’t we have them here or everywhere?

There is no reason why Zuckerberg couldn't decide tomorrow that Holocaust denial is so out of bounds, and so against his social engineering project—which is to make our species nicer to each other—that he would say: This one thing shall not stand!

So why doesn’t he?

His view of Facebook's ideal role in the United States, and thus the world, is to reflect the free-speech calculus that Americans go through, which is very much the John Stuart Mill and Oliver Wendell Holmes balance: It's cool unless it does direct harm to someone. Holocaust denial doesn't sound like "kill all the Jews"; it's "what do you mean, we didn't kill all the Jews." So while it justifies anti-Semitism in the worst ways, it also denies the very violence of anti-Semitism.

So it's easy for Zuckerberg to say a page called "kill all the Jews" shouldn't exist on Facebook: It's "fire in a crowded theater." But he doesn't want to think about the process of harm that Holocaust denial creates. [Instead of looking] at the deep and troubling collection of ideas that fuel hatred in this world, Facebook's move was to reduce it to a true/false binary, then dismiss the problem. Zuckerberg drew an equivalency between something he might say erroneously in a speech and an elaborate discourse on how Treblinka was staged.

Really a stunning thought if you actually think about it! He would rather not think about it.

You are pretty harsh on Zuckerberg in particular.

Look: Zuckerberg has certain basic values, and he has been pretty clear about them. He is a cosmopolitan, libertarian-leaning liberal. He abhors discrimination against LGBT people. He abhors limitations on immigration, especially racial and ethnic limitations. He has voiced concern for the status of women in his industry. He has clear values about good versus bad. Built into his company is the vision that we would all understand each other and treat each other better if we communicated more.

Given all that, I don't understand how anyone at Facebook can sleep at night when they actively helped Donald Trump become president [after] he said he would promote discrimination against Muslims, said he'd build a wall, said Mexicans are rapists, and has allegedly sexually assaulted so many women. Why embed staff in his campaign?

Is the answer money?

[Facebook] didn't make that much money [from the campaign], but they want to be a political player. But they don't have to help Trump. Rodrigo Duterte said he'd allow people to gun down suspected drug dealers on the streets of the Philippines, and Facebook staff said: What can we do for you? Once again, they say they helped both major parties, but so what?! Help the one that doesn't want to slaughter people in the streets, the one that's less likely to put babies in cages. How can you improve the world when you actively help Trump or Duterte? It makes no sense.

What are the distinctions among the major platforms—Twitter, Google, Facebook?

Scale. Globally, Facebook has 2.2 billion users and YouTube has 1.6 billion. YouTube is the second-biggest propaganda platform in the world after Facebook, and it may be more effective as a propaganda platform because it's a broadcast model, not a narrowcast model. Twitter [has] only 300 million users around the world, most of them concentrated in North America and Europe. It's relatively harmless in a global sense, and if it would get rid of the president of the United States it would be a lot less harmful.

This conversation has been edited and condensed for clarity.
