Millions of Mosque Shooting Videos Were Uploaded to Facebook. Who’s to Blame?

Facebook and YouTube rushed to remove violent videos. An expert discusses why we need a “reckoning” for online content moderation.
A Facebook Live video of New Zealand Prime Minister Jacinda Ardern speaking at a press conference on March 20th, 2019, in Christchurch, New Zealand.

At least 50 people were killed in an anti-immigrant and anti-Muslim shooting at a mosque in Christchurch, New Zealand, on Friday. While the shooter livestreamed 17 minutes of the attack on Facebook, online supporters made copies of the video and proceeded to disseminate them on social media platforms, including Facebook, YouTube, and Reddit, according to Wired.

After failing to detect the livestream until it had already ended, Facebook scrambled to stem the video's proliferation, later reporting that it had "removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload." Similarly, a YouTube spokesperson told CNBC that it had removed "tens of thousands of videos" of the attacks and had disabled "human review" in its content moderation procedures in an effort to streamline the deletions. (Usually, content deemed to be in violation of YouTube's rules is removed from the platform by human workers. This process was bypassed in the wake of the shooting.)

In order to make sense of these numbers, the companies’ responses to the shooting, and the role social media platforms may have played in creating the shooter’s ideology, Pacific Standard spoke to Sarah Roberts, a commercial content moderation scholar at the University of California–Los Angeles.

Platforms like Facebook and YouTube received a lot of attention for the number of videos of the livestream they reported removing and/or blocking at upload. One-and-a-half million videos removed sounds like a lot in the abstract, but what is the context for those numbers?

Big numbers, right? But the kinds of stats or metrics that get touted don't really account for the full gamut of things going on when it comes to disturbing material. It may be that [a video] is removed 10,000 times, and then the 10,001st time it gets uploaded it doesn't get removed; maybe that's when someone is negatively impacted irrevocably.

I don’t mean to suggest that the work to take material down “at scale,” as they would say, isn’t important. It is important. But it’s not an adequate rendering of what’s really going on. Millions of videos taken down are against a backdrop of millions upon millions of other videos that aren’t taken down—many of which will never be reviewed simply because of the impossibility of doing so at this scale. And that problem is not just specific to the New Zealand incident.

So, are those numbers of removed videos meaningful to you?

I often feel like I’m talking in another language when I’m talking to people who work inside the tech industry and operate strictly in these paradigms of data aggregation and numbers, without considering the nuances of what we’re talking about. That isn’t to say that they don’t think about these issues, but they’re trying to quantitatively measure things and sometimes that’s not the right measuring stick.

They're focused on finding, isolating, and removing millions of pieces of individual content. What if we took a step back and asked, "OK, what is it about the platforms that seems to invite this kind of material to be uploaded?" But that's not the question that's usually being asked.

YouTube says that it disabled human review of flagged content in the wake of the shooting, allowing video uploads of the violence to be immediately rejected. Does that indicate that the platforms could easily have a better handle on dangerous content?

What I'm guessing that meant is: when there is a piece of known bad content, such as the video in this case, the platform has decided there's no place for it. They want all instances of it gone. That is actually a fairly easy problem to solve computationally. Using A.I. technology, "known bad" images and videos can be identified numerically, in a process called hashing. So the video can be hashed and placed in a "known bad" bank, and anything uploaded or already on the platform can be checked against that. That's a very effective way to deal with this particular piece of content.
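The hash-and-match workflow Roberts describes can be sketched in a few lines of Python. The sketch below is illustrative only: production systems rely on perceptual hashes, such as Microsoft's PhotoDNA, that tolerate re-encoding, resizing, and cropping, whereas the exact SHA-256 digest used here only catches byte-for-byte identical copies. The function and variable names are hypothetical, not any platform's actual API.

```python
import hashlib

# A "known bad" bank: digests of files that moderators have already
# confirmed as violating. Real systems store perceptual hashes instead.
known_bad_bank = set()

def fingerprint(file_bytes: bytes) -> str:
    """Return a digest standing in for a perceptual hash of the file."""
    return hashlib.sha256(file_bytes).hexdigest()

def register_known_bad(file_bytes: bytes) -> None:
    """Add a confirmed-violating file to the bank."""
    known_bad_bank.add(fingerprint(file_bytes))

def should_block(upload_bytes: bytes) -> bool:
    """Check an incoming upload against the bank before it is published."""
    return fingerprint(upload_bytes) in known_bad_bank
```

Once a video has been banked this way, any later copy that matches can be rejected automatically at upload time, without a human reviewer seeing it again, which is roughly what the "blocked at upload" figure in Facebook's statement describes.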

But the problem is that the content exists at all. I think we’d all agree that we’d like to have fewer videos of that type ever in the world. Which means that anytime anything new happens, somebody is going to have to see it and make a decision about it at some point.

Now in this case, it’s correct that there is really no reason for humans to be on the case anymore. A determination has been made about this video. And so, in this particular case, it’s a done deal. I do think that’s a sound decision, and it takes humans out of the loop, but it’s just one instance in a sea of things. And so while I’m glad to hear that YouTube won’t have people looking at this video over and over again, it’s sort of a weird thing to tout.

You have called removing violent footage from YouTube “closing the barn door after the horses are out.” Is all this discussion around the shooting video focusing on the wrong thing when shootings are rare, but radicalization from content on these platforms is much more common?

That's right. I really hold YouTube to account in a different way, too, because it creates an incentive for incendiary, controversial "viral" material through its [pay-per-view] monetization program. They're cracking down on that, but that is relatively recent. And there are other ways that people benefit from popularity on YouTube even if they're demonetized. So I do think these conversations are always a day late and a dollar short.

They're always focused in the weeds early on after a particular horrific event or instance, which of course we need to discuss. But it's always the wrong conversation. It's a relatively new phenomenon in the last few years that companies are even having conversations openly about their moderation practices. They certainly haven't done a deep public soul-searching about, "OK, was this individual who perpetrated the crime in New Zealand fed on a diet of material that we served on our platform?" That's the kind of reckoning that has to take place. I'm sure there are deep conversations happening internally. But we're not privy to that.

Why do you think YouTube has not had to have that conversation yet, the way Facebook at least had to answer a lot of tough questions post-election about Cambridge Analytica? You could argue that it didn't actually lead to real, significant material changes, but why isn't that happening with content moderation after these sorts of radicalized shootings?

There are different public relations approaches at different companies. YouTube and Google are notoriously tight-lipped about those sorts of things. With Facebook, whether or not we think there were material changes, it did seem to matter that the platform was talking openly about changing. But Facebook is always on the line because Facebook is top dog. So it gets called to account in a different kind of way.

We’re not that far out from the Christchurch event, and it may be that those conversations will be had. The companies themselves don’t “come to Jesus,” as they say in the South. They are put under pressure. Pressure comes from reporting. Pressure comes from questions from legislators. Pressure comes from advocacy groups. Pressure comes from academics. It’s incumbent upon us to hold their feet to the fire on this, and we can’t let this incident fade away like so many others. There’s too much on the line. So we, as the public, actually have to hold them to account as well and demand that others in power hold them to account appropriately on this.

A lot of our discussion has been predicated on the assumption that violent content should not exist on platforms like Facebook and YouTube. But journalism has a long history of treating the documentation of tragedy as newsworthy, important, and worth being shown to people. Is there a fundamental difference here?

There is a long history of [violent and traumatic] documentary evidence being treated as newsworthy, and having other kinds of value too, in prosecution, for example. But, in journalism, that material is typically contextualized, and it may not be included whole. It may be shown in part, or in its entirety coupled with additional information. It has also likely been greatly debated before being published. For example, the 1972 photo of the young girl burned in a napalm strike won a Pulitzer Prize, but almost didn't go to print. Those dimensions are not present in the same way on social media.

The second issue is whether YouTube or other commercial platforms, whose responsibilities and motivations are not clear, should be the de facto repository of this important information. Why is that the go-to place? Is that a good thing? Should there be other options?

I find these discussions about content moderation very disheartening, because it seems like, even in the best-case scenario, where these platforms are way more attentive to these problems, it’s just going to be such a cat-and-mouse game short of shutting down social media.

I agree. And it’s just kind of a lose-lose for the people doing the frontline work [on content moderation]. It’s a thankless job.

This interview has been edited for length and clarity.