Does the Media Need to Report on Extremist Manifestos?

An extremism scholar discusses the ways in which journalism has been hijacked by bad faith actors in the wake of the Poway and Christchurch shootings.
A makeshift memorial sits across the street from the Chabad of Poway Synagogue on Sunday, April 28th, 2019, one day after a gunman opened fire on worshippers.

More than a month before the April 27th shooting at Chabad of Poway Synagogue that left one member of the California congregation dead and three others injured, Whitney Phillips took to Twitter to caution users against sharing the words of a different attacker on a different continent.

As the Poway shooter would later do, the perpetrator of the consecutive terrorist attacks on a pair of mosques in Christchurch, New Zealand, that left 50 people dead and at least 50 others injured in March had left behind a white-nationalist manifesto, which Phillips warned had been designed to deceive and manipulate.

“[I]t’s a trap, it’s all a trap, do not give them this, after they have already taken away so much,” she tweeted. “Shame on anyone who spreads any of this because of clicks.”

Phillips, an assistant professor of communications, culture, and digital technologies at Syracuse University who specializes in identity-based harassment online, has spent a lot of time thinking about how the media is susceptible to manipulation. In her 2018 report The Oxygen of Amplification: Better Practices for Reporting on Extremists, Antagonists, and Manipulators Online, Phillips conducted interviews with dozens of journalists in order to paint a comprehensive picture of the ways in which journalism can be hijacked by bad-faith actors—and to come up with a set of “best practices” for those who wish to report on extremist activity without becoming propaganda organs.

Pacific Standard spoke to Phillips about extremist manifestos, their proliferation, and the online forums that give rise to them.

Is a shooter’s manifesto inherently newsworthy?

I’ve been thinking about this a lot, and I think that we’re long overdue for a revisiting of assumptions of newsworthiness. There’s this basic assumption, when you have these kinds of events, that what you need to do is explain what 8chan is and explain some of the references in the manifesto. It’s assumed that this information is going to help people understand that it’s part of the story.

Where are you drawing the boundaries around context? I mean, why is it that we need to do a bunch of explainers about what 8chan is in order to understand this person’s white-supremacist violence? I don’t know that the average American needs to know about 8chan in order to understand the violence and danger of white-supremacist ideology. It’s not the center of the narrative.

I think there are many, many ways to talk about these issues that inform the public and call attention to the public-health crises at the heart of these incidents without feeding into the interest of the violent attackers. So much of that has to do with: Where are you pointing your camera? What kinds of conversations are you framing?

In the past month, we’ve seen two major mass shootings that both involved manifestos, and there have been many other recent, similarly styled attacks. Do you think it’s fair to say that there’s been an observable rise in white nationalism globally, particularly online, in just the last few years?

The Internet has been a hotbed for white nationalism and supremacy for a really long time, and it’s hard to tell exactly how the numbers have shifted in a quantifiable way. It’s certainly the case that the topic of white supremacy and white nationalism and the overall ideologies of those positions are front and center in American politics, even when you’re not talking about online extremism specifically.

There’s a great deal of overlap between, for example, the president’s policies on immigration and his rhetoric around immigration, and much of the white nationalist and supremacist discourse that takes place in these more fringe communities. So it certainly is the case that these kinds of dehumanizing messages have become more culturally prominent. What gets tricky is that it’s not that online hate has really exploded, it’s that, in the public sphere, hate has become more normalized.

When it comes to the phenomenon of the extremist manifesto specifically, is the rise of these online platforms like 4chan and 8chan giving them more of a home, or is there no real correlation?

It’s a chicken-and-egg kind of question; it’s really difficult to tell. No online space creates behavior, because those online spaces are populated by humans who are performing for their chosen audiences. It’s not that the platforms themselves aren’t part of the conversation—they are—but it’s really critical to remember that these are people making choices about how to live their lives, where to go, who to talk to, what to do, and how to act on their violent ideations.

I certainly don’t think that 8chan causes shootings. That would be way too reductive. But what spaces like 8chan and 4chan can do is provide a safe space for like-minded individuals to radicalize each other. And so again, you can’t separate the behavior and ideology from where it takes place, but it’s a people problem, first and foremost, and then technologies reinforce some of those problems.

What is the media’s role in sharing—or not sharing—a shooter’s manifesto?

Where the conversation becomes really complicated and distressing from a media perspective is that these behaviors are copycat behaviors: You have groups of individuals that are performing their identities for their audiences, and they’re constantly trying to live up to the expectations of their audiences. And that’s true if you’re talking about a group of dog lovers on Facebook too: You take your audience into account, and you try to live up to their expectations for you—and that can be really positive or just neutral depending on what the group is. But if the group norms are to be violent bigots, then the one-upmanship of trying to live up to them, of trying to ensure that your face fits the mask that you’re wearing, becomes very fraught when you’re talking about these individuals.

With these two manifestos (Christchurch and Poway), and the fact that you have these connections to 8chan, what the pattern tells us is that, because people in these spaces perform for each other, there’s going to be a kind of one-upmanship: How outrageous can the manifesto be? How many people can the manifesto reach? It’s going to now become, if you could even imagine, uglier. And part of the reason that it will be uglier is because of the news coverage.

So how is it possible for the media to mitigate some of the harms, or to cover events like these in a way that won’t inspire copycats? Is copycat behavior inevitable?

There’s certainly an inevitability when the shooter is the center of the narrative, the antihero of the story. Coverage does incentivize future attacks when the perpetrator gets to be the center of the story, and everyone else just gets to be a bit player in their orbit.

Under those conditions, someone else is incentivized to do something similar, and the way they talk about it is that they “gamify” these behaviors and quantify body counts, and then the goal is to beat that, to do better next time. So as long as the coverage plays into and allows for the gamification, then that will continue.

Most of these stories sort of center the “why” question. Why would this person do this thing? And they sort of begin and end with the shooter themselves. But I think more importantly, the public-health takeaway is, how is it that these kinds of behaviors can thrive so easily in online spaces?

So if you were to write a story that talks about moderation failures on social media, and the different ways that hate has been monetized on these platforms, and the various incentives of these platforms to not deal with the issue of hate and harassment, and how all of those forces create a circumstance in which violence emerges, then you can talk about why this violence comes to be without having to focus on the shooter. The public needs to know what these social media platforms are doing to them and to their democracy—that is a critical public-health takeaway.

In New Zealand, where freedom of speech laws are somewhat less robust than what the First Amendment grants us here in the United States, possession of the Christchurch shooter’s manifesto has been prohibited, and at least two people have already been charged with sharing video of his attack on social media. Does that go too far? Is there inherent good in some form of suppression when it comes to dangerous extremist materials?

It’s a good thought exercise of a question. However, the idea that those restrictions would ever be imposed in the U.S. is sort of a non-starter to begin with, simply because of people’s investment in the idea of freedom of speech.

I think that certain materials are weapons—the way that they are used and the way that certain information is easily weaponizable—and you have to do something about weapons.

So what do you do when people’s attachment to free speech sort of overwhelms the public-health concern of having certain kinds of information easily accessible? One of the things New Zealand has readily acknowledged, even though they’ve been taking a lot of interesting, compelling steps, is that people outside of New Zealand can still share [the video and manifesto] and people can still access them. It’s not like they have a lockdown on the Internet. So while they can make certain choices about what is permissible and lawful within their borders, it immediately raises the question of global enforcement and how you scale that.

What kind of culpability does 8chan share? How much of the onus is on the media when it comes to keeping a lid on these extremist materials, and how much of it is on platforms that are being weaponized to spread hate?

Well, when you’re talking about 8chan, it was designed to be a weapon; it was only ever going to be a space where this sort of thing happened. They’re doing what they were meant to do.

Social media absolutely has culpability, and the platforms themselves acknowledge it in the very thoughtful words they say whenever they give press conferences admitting that they’re falling short of their own ideals. “Doing something” usually means promising that they’re going to tweak an algorithm that nobody can see to begin with, and then saying a lot of nice words about how they have the responsibility to do these things.

An enormous element of that response is that we always just have to take them at their word—we don’t know what they’re actually tweaking, what’s actually under the hood.

The thing is, they work for themselves, they don’t work for us. They’re private companies that have a vested interest in their vested interests. So this kind of hate and disinformation, whether or not they’d be willing to admit it publicly, does benefit them. And that doesn’t mean that I think that they personally like it, or that they don’t care, or that there aren’t wonderful people who work at those companies, but they’re still companies. It’s nice to hear nice words, but I think we all, at this point, should be pretty sick of hearing nice words, because they don’t mean anything—nice words just call attention to how far they’re falling short of the responsibilities that they claim themselves.

This interview has been edited for length and clarity.