Twitter Will Remove Dehumanizing Content Against Religious Groups

The expansion of the social media company's hateful conduct policy doesn't go as far as initially promised.
(Photo: A sign is posted on the exterior of Twitter headquarters on July 26th, 2018, in San Francisco, California.)

Twitter says it is taking yet another step in its attempts to control the dissemination of hate speech on its platform. On Tuesday, the company announced an update to its policies around content considered dehumanizing, promising to remove posts targeting religious groups.

“Our primary focus is on addressing the risks of offline harm, and research shows that dehumanizing language increases that risk,” Twitter’s safety team said in a blog post. Examples of content that will now be removed include tweets comparing religious groups to rats, maggots, or viruses.

The company is publicizing its latest move as an “expansion” of its existing hateful conduct policy. Last year, Twitter disclosed its intention to tackle dehumanizing content targeting people based on their membership in an “identifiable group,” taking into account characteristics such as race, gender, sexual orientation, and disability.

But after receiving feedback from users, some of whom expressed concern that the measure would hamper their ability to engage with political groups or call out hate groups, Twitter narrowed its policy to focus only on religious groups, at least for now.

In an interview with the New York Times, Jerrel Peterson, Twitter’s head of safety policy, said: “While we have started with religion, our intention has always been and continues to be an expansion to all protected categories.”

Social media companies have long struggled with content moderation. Last fall, in a conversation with David M. Perry for Pacific Standard, Siva Vaidhyanathan, a professor of media studies at the University of Virginia, explained why they keep failing:

What we’ve been seeing in the last year or two are these feeble attempts to create and enforce standards and generate public confidence in [the company’s] ability to filter. But neither the humans who work for these companies nor the artificial intelligence systems they build can anticipate the varieties of human craziness and cruelty.

Let’s say you build an algorithm that steadily predicts where white supremacist content comes from. You can’t build moral and ethical judgment into that algorithm or ask it to critically weigh when something is merely goofy or potentially harmful. And we all know that! Anyone who spends any time thinking about manners, let alone community safety, knows that these are hard choices, so we make them at the smallest possible scale. We have become adept at policing appropriate speech within churches, classrooms, and work spaces. But imagine doing that for 2.2 billion Facebook users or 300 million Twitter users in more than 100 languages. It’s absurd to expect these companies to do anything but a set of cosmetic reactions.

Recently, Twitter, which President Donald Trump has accused of bias against conservatives, also announced that it will label posts from political figures that violate its policies. Twitter won't remove such posts from the platform, however, deeming them a matter of public interest.

On Thursday, the White House will host a social media summit, but neither Twitter nor Facebook is reported to have been invited.
