The Trump Admin Won’t Join an Initiative to Curb Social Media Extremism, Citing Free Speech

Experts say the initiative does not “have much teeth to it” but that it does not run contrary to the First Amendment.
Crowds gather for a vigil in memory of the victims of the Christchurch mosque terror attacks on March 16th, 2019, in Auckland, New Zealand.

The United States is citing freedom of speech concerns as its justification for declining to join an international initiative to rein in extremist content on social media, but experts warn that the decision could just be “political one-upmanship.”

Spearheaded by New Zealand and France in the wake of a March terror attack on Muslims in Christchurch, New Zealand, the “Christchurch Call to Action” details nine concrete steps that large technology companies can take to halt the spread of terrorist content within their platforms, including reviewing their algorithmic operations and providing greater transparency in setting community standards and terms of service.

French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern have thus far successfully lobbied 26 other nations to adopt the agreement, which encourages governments to work with online service providers in order to “address the issue of terrorist and violent extremist content online and to prevent the abuse of the Internet.”

The White House has hedged on signing the agreement outright, instead offering its symbolic support of the document.

“While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the call,” the Trump administration said in a statement last Wednesday. “We will continue to engage governments, industry and civil society to counter terrorist content on the Internet.”

“We encourage technology companies to enforce their terms of service and community standards that forbid the use of their platforms for terrorist purposes,” the statement continues. “We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press.”

Internet platforms played a key role in the March 15th terrorist attack that left 51 dead and dozens more wounded: One of the attackers used Facebook’s live-streaming feature to broadcast a 17-minute video of the attack and then later uploaded a 74-page anti-Muslim manifesto to Twitter and the discussion forum 8chan. Since the agreement debuted, five tech companies—Facebook, Microsoft, Twitter, Google, and Amazon—have also signed on, pledging to take steps to address the spread of terrorist content on their platforms.

But while the tech platforms themselves have seemingly signaled willingness to evolve in order to clamp down on extremism, Danny O’Brien, international director at the non-profit Electronic Frontier Foundation, says that that eagerness says more about what isn’t in the agreement than what ended up in the final draft.

“The Christchurch Call doesn’t have much teeth to it, apart from some sort of hand-wavey language about governments having the responsibility to sort of help control this content,” O’Brien says. “Most of it is about voluntary agreements, more research, and approaches that don’t directly filter content, which is one of the reasons the companies are so happy to sign onto it.”

Recent years have seen countries increasingly flexing their muscles in an effort to get big tech companies to hew to the public interest. In 2016, the European Union announced a new framework known as the General Data Protection Regulation with the intention of protecting Internet users’ personal data. France has been particularly bullish: In January, the country’s data privacy watchdog, the National Data Protection Commission, imposed a €50 million fine against Google for what the authority described as a “lack of transparency, inadequate information and lack of valid consent.”

According to O’Brien, the Christchurch Call is, in many ways, just the latest attempt by politicians to take a stronger leadership role in reining in these companies—which could help to explain the larger motive behind the Trump administration’s decision not to sign on.

“If you’re politically frustrated that France is getting the rhetorical support of these actions, one of the ways you can cede that initiative back is by simply refusing to get involved,” he says. “I really feel that there’s a sizable chance that this statement is about that kind of political one-upmanship.”

While the document does pay lip service to human rights and a commitment to a “free, open and secure Internet,” it also underscores the Internet’s responsibility to “act as a force for good, including by promoting innovation and economic development and fostering inclusive societies.”

This vagueness, O’Brien suggests, means that there’s likely a way of adopting the document without any threat of infringing upon freedom of speech rights. And while the White House’s justification for opting out of the agreement isn’t totally disingenuous—any U.S. administration talking about putting stronger controls on tech platforms would be inherently limited by the First Amendment—that argument hits a snag when it comes to the Trump administration’s own domestic approach to bringing social media platforms to heel.

Shortly after the Christchurch Call to Action was announced, the White House rolled out its own platform-targeted initiative: a Web form aimed at allowing users to report any instances of political bias they had experienced on sites like Facebook and Twitter. (Trump, for his part, has long used the specter of conservative suppression on social media as a bogeyman to rile his voter base.)

“Social media platforms should advance freedom of speech,” the copy accompanying the form reads. “Yet too many Americans have seen their accounts suspended, banned, or fraudulently reported for unclear ‘violations’ of user policies. No matter your views, if you suspect political bias caused such an action to be taken against you, share your story with President Trump.”

“They face exactly the same kind of First Amendment restrictions in forcing companies to either allow more conservative content or enforce some sort of fairness rule,” O’Brien says. “So the domestic approach also struggles under that same First Amendment analysis.”
