“Fake news” became the media’s favorite electoral boogeyman during the 2016 presidential election, and with good reason: Researchers from the Massachusetts Institute of Technology and Harvard University found that Facebook and Twitter indeed helped spread an unprecedented level of misinformation, feeding what the Columbia Journalism Review described as a “media network anchored around Breitbart [that] developed as a distinct and insulated media system, using social media as a backbone to transmit a hyper-partisan perspective to the world.” The wave of anti-establishment fake news that came crashing down on social media users in the months leading up to the election was very real.
Fortunately for lawmakers, Facebook founder Mark Zuckerberg, somewhere amid his seemingly endless string of meet-and-greets with middle America (the purpose of which is completely apolitical, according to Zuckerberg), seems intent on putting an end to the debate surrounding the social network’s ethical and moral standards for censoring content. Earlier this month, Zuckerberg announced that Facebook will crack down on fake news and misinformation ahead of Britain’s general election in June. He’ll do this, the New York Times reports, by “remov[ing] tens of thousands of possibly fake accounts” and “tweaking [Facebook’s] algorithms in the country to reduce the amount of misinformation and spam” that appears in its News Feed.
Facebook has been flagging “disputed” news stories since March, but the news of the algorithmic tweak came just one week after Zuckerberg announced that the social network would add 3,000 workers to its 4,500-strong community moderation force to review reports of explicit content, namely videos of murder, suicide, and torture, on its Facebook Live platform. (The company has been tackling content moderation in its News Feed for years, usually with the help of contractors who likely don’t have the health coverage to help them cope with what they see on the job.) Facebook’s old existential crisis appears to be over: By necessity, it is a publisher rather than simply a technology company, and it needs editors.
But this poses an interesting question: How does Facebook see the world? The company says its mission is to “give people the power to share and make the world more open and connected.” But it’s far from a neutral platform. In addition to individual products like its much-maligned “Trending News” module, which has swung from left-leaning editorial node to hoax whisperer, the platform’s fundamental design subtly shapes our behavior and, in turn, the structure of the digital social relations that reflect and augment our real-life ones, from the media we share to the “filter bubbles” we build for ourselves.
On Sunday, the Guardian lifted the veil on Facebook’s secret rules governing what its nearly two billion users can and cannot publish across the platform, releasing a cache of more than 100 documents that represents “the most comprehensive view so far into how the world’s largest publisher wields its censorship tools,” as Guardian reporter Julia Carrie Wong put it. But, according to the Guardian, many Facebook moderators “have concerns about the inconsistency and peculiar nature of some of the policies”; those on sexual content, for example, are said to be the most complex and confusing. A few select rules, from the Guardian’s report:
- Remarks such as “Someone shoot Trump” should be deleted, because as a head of state he is in a protected category. But it can be permissible to say: “To snap a bitch’s neck, make sure to apply all your pressure to the middle of her throat”, or “fuck off and die” because they are not regarded as credible threats.
- Videos of violent deaths, while marked as disturbing, do not always have to be deleted because they can help create awareness of issues such as mental illness.
- Some photos of non-sexual physical abuse and bullying of children do not have to be deleted or “actioned” unless there is a sadistic or celebratory element.
- Photos of animal abuse can be shared, with only extremely upsetting imagery to be marked as “disturbing”.
- Facebook will allow people to livestream attempts to self-harm because it “doesn’t want to censor or punish people in distress”.
The full corpus of the Guardian’s “Facebook Files” offers a fascinating look at how a technology company long in denial of its editorial responsibilities is suddenly grappling with its new role as the largest de facto censor of information on the planet, one that reportedly deals with more than 6.5 million reports regarding allegedly fake accounts each week. “Facebook cannot keep control of its content,” one company employee told the Guardian. “It has grown too big, too quickly.”
This puts the company in an interesting dilemma. By slowly attempting to take responsibility for its growing influence over the private and public political spheres, Facebook has opened itself up to influence from lawmakers and other government forces.
Indeed, Facebook was a recurring topic during a recent Senate Judiciary subcommittee hearing on Russian influence over the 2016 presidential election. Throughout the testimonies of former Acting Attorney General Sally Yates and former Director of National Intelligence James Clapper, members of the subcommittee repeatedly touched on the company’s growing role in warping the flow of information and misinformation.
In one instance, Senator Chris Coons (D-Delaware) asked Clapper whether the “longstanding Russian practice” of spreading disinformation had been perpetuated by the American media. Coons also claimed that “there was a significant amount of fake news, of manufactured articles, mixed in with, seemingly, actual e-mails that had been hacked.” Senator Sheldon Whitehouse (D-Rhode Island), questioning Clapper and Yates, said that “propaganda, fake news, trolls, and bots … were in fact used in the 2016 election.” The rules and guidelines that make up the de facto editorial apparatus colonizing the News Feed are suddenly in very high demand.
In March, Facebook and Twitter faced the threat of fines in Germany of up to $53 million for “not doing enough to curb hate speech on their platforms,” the Times reported. The aggressive digital guerrilla tactics of the so-called alt-right didn’t take hold during the French presidential election, but that hasn’t stopped newly elected French President Emmanuel Macron from pledging to “regulate the Internet” against the forces of fake news, or French prosecutors from launching investigations over “cyber misinformation campaigns.”
This, in some ways, may mark the beginning of the end of Facebook as a somewhat unfiltered reflection of our “real” social and political worlds and, in turn, of our best and worst impulses. Facebook has become “the most powerful mobilizing force in politics, fast replacing television as the most consequential entertainment medium,” as Farhad Manjoo wrote for the New York Times Magazine in April. “But over the course of 2016, Facebook’s gargantuan influence became its biggest liability.” Now all-powerful, Facebook has adopted the unilateral policing power of a state, a power that may one day control the flow of information at the behest of government.
There’s a hint of irony here: Twitter may have lost to Facebook in size and scope, but it doesn’t face nearly as much pressure from governmental organizations. Though Twitter frequently faces criticism for not doing anything about its Nazi problem, its inaction appears to be a safer strategy than Facebook’s too-little-too-late adjustments; by not giving an inch, Twitter can’t be taken a mile when it comes to content restrictions. In some ways, Twitter can claim to be the technology company that Facebook always wanted to be. Now that Facebook has defined the editorial view of the world that comes with the role of publisher, it has to defend those principles against government power, whether it wants to or not.