In late July, the Federal Trade Commission issued its largest-ever fine, $5 billion, against Facebook for violating a 2011 privacy settlement. But the amount is only about a month's worth of the company's revenue, suggesting that the fine, while it sounds large, is in fact rather modest.
More significantly, Facebook is required to have an "outside assessor," a sort of privacy cop, to monitor the company's handling of user data, along with meeting a few other corporate procedural requirements. That assessor could address the fundamental problems with the way Facebook operates. But as a scholar of technology companies' business practices, I'm worried that this potentially all-important role is set up for failure.
In my opinion, to be effective, the FTC's newly designated cop would need to look out for three main privacy-related concerns: the potential for genuine violations of users' privacy; the targeted spread of harmful content, especially when it results in election manipulation and ethnic violence; and the collection and harvesting of far more data than is warranted to provide services to users.
An independent assessor will lack the standards, the regulatory and legal guidelines, and the insight needed to actually monitor how Facebook handles those three issues. This makes the privacy cop's job much harder than that of a regular cop or, say, a financial auditor.
Protecting Users' Privacy
Facebook's history of privacy violations extends well beyond the most publicized ones, like letting Cambridge Analytica access the personal data of 50 million users to craft micro-targeted political ad campaigns.
Facebook has secretly shared data with other companies for years, without notifying the users. That practice, as well as the function that lets users sign in to other websites and apps with their Facebook login, has helped advertisers follow their targets around the Internet. The company has also used its trove of user data to gain a competitive advantage in business negotiations, boosting its own profits without compensating the users themselves.
The FTC ruling gives the privacy cop no clear guidance on which data-sharing or data-withholding arrangements between Facebook and other companies are legitimate and where they cross a line. This is because there are still no internationally agreed-upon data protection rules, and few clear regulations in the United States to compare Facebook's actions against.
Facebook's business model uses its treasure trove of user data to target advertising, the source of almost all the company's revenue. An outsider will be unable to tell the difference between legitimate business practices that harvest user data to increase profits and problematic abuses that violate users' privacy. In fact, FTC Commissioner Rohit Chopra, who dissented from the decision, declared that the new settlement still "allows Facebook to decide for itself how much information it can harvest from its users and what it can do with that information."
Blocking Harmful Content
Facebook has struggled to limit harmful content on its networks, such as that which fed ethnic violence, distributed misinformation, or facilitated election interference. Personal data helped the perpetrators target their messages to certain groups of Facebook users.
The outside assessor will be focused on privacy, which means that identifying, verifying, and policing content will be beyond the assessor's mandate. Ironically, steps to enhance privacy, such as ensuring end-to-end encryption across all of Facebook's messaging platforms—as Mark Zuckerberg intends to do—would help in protecting the identity of the spreaders of harmful messages, rather than exposing them and their actions.
Protecting Users From Giving Up Too Much
Access to Facebook seems free, because it costs no money, but users pay with their data. The assessor should ask if the users are being charged fairly, in privacy terms, for the service they're receiving. That raises the question of what a "fair" price is for what Facebook provides.
Normally, price is set by a competitive market, where customers can choose from a range of service providers. Not so on Facebook, where there are high costs—again, not financial, but in terms of time and effort—to leaving, and no other option offering equivalent services.
A social science phenomenon called the "network effect" means that any network becomes more valuable as more people join it, which also makes it increasingly hard to leave. There are now more than 2.3 billion Facebook users around the world. For too many people, their most active online social connections are on Facebook.
It's hard to leave Facebook, and not only because there are so many users. Many customers use their Facebook logins on thousands of other apps and services. If they delete their Facebook accounts, they lose access to those other services too, along with things like customized Spotify playlists and Netflix viewing preferences. Worse still, Facebook has bought up many of its competitors. Lots of people who quit Facebook shift over to Instagram, which is owned by Facebook.
Looking to the future, the company is making the price of leaving Facebook even higher. It plans to consolidate its data-collection power by integrating its various apps, including Facebook Messenger, Instagram, and WhatsApp, and by launching a proposed digital currency for transactions conducted on Facebook platforms. All of these moves tilt the playing field in favor of an all-encompassing single parent company, limiting users' choices and making switching difficult. No assessor can remedy the inherent unfairness of that imbalance.
Far more than the fine, the centerpiece of the FTC deal is the outside assessor. If properly designed, this role could be truly game-changing: a forceful privacy cop setting the standards for how the power of big technology firms is managed from here on. But the fine is a slap on the wrist, and the cop's hands are tied and don't reach far enough. This sets a very bad precedent: Both the FTC and Facebook can declare a victory of sorts, while the consumer loses.