In June of 2016, shortly before the Republican National Convention, Donald Trump’s presidential campaign hired the analytics firm Cambridge Analytica to revamp its data efforts. The company boasted of a “secret sauce” that yielded “psychographic profiles” that, as the New York Times observed in March of 2017, “could predict the personality and hidden political leanings of every American adult” for a political operation to analyze—and exploit.
Cambridge Analytica’s “secret sauce,” as it turns out, consisted of private information culled from more than 50 million Facebook profiles, according to a bombshell March 17th report from the New York Times, which labeled the operation “one of the largest data leaks in the social network’s history.”
But calling the incident an “existential crisis over privacy and data” is an understatement: As Alexis Madrigal pointed out in The Atlantic, Facebook has grappled with problematic data usage for the better part of a decade. (Facebook CEO Mark Zuckerberg infamously referred to early adopters of the site as “dumb fucks” for trusting him with their personal data.) In reality, the Cambridge Analytica fiasco is not some revelatory moment for Facebook and its users regarding the network’s role as the world’s foremost information broker, but rather a tipping point in how we think about the self in a digital world.
The sharpest insight into the fury unleashed by Facebook and Cambridge Analytica doesn’t mention either company once. Rather, it comes in the form of a lengthy New Yorker feature on how advances in the technology undergirding virtual reality (VR) are challenging our accepted conceptions of what constitutes the self—the “who and what we are,” as author Joshua Rothman puts it. VR confounds the “mind-body problem,” the question of how the conscious mind relates to the physical body, a staple of the philosophy of mind since Aristotle and Plato.
Mind-body dualism dictates that the mental being is “in here” while the physical self is “out there.” But questions of neurobiology make that whole proposition more complicated. Here’s Rothman with a relatively digestible distillation of German neurophilosophy professor Thomas Metzinger’s approach to the matter.
The instruments in an airplane cockpit report on pitch, yaw, speed, fuel, altitude, engine status, and so on. Our human instruments report on more complicated variables. They tell us about physical facts: the status of our bodies and limbs. But they also report on mental states: on what we are sensing, feeling, and thinking; on our intentions, knowledge, and memories; on where and who we are. You might wonder who is sitting in the cockpit, controlling everything. Metzinger thinks that no one is sitting there. “We” are the instruments, and our sense of selfhood is the sum of their readouts. On the instrument panel, there is a light with a label that says “Pilot Present.” When the light is on, we are self-conscious; we experience being in the cockpit and monitoring the instruments. It’s easy to assume that, while you’re awake, this light is always on. In fact, it’s frequently off—during daydreams, during much of our mental life, which is largely automatic and unconscious—and the plane still flies.
Two facts about the cockpit are of special importance. The first is that although the cockpit controls the airplane, it is not itself an airplane. It’s only a simulation—a model—of a larger, more complex, and very different machine. The implication of this fact is that the stories we tell about what happens in the cockpit—”I pulled up on the stick”; “I touched my jacket”—are very different from the reality of what is happening to the system as a whole. The second fact, harder to grasp, is that we cannot see the cockpit. Even as we consult its models of the outer and inner worlds, we don’t experience ourselves as doing so; we experience ourselves as simply existing. “You cannot recognize your self-model as a model,” Metzinger writes in his book Being No One. “It is transparent: You look right through it. You don’t see it. But you see with it.”
Rothman’s conclusion? On a neurological level, the gulf between the “outside” and the “inside” is wildly overstated, a kind of ontological self-deception. “Our mental models of reality are like VR headsets that we don’t know we are wearing,” he writes. “Through them, we experience our own inner lives and have inner sensations that feel as solid as stone.”
This is the heart of the Facebook problem. Mind-body dualism has its own intellectual descendant for the social media era in Nathan Jurgenson’s “digital dualism,” the idea that the Internet constitutes the “virtual” and the offline the “real.” It’s the a priori assumption—that the presentation of self on Facebook, like the body, is “out there” to be consciously controlled by the mind that’s “in here” or offline—that triggers the surprise people feel when, say, Facebook figures out you’re pregnant before you’ve told anyone.
But our deliberately crafted presentation of the self online, from privacy settings to meticulously arranged Instagram uploads, actually isn’t as deliberate as we may think. In The Presentation of Self in Everyday Life, the pioneering sociologist Erving Goffman likened public life to a theatrical performance, with costumes and props chosen for the right social context. But Facebook, like the Internet writ large, is anything but an alien or artificial context. “Individuals’ interactions with computers, television, and new media,” scholars Clifford Nass and Byron Reeves famously noted in 1996’s The Media Equation, “are fundamentally social and natural, just like interactions in real life.” Facebook isn’t just a mirror for our selves; it’s a VR headset for the presentation of self across political and geographical boundaries and across social contexts.
Indeed, Facebook was very much designed this way. Five years ago, well after anxiety over Facebook’s Orwellian data collection had become established dinner-table conversation, company design director Kate Aronowitz explicitly described the platform as “the perfect empty vessel,” designed to be filled with all the digital detritus that accompanies us as we move through our daily lives—and all the data that comes with it. Here’s a handy summary, per The Atlantic’s Madrigal:
“We tend to think of everything in terms of social design. The box, for us, is a vehicle to allow one person to communicate with another. It’s entirely about who’s on the other end of that box, not really the box itself,” Facebook designer Russ Maschmeyer told me…. “Our overarching design goal is to make that box as invisible as possible, so that your content is the thing that’s most important. …
“We want to create the space in which people can communicate the emotions, ideas, thoughts, wonderful things they find, beautiful images that they see in the most efficient and clean way possible,” he concluded.
Facebook allows for the full experience and expression of the self by melting into the background. Consider baseball: After enough practice, you don’t think through every individual movement of your body when you swing at a pitch; the body does what it does seamlessly, without conscious thought. Facebook, more than any other mental model, has created such an impeccable complement to social reality that we pour ourselves into it, with a seamlessness akin to that of a baseball swing.
This is the scariest element of Facebook’s growing role in modern political systems: It doesn’t just hoover up data—it also sets the terms for how and where the self is expressed. French philosopher Michel Foucault had his “medical gaze,” the mode of institutional examination that both observes and reduces subjects to state-defined clinical categories, categories at the root of governmental “biopower”; in the era of social media, we get the “Facebook Eye,” where we actively perceive the world in terms of presenting the self on Facebook. For centuries, government institutions determined the identities of their subjects and monopolized the power rooted in those definitions. Now, Facebook exercises this power by literally monetizing the self.