It’s 10 P.M. Do You Know What Your Avatar Is Doing?

The psychologist Jeremy Bailenson’s quest to prepare us for the coming virtual world

IN THE 1982 SCIENCE-FICTION NOVEL Software, an elderly character named Cobb Anderson trades in his frail human body for an android avatar and then sets out on an unusual mission: to start a cult. The old man’s new body allows him to alter his appearance at will, which turns out to be handy for gathering disciples. To gain trust and devotion, Anderson meets with his initiates one at a time—and then changes his face to resemble theirs. “I always use this trick on the recruits,” he says with a chuckle.

A few years ago, a research psychologist at Stanford University named Jeremy Bailenson effectively proved the soundness of Anderson’s recruitment methods. A week before the 2004 presidential election, Bailenson asked a bunch of prospective voters to look at photographs of George W. Bush and John Kerry and then give their opinions of the candidates. What the voters didn’t know was that the photographs had been doctored: each voter’s own visage had been subtly morphed together with that of one of the candidates.

In this and two follow-up experiments, Bailenson found what Rudy Rucker, the novelist who wrote Software, would have predicted: voters were significantly more likely to support the candidate who had been made to look like them. What’s more, not a single voter detected that it was, in part, his or her own face staring back.

In another experiment, Bailenson outfitted college students with head-mounted virtual-reality displays and then sat them across a digital table from an artificial-intelligence agent—a computer program with a human face. The students then listened as the “agent” delivered a short persuasive speech. When the agent was programmed to mimic a student’s facial movements on a four-second delay—a tilt of the chin, a look to the left, a downward glance—the students found it more likeable and compelling. And like the prospective voters, the students showed no sign that they knew they were being mimicked. Nothing, it seems, is more persuasive than a mirror.

The similarities between science fiction and Bailenson’s research are not accidental. Before he began studying avatars and virtual reality in the lab 12 years ago, Bailenson dabbled in writing science fiction himself, and he is still an avid reader of the genre. (Cobb Anderson’s recruiting trick from Software, for instance, has shown up in Bailenson’s academic writing.) Above all, he credits Neuromancer—the seminal 1984 novel by William Gibson, which takes place largely in a virtual-reality landscape populated by avatars—as his inspiration. He reports that he has designed major laboratory experiments to test Gibson’s fictional ideas, and that he frequently relies on Gibson’s language to crystallize the central themes of his research. “Neuromancer has been my crutch,” he said in a 2011 interview with the British novelist Matt Rees.

This may not sound like the most promising basis for a serious career in the social sciences. But over the past five years or so, Bailenson has seen his research funding skyrocket. A gregarious guy with shaggy brown hair and a laid-back surfer’s demeanor, he receives more calls these days than he can handle from political consultants, defense contractors, government agencies, environmental foundations, health-care providers, and marketing companies seeking his advice. That’s because, nearly 30 years after Neuromancer was published, the state of technology and society is—in ways quiet and not so quiet—catching up with science fiction.

Gaming is only the most obvious frontier: today, kids ages 8 to 18 spend more than an hour a day, on average, in the virtual worlds of video games. Upwards of nine million people play World of Warcraft, spending on average some three hours a day interacting in its massive multiplayer online world as goblins, blood elves, and dwarves. Meanwhile, more than a billion people—one-seventh of the world’s population—spend time inhabiting a different sort of avatar: the strategically curated, idealized public self of a Facebook profile.

The virtual-reality hardware available to consumers has been rapidly evolving too. Microsoft’s Kinect—an infrared sensor for the Xbox that allows the game system to track a user’s body movements—quickly became the fastest-selling consumer electronic device in history after it was released at the end of 2010. And in recent months, a company called Oculus has generated nearly $2.5 million in Kickstarter funds to develop its much-hyped virtual-reality headset, the Oculus Rift, which stands to be the first mass-market device of its kind. If the company succeeds, it will knock down one of the most significant technological barriers between labs like Jeremy Bailenson’s and your home entertainment center.

As a scholar who has spent his career working at the intersecting frontiers of social psychology and virtual reality, Bailenson has emerged as the go-to guide to this new world. And the psychology is every bit as important as the technology. In recent years, a growing body of research has shown that, contrary to behavioral models that portray humans as driven by calculating, rational self-interest, we are in fact shot through with a bunch of irrational tendencies—what psychologists call “cognitive biases.” We’re instantly fond of people who resemble us; we respect people who are taller than we are; we pay attention and our heart rates soar when someone establishes eye contact. What Bailenson’s impressive body of work shows is that the technologies of virtual interaction offer a powerful tool not just to explore those quirks and biases—but also to strategically exploit them, for good or for ill.

ON A RECENT AFTERNOON, I found myself standing on a wooden plank atop a 10-story building, peering over the edge at blue sky and cityscape. I’d been raised up from the ground floor; below and behind me, in the interior of the building, was a deep pit. At the coaxing of Bailenson—whom I’d just met—I turned and walked from one end of the plank to the other, tentatively leaned forward, and stepped off into the void.

There was a whoosh, and I waved my arms as I plummeted into the depths of the building. Once I landed, I felt nauseous, as if my brain had been left back on the roof, and I laughed uneasily.

Bailenson chuckled too, and his voice came from somewhere to my right. “You’re very brave—many people won’t do that,” he said. “The point of that is to make you experience something called ‘presence.’ You know there’s no hole in that floor, but it was hard for you. You hesitated before jumping off.”

Presence is a word psychologists use to describe the subjective sense of “being there”—the state in which the brain accepts the illusion of a virtual environment. In reality, I was standing in Bailenson’s sleek Virtual Human Interaction Lab—which had no hole in its floor—wearing an ungainly goggled headpiece. I had known the scenario wasn’t real, but had a hard time making myself act as if it were a fabrication—and so has pretty much everyone else who’s ever gone through the simulation.

An experimental subject walks out on a virtual plank over a digital abyss in Jeremy Bailenson’s lab.

This gets straight at the heart of one of the fundamental discoveries that Bailenson and others have made about life as a digital avatar. The term virtual reality has been in popular circulation since the late 1980s, and more than a decade has gone by since we first saw Keanu Reeves learn virtual kung fu in The Matrix; anyone whose expectations of virtual reality are defined by pop culture may have a certain “Where’s my jetpack?” impatience with the technology. After all, here in the real world, the basic equipment of VR hasn’t changed all that much in 15 years: you’re still using an expensive head-mounted display trailing a snakelike bundle of wires connected to a computer in a laboratory to experience a virtual world made of pixelated graphics. But what the pit demonstration shows is that, in many of the ways that matter most, video-realistic graphics simply aren’t necessary to fool us.

Bailenson has run at least 3,000 people through the same simulation he put me through. The graphics are less sophisticated than those of a Nintendo Wii game. Yet he estimates that one in three people can’t even bring themselves to walk within a meter of the edge of the virtual pit. In countless ways, the brain fails to make distinctions between virtual and real cues.

This simple fact explains why VR has become such an effective and widely used training tool on military bases, in architecture firms, and in medical schools. And why, in academia, virtual-reality headsets have become a tool to explore all sorts of fundamental questions about the mind. Scientists who study perception, for instance, use the headsets to easily manipulate visual cues. The brain gives up a lot of its secrets when you can reliably fool it in targeted ways.

But Bailenson has taken his own research in a different direction. He has always been interested in virtual reality for its own sake: not just as a tool for understanding normal human functioning, but as a new social habitat—one that might actually transform human functioning.

When Bailenson was a postdoc at the University of California at Santa Barbara in the early aughts, he and a graduate student named Andy Beall would often go out surfing before dawn. At the time, the two worked in the lab of the social psychologist James J. Blascovich, a major pioneer in virtual-reality research. Bobbing in the Pacific, Bailenson and Beall would argue about the proper parameters of VR research. “It would be me trying to get whatever we were working on past hard-nosed editorial reviewers, and him throwing out wonderful crazy ideas, beyond what those methods would allow,” recalls Beall, who is now the CEO of WorldViz, a company that makes head-mounted virtual-reality displays.

Bailenson’s interests did indeed stray somewhat from the traditional confines and concerns of psychology; his affiliation today is with Stanford’s Department of Communication. From that perch, Bailenson has, perhaps more than any other American scholar, explored the behavioral implications of an increasingly virtual world.

WE’VE LONG BEEN ABLE to contemplate our reflection. Our caveman ancestors saw themselves in water; we’re used to glimpsing ourselves in glass. And for some time now, we’ve been able to observe ourselves acting asynchronously—not in real time, but in a home movie or a video that plays back our past actions. Now, for the first time ever, we can look at images of ourselves doing things we’ve never physically done, and we can inhabit avatars that look nothing like us. The implications, it turns out, are not easy to predict.

In the early days of online virtual communities, some scholars thought digital worlds would help us transcend all the stereotypes and hang-ups that the flesh is heir to. In 1995, the MIT sociologist Sherry Turkle suggested that online social worlds would make racism—an artifact of the physical realm—obsolete. But stereotypes are still a powerful force in virtual reality: studies by Bailenson and others have found that having a white person walk in a black avatar’s shoes doesn’t engender empathy, but actually primes racist thoughts. Likewise, people with overweight avatars in Second Life still get teased. (And if they aren’t given the option of simply switching avatars, they have been known to spend hours in a virtual gym trying to slim down.)

What’s more, Bailenson has found that even a little bit of time spent in these virtual situations has behavioral effects that linger long after you remove your head-mounted display. A taller avatar increases your confidence, and the boost carries over into the physical world; a better-looking avatar makes you more likely to act the role of the social butterfly in real life.

When examining how virtual reality might augment human capacities—one of his pet interests—Bailenson has looked for ways the technology might be used to prime good behavior, either in the form of a paternalist “nudge” or as a vehicle for plain self-help. In one study, for instance, Bailenson’s team found that if you met your older self, you’d save more money for the future than you would otherwise. Research has shown that Americans save insufficiently for retirement, in part because they feel little connection to their future selves; the result is a huge collective burden downstream, when seniors run out of money and earning potential. Bailenson found, however, that college students who were introduced to avatars of themselves morphed to look like senior citizens—who basically looked into a virtual mirror and were confronted with vivid representations of their 70-year-old selves—were motivated to put aside twice as much money for retirement. “You can’t age in a mirror in real life,” Bailenson says. “But in this case, you can.”

In other studies out of Bailenson’s lab, subjects who were shown recognizable avatars of themselves exercising and rapidly losing weight later voluntarily exercised more; subjects who saw themselves being sedentary and steadily gaining weight were also motivated to hit the gym. Subjects who cut down a tree in virtual reality—who felt the chainsaw buzz in their hands, watched the trunk fall, and felt the vibrations of its collapse on the forest floor—later used 20 percent fewer paper napkins.

But for all the ways virtual reality and avatars might conceivably be used to stoke good behavior, it’s just as easy—if not easier—to imagine ways they might be used for commercial or political ends. This is clear enough from the line of consultants, military types, and marketing professionals beating a trail to Bailenson’s door. On just one of the days he and I spoke, Bailenson gave a lab tour to a defense contractor and fielded a request from an advertising agency eager for him to “partner” on its latest campaign for Prudential. (He turned the agency down.)

Bailenson’s body of research has plenty in it to whet the corporate appetite. In one experiment, he and another scholar, Sun Joo Ahn, explored what would happen if companies started grabbing images of consumers and incorporating them into tailored advertisements. “When you see yourself in advertising using a product you’ve never touched, and loving it—we call this ‘self-endorsing’—does that make you like the product later on?” Bailenson asks. “We found the answer to be yes.”

When an engineer in Bailenson’s lab, Maria Jabon, went to work for LinkedIn in 2010, she invited Bailenson to the company to give a talk. His lecture inspired LinkedIn to develop one of its most successful ad campaigns. The company’s “Picture Yourself” ads pair your actual profile photo with a job description at a company that wants to advertise itself in an aspirational light (for example: “Bonnie Tsui, engineer at Research in Motion”). Though Jabon wasn’t at liberty to give me specific click-through rates, she said the ads are so successful that LinkedIn runs them continuously, mostly for large companies that are looking for employees or followers.

But the ad campaign crossed a creepiness threshold when LinkedIn later ran ads that showed profile photos of other people in your social network who work for or follow a company—as if to use your friends and contacts as mini-celebrity endorsers. The New York Times called the move a “social ad misstep”; the ads garnered such a negative popular response that LinkedIn shut them down.

FOR A GUY SO IMMERSED in thinking about virtual reality, Bailenson is surprisingly unengaged with it outside of work. He doesn’t have a Facebook account, and he doesn’t play games online. He says his distance from the medium allows him to see it more clearly as a social scientist. As a teenager growing up in upstate New York, he was steeped in the New York punk and heavy-metal scene; he also spent a lot of time hiking outdoors. Those deeply immersive activities, he says, informed his view of technology.

“Virtual reality has never been about the technology for me—it’s always been about the experience,” he told me on a visit I made to his home, in a sunny development near the Stanford campus. “It’s about being somebody else in a different place, or doing something that somebody can never do. It’s about the transportation.” He admits that he’s waiting for something more immersive than the current technology—something truly worthy of William Gibson’s vision of a “consensual hallucination.”

“There’s no discussion of tech in Neuromancer—it’s seamless,” Bailenson explains. “So maybe that’s why I’m not a World of Warcraft guy for 25 hours a week.”

In many ways, Bailenson’s research still reflects an eagerness to see the real world catch up with his imagination. Just two years ago, Bailenson and Carrie Leonetti, a professor at the University of Oregon School of Law, wrote a paper for the Marquette Law Review in which they floated the idea of conducting parts of court trials in virtual reality—strapping jurors into head-mounted displays and then guiding them through immersive crime-scene reconstructions. “Right now they do use digital representations: dioramas, 2-D stuff,” he explains. “Why not make one that’s accurate and makes you feel as if you are there?”

But lately the speed at which virtual reality is infiltrating mainstream life has surprised even Bailenson. “I knew it was going to happen eventually, but I thought I was going to be gray and old by then,” he says with a laugh. “There were a couple of years when we were only a handful of people who could build avatars of you,” he tells me. “Now anyone can do it.” On a recent visit to Microsoft headquarters, he was stunned as engineers waved a Kinect game console around him and built an instant, high-quality 3-D avatar—using a device that, as of a year ago, had been sold to more than 18 million people.

Bailenson’s thoughts about the implications of virtual reality have increasingly been shaped by what he learns about the knotty, fragile nature of humanity itself. “VR is an amplifier—it allows you to access the most amazing and the most horrific things,” he says. “In the beginning, we were just so excited in the lab”—he mentions his experiments in influencing prospective voters with doctored photographs, and on the powers of persuasion through mimicry. “It’s only in the last few years that I’ve decided that every study we do needs to be viewed with an ethical lens.” That means fewer studies on the demagogic and political uses of virtual reality, and more on how to use it to conserve paper, lose weight, save for retirement, and the like. He’s making an orchestrated effort to balance the ledger of his work.

In 2011, Bailenson’s research was briefly cited in a Supreme Court ruling by Justice Samuel Alito, in a concurring judgment for a case determining whether violent, immersive video games count as First Amendment–protected speech. The majority opinion deemed that such games are protected, and concluded that playing them is not different “in kind” from reading about violence in a book. On this latter point, Alito disagreed: “We should not jump to the conclusion that new technology is fundamentally the same as some older thing with which we are familiar.”

Bailenson agreed with the justice. “We need to think about this technology in a qualitatively different way,” he says. That might mean eventual changes in law or regulation. But until then, Bailenson sees his role as that of an intellectual guide. “My purpose is to inoculate,” he said in a recent talk at the offices of Google: to prepare people for the virtual tricks and traps about to be set for them.

For instance, these days Bailenson advises his students to post only low-resolution photos of themselves on Facebook. Why? Because it’s now possible for a third party to surreptitiously build an avatar of you using high-resolution images. And there could soon come a day when bots and spiders and phishing scammers are also avatar-builders. As Bailenson sees it, there’s going to be an arms race—a sensorially overwhelming extension of the war between spammers and our inboxes. As soon as you get wise to the fact that political ads are stealing your face, say, or that marketing bots are automatically mimicking your movements while trying to sell you something, then they’ll find a way to mimic you more subtly. Then you’ll have to get wise again. And before long, it will be passé to remark on how much this all feels like something out of science fiction. ★
