How Should We Program Computers to Deceive?

Placebo buttons in elevators and at crosswalks that don’t actually do anything are just the beginning. One computer scientist has collected hundreds of examples of technology designed to trick people, for better and for worse.

Just outside the Benrath Senior Center in Düsseldorf, Germany, is a bus stop at which no bus stops. The bench and the official-looking sign were installed to serve as a “honey trap” to attract patients with dementia who sometimes wander off from the facility, trying to get home. Instead of venturing blindly into the city and triggering a police search, they see the sign and wait for a bus that will never come. After a while, someone gently invites them back inside.

It’s rare to come across such a beautiful deception. Tolerable ones, however, are a dime a dozen. Human society has always glided along on a cushion of what Saint Augustine called “charitable lies”—untruths deployed to avoid conflict, ward off hurt feelings, maintain boundaries, or simply keep conversation moving—even as other, more selfish deceptions corrode relationships, rob us of the ability to make informed decisions, and eat away at the reserves of trust that keep society afloat. What’s tricky about deceit is that, contrary to blanket prohibitions against lying, our actual moral stances toward it are often murky and context-dependent.

In recent years, it has become common to hear that technology is making us more dishonest—that the Internet, with its anonymous trolls, polished social media profiles, and viral hoaxes, is a mass accelerant of selfish deceit. The Cornell University psychologist Jeffrey Hancock argues that technology has, at the very least, changed our repertoire of lies. Our arsenal of dishonest excuses, for instance, has adapted and expanded to buffer us against the infinite social expectations of a 24/7 connected world. (“Your email got caught in my spam folder!” “On my way!”) But while it’s true, according to Hancock, that the Internet affords us more tools to help manage how people perceive us, he also says that people are often more truthful in digital media than they are in other modes of communication. His research has found that we are more honest over email than over the phone, and less prone to lie on digital résumés than on paper ones. The Internet, after all, has a long memory; what it offers to would-be deceivers in the way of increased opportunity is apparently offset, over the long run, by the increased odds of getting caught.

But the slight moral panic over technology-induced lying sidesteps another, more interesting question: What kind of lies does our technology itself tell us? How has it been designed to deceive?


The fake bus stop at the Benrath Senior Center is, in its way, a piece of deceptive technology: a “user interface” designed to perpetuate an expedient illusion. And it’s hardly the only example. Dishonest technology exists in various forms and for various reasons, not all of them obviously sinister. If you don’t know it already, you should: Many crosswalk and elevator door-close buttons don’t actually work as advertised. The only purpose of these so-called placebo buttons is to give the impatient person a false sense of agency. Similarly, the progress bars presented on computer screens during downloads, uploads, and software installations maintain virtually no connection to the actual amount of time or work left before the action is completed. They are the rough software equivalent of someone texting to say, “On my way!”
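To see how little a progress bar needs to know about the work it claims to track, consider this minimal sketch in Python. It is not pulled from any real installer; the timings, the easing constant, and the 90 percent ceiling are all illustrative assumptions. The displayed percentage advances on its own schedule while the real task runs in the background, then jumps to 100 when the work actually finishes.

```python
# A placebo progress bar: the number shown follows a fixed easing schedule
# that has nothing to do with the real task's actual progress.
import threading
import time

def real_work(done_flag):
    # Stand-in for a download or install whose duration the UI cannot predict.
    time.sleep(4.2)
    done_flag.set()

def placebo_progress(done_flag):
    shown = 0.0
    while not done_flag.is_set():
        # Creep toward 90% regardless of the actual work, slowing as the bar
        # fills so it never visibly stalls.
        shown += (90.0 - shown) * 0.05
        print(f"\rInstalling... {shown:4.1f}%", end="", flush=True)
        time.sleep(0.2)
    # The real task finished, so the bar snaps to completion.
    print("\rInstalling... 100.0%")

if __name__ == "__main__":
    done = threading.Event()
    threading.Thread(target=real_work, args=(done,)).start()
    placebo_progress(done)
```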

But these examples offer only a hint of what we’re liable to see in the near future. As more of our daily lives involve interacting with devices loaded with software, and as more of that software is designed to adjust to a dynamic environment and potentially even make predictions about a user’s behavior in order to serve up the best “value,” perhaps now is a good time to ask: How deceitful should our new technologies be?

“GOOD DESIGN IS HONEST.” So holds one of the Ten Principles of Good Design, a set of guidelines laid down by the iconic German industrial designer Dieter Rams in the 1970s. Today, Rams’ principles are printed up and sold on posters, and his most prominent admirer is no less than Jonathan Ive, the head of design at Apple. A good product, Rams’ guidelines continue, “does not attempt to manipulate the consumer with promises that cannot be kept.”

When honesty is prized so highly, thinking about deception in anything but reflexively negative terms can be difficult. Deceit, after all, is something a good designer doesn’t do. But is all dishonest design necessarily bad?

Last year, a paper trying to address that question was presented at a major conference on computer-human interaction in Paris. “Benevolent Deception in Human Computer Interaction” is the work of Eytan Adar, a computer scientist at the University of Michigan, and Desney Tan and Jaime Teevan, two scholars at Microsoft Research.

Adar says he became interested in deceptive technology when, as an undergraduate in computer science in the 1990s, he learned about the history of early telephone networks. In the 1960s, the hardware that made up the byzantine switching systems of the first electronic phone networks would occasionally cause a misdial. Instead of revealing the mistake by disconnecting or playing an error message, engineers decided the least obtrusive way to handle these glitches was to allow the system to go ahead and patch the call through to the wrong number. Adar says most people just assumed the error was theirs, hung up, and redialed. “The illusion of an infallible phone system was preserved,” he writes in the paper.

Since then, Adar has collected hundreds of examples of deceptive design, manifesting in a formidable stack of papers on his desk. As the stack grew, Adar discovered a spectrum of design falsehoods that mirrored what passes between ordinary humans every day: a lot of deception by designers, some of it benign and some problematic, and very little discussion about it. “It’s not clear that designers have a good grasp of how to make design decisions that involve transparency or deception,” he says.

Adar wanted to move away from treating deception in design as taboo and toward thinking more systematically about it, and to identify ways in which deceptive technology might help rather than harm us. He began looking for a clear line separating benevolent deception, which benefits the user of a technology, from malevolent deception, which benefits a system owner at the expense of the user. The goal of Adar and his co-authors’ paper was to showcase and classify examples that fall along this spectrum.

You’re probably familiar with malevolently deceptive software: the roaming online ads that trick you into clicking on them when all you really want to do is close them so you can read an article; the privacy settings on Facebook that, according to critics, rely on confusing jargon and user interfaces to trick people into sharing more about themselves than they intend. (This has come to be called “Zuckering,” after the company’s founder.) A website called darkpatterns.org is dedicated to tracking these kinds of tricks and abuses.

Pretty much everyone agrees that this sort of thing is rotten, and these malevolent deceptions have been well studied, mainly with an eye toward detecting and policing them. But many other varieties of deceptive design fly below the radar. One relatively benign class of examples occurs when a system fails in some way and a piece of software is programmed to cover up the glitch. The misdials of the early phone switching system fall into this category. Similarly, reports Adar, when the servers at Netflix fail or are overwhelmed, the service switches from its personalized recommendation system to a simpler one that just suggests popular movies.
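A rough sketch of that failover pattern might look like the following. None of this is Netflix’s actual code; the service call, the error types, and the canned list of popular titles are stand-ins. The point is only that the fallback is served silently, so the user never learns that anything failed.

```python
import random

POPULAR_FALLBACK = ["Popular title A", "Popular title B", "Popular title C"]

def personalized_recommendations(user_id):
    # Stand-in for a call to a personalization service that can fail under load.
    if random.random() < 0.3:
        raise TimeoutError("recommendation service overloaded")
    return [f"Personalized pick #{i} for user {user_id}" for i in range(1, 4)]

def recommendations(user_id):
    try:
        return personalized_recommendations(user_id)
    except (TimeoutError, ConnectionError):
        # Degrade silently: the user sees a plausible list, not an error page.
        return POPULAR_FALLBACK

if __name__ == "__main__":
    for title in recommendations(user_id=42):
        print(title)
```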

Designers of technology engage in another kind of relatively neutral deception when they manipulate users to behave in ways that will help improve system performance. Some speech-recognition software functions better if it can analyze a person’s normal speech, as opposed to the sort of halting robot-speak many people instinctively use when talking to a machine. Thus, designers attempt to make the software sound more like a person than a strict commitment to honesty in design would probably allow.

And then there’s more straightforward benevolent deception. Placebo buttons and other calming interfaces, like the digital signs that over-estimate wait times for lines at amusement parks, arguably fall into this category by giving people the illusion of control, or by soothing anxious nerves. Coinstar kiosks, the coin-counting machines stationed in Walmart and other stores, are rumored to take longer than necessary to tally change because designers learned that customers find a too-quick tally disconcerting. Another example: robotic systems designed to help people overcome their own perceived limits. Researchers have experimented with rehabilitation robots that under-report the force a patient exerts, to help her move past a sense of learned weakness and recover from injury faster.
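The under-reporting trick in those rehabilitation experiments is, at its core, a one-line deception. The sketch below assumes a made-up scaling factor of 0.85 and a plain numeric sensor reading; the published systems are more sophisticated, but the shape of the lie is the same: show the patient slightly less force than the sensor measured.

```python
def displayed_force(measured_newtons, understatement=0.85):
    # Report slightly less effort than the sensor actually measured,
    # nudging the patient to push a little harder than they think they can.
    return measured_newtons * understatement

if __name__ == "__main__":
    for reading in (12.0, 18.5, 25.0):
        print(f"measured {reading:5.1f} N -> shown {displayed_force(reading):5.1f} N")
```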


In the non-robot realm, your personal trainer might also use deceit for your benefit when she covers the treadmill display so you can’t see your running speed, spurring you to run faster than you thought possible. The term benevolent deception itself seems to have its roots in medicine, where it has been a matter of discussion for years. Some doctors believe that being too bluntly honest about a diagnosis can do more harm than good in some instances, so they omit some details or avoid direct answers.

It is telling that the idea of benevolent deception originated in a domain where, much as in technology, there is a huge asymmetry in information—and power—between users and providers. Doctors and tech workers have similar reputations for arrogance, and any known practice of benevolent deception might easily breed resentment in users and patients. But how much complexity do we want to be saddled with in the name of full disclosure, and how much can we safely, expediently navigate? Consider that the standard user interface on your computer—a desktop with folder and trashcan icons—is perhaps the most familiar deception of all, hiding a universe of code behind a simple, “usable” facade.

ADAR’S SIMPLE TAXONOMY OF deception bears some resemblance to that of Thomas Aquinas, who claimed there were three types of lies: malicious lies (meant to do harm; mortal sins), jocose lies (told in fun; pardonable), and officious lies (helpful; pardonable)—a hierarchy that is itself a simplification of St. Augustine’s eight types of lies, established nearly a thousand years before. Separated by centuries, these systems are all attempts to schematize the complex emotional and social landscape of deception in human affairs.

Human-computer affairs are not so different. Software that always deceives in ways we can detect repels us, just as people who always lie do. Software that is erratic about when it tells the truth and when it deceives may breed mistrust and annoyance. And software that deceives in a way that benefits the person using it may be as easily forgiven as a personal trainer who’s helping you get in shape. It’s not hard to understand that the designer behind a workout program or physical therapy robot is looking out for your own good. Besides, it seems easy enough to tweak the settings if you don’t like the lies.

But deceptive technology is liable to evolve. Since the advent of computers, people have grown accustomed to being in charge of their machines: You type on a keyboard or click a mouse and the computer responds. Sure, it may increasingly seem like we are the ones who are programmed to react to the beeps and buzzes of our devices. But in most cases, each interaction with a computer starts with an input from us (we are the ones who opted to receive those “push notifications”) and ends with us. Right now, the computer, phone, or robot is simply an intermediary, a messenger—a conduit for a human-human interaction.

In the future, true artificial intelligence systems will alter the game significantly. They will make mistakes and recover and learn as a human would. And ultimately they will be able to scan their environment for contextual clues about how to behave and respond. For instance, engineers working on partially self-driving cars are busy envisioning how a human operator might best share responsibility for driving with the car itself. Here’s one possibility: The car’s software may use embedded sensors to look for biological cues (variations in heart rate, skin conductance, eye movement) that indicate distraction or impairment in a human driver, and then take over if those cues are detected.
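In outline, that handover logic is not mysterious. Here is a hedged sketch of one way it could be wired up; the cue names, the weights, and the threshold are invented for illustration and come from no real vehicle, but they show how a handful of biological signals might be folded into a single decision to take the wheel.

```python
from dataclasses import dataclass

@dataclass
class DriverCues:
    heart_rate_variability: float  # normalized 0..1; low suggests stress or impairment
    eyes_on_road_ratio: float      # fraction of recent time the gaze was on the road
    skin_conductance: float        # normalized 0..1; high suggests agitation

def distraction_score(cues: DriverCues) -> float:
    # Weighted blend of cues; higher means more likely distracted or impaired.
    return (0.4 * (1.0 - cues.eyes_on_road_ratio)
            + 0.3 * (1.0 - cues.heart_rate_variability)
            + 0.3 * cues.skin_conductance)

def should_take_over(cues: DriverCues, threshold: float = 0.6) -> bool:
    # The car assumes control when the blended score crosses the threshold.
    return distraction_score(cues) > threshold

if __name__ == "__main__":
    drowsy = DriverCues(heart_rate_variability=0.2,
                        eyes_on_road_ratio=0.4,
                        skin_conductance=0.7)
    print("hand control to the car?", should_take_over(drowsy))
```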

In other words, engineers are already thinking about how machines might sense a person’s state of mind.

That’s what we do when we suss out whether it might be best to fudge the truth with someone we care about. It’s what children are doing when, as they begin to formulate a “theory of mind” at age three or four, they tell their first semi-competent lies. And it’s what a doctor does when she must tell her patient he is dying. Since each patient is different, the doctor must intuit the patient’s range of responses to various versions of the news and then select the best one for this patient. The doctor then makes the decision to redirect the conversation, to gently administer the blow, or to be blunt. We call this having a good bedside manner. In contexts far beyond medicine, something like it will be important for artificial intelligence systems to learn. A good AI system will be able not just to reach logical conclusions, but to present them in a sensitive way.

In Albert Camus’ novel The Stranger, the main character Meursault is, according to the author, “a hero for the truth,” unable or unwilling to lie. During the trial in which Meursault is accused of murder, the prosecutor argues that his brutally honest demeanor is that of a “monster, a man without morals.” To be unyieldingly truthful, then, is to become a sort of inhuman grotesque.

It is an uncomfortable truth that, if the goal is to make artificial intelligence as human-like as possible, these smart machines will, almost by definition, have to be programmed to know when to be honest—and when to lie.
