HAL 9000 may have had to issue commands and use brute force to get his way, but as technology has developed, growing faster and more reliable over time, it’s become better at persuading us, too. What’s less known is that there’s a term for the seemingly ubiquitous and invisible forces behind behavior change on our phones and computers.
Persuasive technology, which can take the form of apps or websites, marries traditional modes of persuasion—using information, incentives, and even coercion—with the new capabilities of devices to change user behavior. Persuasive technology can be found in mobile downloads or on the digital homes of tech giants like Amazon and Facebook, where behavior-oriented design persuades us to buy more often (one-click checkout) or stay logged in (manipulating social media news feeds). Many mobile apps that try to influence user behavior are either health-oriented—incentivizing weight loss, helping to manage addictions and other mental health issues, or influencing sleep practices—or environmentally minded, like Opower, which encourages energy conservation. Though it’s been around for a while, persuasive technology is becoming increasingly popular and profitable, inviting a deeper look into its ethics and efficacy.
Persuasive technology and its field of study, captology, were pioneered in large part by BJ Fogg, who began studying the use of persuasion in technology in the 1990s while a doctoral student in psychology at Stanford University. In 1998 he founded the Stanford Persuasive Tech Lab, a hub for the study and promotion of persuasive technologies. Among the lab’s ongoing projects are examinations of the psychology of Facebook and the Peace Innovation Lab, a project launched in 2010 to cast “a spotlight on how technology and emerging social behaviors and insights are promoting new paths to global peace.”
“Users are expected to accept the basic premise of the ‘correctness’ of the designers’ chosen end behavior; and the designer is not expected to have rigorously debated the preferability of this end behavior.”
Fogg, still the Tech Lab’s director and the author of several books on the power of persuasive technologies, including Persuasive Technology: Using Computers to Change What We Think and Do, also spends half of his time consulting and working on industry projects. Part of Fogg’s trademark is the simple, three-step Fogg Method: Get specific, make it easy, and trigger the behavior. In addition to consulting, Fogg offers “Behavior Design Boot Camps” for designers looking to emulate the success of popular persuasive technologies and apps like Instagram—whose co-founder was a student of Fogg’s. And, perhaps unsurprisingly, there are plenty of Fogg’s former students working in tech. His famous 2007 “Facebook class,” which pushed students to design and release Facebook apps at breakneck speed, launched the careers of many of its 75 students and made some of them big money before they’d even completed the course. Interviewed about the wild success of his class in a 2011 New York Times article, Fogg said of 2007, shortly after the rise of Facebook, that it was “a period of time when you could walk in and collect gold.”
It’s no longer 2007, but persuasive technology is still experiencing a boom concurrent with the rising prominence of Silicon Valley—and Fogg’s influence can be seen everywhere. The ninth International Conference on Persuasive Technology is being held in Padova, Italy, in just a few weeks, and persuasive mobile apps are continuing to rake in millions in investment, if not profit. But there are still many lingering ethical questions, especially regarding the line between persuasion and manipulation, and how to determine persuasive technology’s efficacy over the long term.
As J. David Bolter reminded us in his 1984 classic Turing’s Man, “Computers perform no work themselves; they direct work.” It’s important to remember who’s doing the persuading in persuasive technology. One of the first calls to examine the ethics of persuasive technology came in 1999, when Daniel Berdichevsky and Erik Neuenschwander took up the subject in a special issue of Communications of the ACM. In laying out a framework for evaluating the morality of a persuasive technology, the authors arrive at a golden rule: “The creators of a persuasive technology should never seek to persuade anyone of something they themselves would not want to be persuaded of.”
Individuals can engage with persuasive technology either willingly or unwillingly, and it’s the unwilling Berdichevsky and Neuenschwander are concerned about. A user might fall prey to Facebook’s native advertising and end up following unwelcome brands or products, or she might opt in to a mobile product that helps her stop smoking or reminds her to take medications. Here, the difference between persuasion and manipulation maps onto the difference between opting in and unknowing participation, but things get more complicated when a community, rather than an individual, is the target.
For Bran Knowles, a research associate at Lancaster University, the question of ethics arose while she was working on BARTER, an app that encourages members of the Lancaster, England, community to spend locally. BARTER doesn’t reward its users for buying from local businesses; it simply visualizes how money circulates within and flows out of the community, highlighting where money is leaving. As designed, it nudges users toward a community-focused approach to spending.
Because BARTER aims to influence an entire community rather than an individual, Knowles and her team wanted to be especially careful not to manipulate the app’s users. “The issue that it posed for us was that we couldn’t actually go and get consent from everyone in the community,” she says. “It was important for us that it wasn’t us forcing the community to do something against their will, and that they were somehow brought into a dialogue about what it is that we were trying to do and why.” To that end, Knowles and her team are currently interviewing business owners and members of their community to elicit feedback.
“For issues like local spending as a route to increasing the wealth of a community, that’s sort of controversial because it’s not been proven,” Knowles says. As compared to apps that help people lose weight or stop smoking, encouraging local spending as a way to retain wealth in a community is a more political goal, she says. “So you can’t really just design a behavioral solution because there’s no guarantee that that behavior is definitely preferable to the alternative.”
Though much of persuasive technology arises from the Fogg tradition, Knowles sees herself as directly critiquing his approach. In a recent paper on the ethics of BARTER, Knowles alleges that “the issue of whether the techniques of so-called ‘persuasion’ are indeed manipulative has escaped serious scrutiny within the computing community.” And though Fogg dedicates a chapter of his 2002 book Persuasive Technology to questions of ethics, searches for “ethics” and “manipulation” on the Stanford Persuasive Tech Lab website yield no results, and academics and researchers probing questions of ethics in persuasive technology are few and far between. Knowles chalks this up to the example set by Fogg, in which “users are expected to accept the basic premise of the ‘correctness’ of the designers’ chosen end behavior; and the designer is not expected to have rigorously debated the preferability of this end behavior.”
James Williams, a doctoral student at the Oxford Internet Institute studying the ethics of persuasive technology, also believes that “we need more research into what acceptable technological persuasion looks like, as well as a greater societal awareness about the persuasive mechanisms at play.” However, he says that identifying a technology as persuasive can lead to more than the usual amount of questioning and deliberation in creating an app or website. “Invoking persuasion brings the question of design goals into the foreground, inviting the users to think more about the purpose of the product than they would have otherwise,” Williams says over email. “As a result, more attention—and scrutiny—can be paid to questions that might otherwise have gone unasked.”
IN ADDITION TO ETHICS, there’s the question of whether persuasive technology actually works. Especially when it comes to measuring long-term behavior change, there’s little consensus that persuasive technologies have a big impact on their users’ lives. In an April literature review of 95 persuasive technology studies, 54.7 percent reported positive results, while another 37.9 percent reported partially positive results. As the study’s authors note, however, many of the studies employed a short time frame in evaluating their technology’s efficacy.
“The creators of a persuasive technology should never seek to persuade anyone of something they themselves would not want to be persuaded of.”
Do people actually stick with demanding apps that require them to change their behavior? It’s hard to tell. Kara Follmer, a 23-year-old consultant in Washington, D.C., tried out an app called Pact for two weeks before she called it quits. Pact uses financial incentives to help its users meet three kinds of goals: exercising more often, eating more fruits and vegetables, or consuming fewer calories. Each week, the user decides how many goals she wants to commit to—workouts completed or veggies eaten—and how much money she’ll pay the app for each goal she misses, between $5 and $50. At the end of the week, users who met their goals get paid by those who weren’t so successful. According to Marissa Window, Pact’s head of marketing, the app’s 550,000 users meet about 92 percent of their goals each week, and getting charged by the app usually isn’t a deterrent to continued use.
Follmer, who’s been trying to adjust to a slightly less active lifestyle after college, wanted to eat more fruits and vegetables, but Pact also “seemed like an easy way to make, like, a dollar or 40 cents a week, which I thought was worth a try.” But while meeting her goals proved manageable, remembering to use the app wasn’t. “I stopped using it because I realized that I would eat a fruit or vegetable at lunch and then I would forget to take a picture of it,” she says. Recording her exercise also proved difficult, and Follmer found herself cramming in workouts on Saturday nights to avoid getting charged by the app on Sunday. “I just realized that that made me feel too attached to the app in a way that wasn’t worth it to me.”
Knowles thinks Pact’s message—conflating money with exercise—is confusing, even if it initially provides a strong incentive for maintaining goals. The key to making persuasive technology more effective, according to Knowles, lies in identifying and appealing to the right values in users. “The theory, essentially, is that if you appeal to the wrong values in people, you’re not going to be as effective,” she says. She breaks down these values into two main groups: Self-Transcendent values—feelings of social justice and community, for example—and Self-Enhancement values—feelings of ambition and self-improvement. “Typically, persuasive technologies … tend to appeal to Self-Enhancement values, because they try to tell people why they should adopt a behavior because it’ll be in their self-interest,” Knowles says. “So you might get short-term behavior changes, but they won’t be long-term because they won’t be self-driven.”
The promise of money might get us on the treadmill, but it could take something more to build strong, lifelong exercise habits. If persuasive technology is going to have any chance of combating the forces of addiction, laziness, and other undesirable human traits, understanding how to harness it is key. Then again, as awareness of persuasive technology’s power continues to grow, it’s worth keeping an eye on who’s holding the reins.