Many of the boundary lines in our lives are highly literal, and, for the most part, this is how we've been trained to think of boundaries: as demarcations shored up by barriers, physical, legal, or otherwise, that indicate exactly where one thing ends and another begins. Here is the border of your property; here is the border of your body; here is the border of a city, a state, a nation—and to cross any of these boundaries without permission is to transgress. But one of the most significant boundary lines in our lives is not like this, and one piece of ubiquitous technology is making it increasingly permeable and uncertain, at a cost we may only be starting to comprehend.
Here's a thought experiment: Where do you end? Not your body, but you, the nebulous identity you think of as your "self." Does it end at the limits of your physical form? Or does it include your voice, which can now be heard as far as outer space; your personal and behavioral data, which is spread out across the impossibly broad plane known as digital space; and your active online personas, which probably encompass dozens of different social media networks, text message conversations, and email exchanges?
This is a question with no clear answer, and, as the smartphone grows more and more essential to our daily lives, that border's only getting blurrier.
Michael Patrick Lynch, a professor of philosophy at the University of Connecticut and director of the school's Humanities Institute, which promotes interdisciplinary research, says that the notion of an "extended self" was introduced by the philosophers Andy Clark and David Chalmers in 1998.
"They argued that, essentially, the mind and the self are extended to those devices that help us perform what we ordinarily think of as our cognitive tasks," Lynch says. This can include items as seemingly banal and analog as a piece of paper and a pen, which help us remember, a duty otherwise performed by the brain. According to this philosophy, the shopping list, for example, becomes part of our memory, the mind spilling out beyond the confines of our skull to encompass anything that helps it think.
"Now if that thought is right, it's pretty clear that our minds have become even more radically extended than ever before," Lynch says. "The idea that our self is expanding through our phones is plausible, and that's because our phones, and our digital devices generally—our smartwatches, our iPads—all these things have become a really intimate part of how we go about our daily lives. Intimate in the sense in which they're not only on our body, but we sleep with them, we wake up with them, and the air we breathe is filled, in both a literal and figurative sense, with the trails of ones and zeros that these devices leave behind."
This gets at one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: We not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn't suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is.
At the beginning of his recent book, The Internet of Us, Lynch uses a thought experiment to illustrate how thin this boundary is. Imagine a device that could implant the functions of a smartphone directly into your brain so that your thoughts could control these functions. It would be a remarkable extension of the brain's abilities, but also, in a sense, it wouldn't be all that different from our current lives, in which the varied and almost limitless connective powers of the smartphone are with us nearly 100 percent of the time, even if they aren't—yet—a physiological part of us.
According to data released in 2017 by the analytics firm Flurry, American consumers spent five hours per day on their mobile devices, and showed a dizzying 69 percent year-over-year increase in time spent in apps like Facebook, Twitter, and YouTube. The prevalence of apps is a concrete example of the movement away from the old notion of accessing the Internet through a browser and toward the new reality of the connected world and its myriad elements—news, social media, entertainment—being with us all the time.
When Moira Weigel, a writer and junior fellow at Harvard University, was researching her book, Labor of Love: The Invention of Dating, she found that 2009, as the Facebook and Twitter mobile platforms were taking off and our social media identities became increasingly woven into our daily life, seemed to be a focal point in the transition away from separate notions of online and IRL. She points to online dating as a good example. Even if you didn't meet someone on an app, you wouldn't go out with them before checking out their Facebook profile or their Instagram. Our online identities had become a part of who we are in the world—whether we were aware of it or not.
"In the '90s and even through the early 2000s, for many people, there was this way of thinking about cyberspace as a space that was somewhere else: It was in your computer. You went to your desktop to get there," Weigel says. "One of the biggest shifts that's happened and that will continue to happen is the undoing of a border that we used to perceive between the virtual and the physical world."
The debate over what it means for us to be so connected all the time is still in its infancy, and there are wildly differing perspectives on what it could mean for us as a species. One result of these collapsing borders, however, is less ambiguous, and it's becoming a common subject of activism and advocacy among the technologically minded. While many of us think of the smartphone as a portal for accessing the outside world, the reciprocity of the device, as well as the larger pattern of our behavior online, means the portal goes the other way as well: It's a means for others to access us.
Most obviously, this can take the form of the omnipresent harassment that many people experience online, as well as more specific tactics, like revenge porn and the leaking of nude pictures; doxxing, or the revealing of someone's personal details; and swatting, the practice of calling a SWAT team to an individual's home under false pretenses.
Less clear to most people, however, is the extent to which the companies that make the technology, apps, and browsers we use are not just tracking but shaping our behavior. While this issue has recently come to the fore as a result of revelations like the Cambridge Analytica scandal, Weigel sees the unfettered access that what she calls the Big Five tech companies—Apple, Alphabet (the parent company of Google), Microsoft, Facebook, and Amazon—have to our data through our smartphone and browser use as a legitimate problem for notions of democracy. Thanks to the border-breaking nature of these technologies, and particularly the smartphone, the success of these companies has concentrated an unfathomable amount of wealth, power, and direct influence over the consumer in the hands of just a few individuals—individuals who can affect billions of lives with a tweak in the code of their products.
"This is where the fundamental democracy deficit comes from: You have this incredibly concentrated private power with zero transparency or democratic oversight or accountability, and then they have this unprecedented wealth of data about their users to work with," Weigel says. "We've allowed these private companies to take over a lot of functions that we have historically thought of as public functions or social goods, like letting Google be the world's library. Democracy and the very concept of social goods—that tradition is so eroded in the United States that people were ready to let these private companies assume control."
Considering the magnitude of the iPhone's impact, it's hard to believe that it came out barely over a decade ago. But while the influence of both the phone itself and the tech revolution as a whole can often feel irresistible—look no further than those usage numbers—there are measures that could be taken to help shore up our crumbling borders.
Tim Hwang, a writer and researcher in San Francisco who used to work as the global public policy lead for artificial intelligence and machine learning at Google, has thought extensively about how these devices serve the collective in addition to the individual. About a decade ago, he explains, the rhetoric around the Internet held that the crowd would prevent the spread of misinformation, filtering it out like a great big hive mind; it would also help to prevent the spread of things like hate speech. Obviously, this has not been the case, and even the relatively successful experiments along these lines, like Wikipedia, depend on a great deal of human governance to function properly. He says that the pessimism resulting from this realization has led us to hand power to the platforms so that they can regulate themselves, like allowing Facebook to tell us what's true and what's not, but that there is another approach to the way we actually exist in these spaces.
"Are there tools, are there designs we can put in place to allow communities to do a better job at self-governance?" Hwang asks. "Do we want to give more moderation to particular users? Does the platform want to grant users more power to control issues of harassment and hate speech, knowing that, in some cases, it might be over-applied?"
Weigel sees two potential opportunities for limiting the amount of influence the Big Five have on consumers. The first would be legal; she cites a growing body of work exploring possible antitrust suits designed to break up these companies. Writing in Logic, a magazine Weigel co-founded, K. Sabeel Rahman, an associate professor of law at Brooklyn Law School who writes about inequality and democracy in the modern economy, compares these potential efforts to those that broke up the industrialists in the late 19th and early 20th centuries. "Today, as technology creates new forms of power, we must also create new forms of countervailing civic power," Rahman writes. "We must build a new civic infrastructure that imposes new kinds of checks and balances."
The other option is for workers, many of whom entered tech for idealistic rather than financial reasons, to help regulate and restrict their own employers. Many have already begun to express regret over the effectiveness of their innovations, a phenomenon perhaps best exemplified by the Center for Humane Technology, led by former Google design ethicist Tristan Harris. But Weigel views these efforts with suspicion because they often follow the same playbook as the paternalistic, top-down design infrastructure that created these problems in the first place.
A more fitting example of positive change, Weigel suggests, took place in June, when Google employees successfully campaigned for the company to stop its work with the Pentagon on Project Maven, a program that improved the effectiveness of military drones.
"Reading the New York Times, especially until about six months ago, whenever this tech backlash started, I feel like you could be forgiven for thinking there were five people in the tech industry," Weigel says, laughing. "In fact, these are huge companies that employ tens of thousands of people, many of whom don't necessarily agree with everything the companies are doing. I think that engineers have enormous power to influence these companies for the better right now."
Lynch, the University of Connecticut philosophy professor, also believes that one of our best hopes comes from the bottom up, in the form of actually educating people about the products that they spend so much time using. We should know and be aware of how these companies work, how they track our behavior, and how they make recommendations to us based on our behavior and that of others. Essentially, we need to understand the fundamental difference between our behavior IRL and in the digital sphere—a difference that, despite the erosion of boundaries, still stands.
"Whether we know it or not, the connections that we make on the Internet are being used to cultivate an identity for us—an identity that is then sold to us afterward," Lynch says. "Google tells you what questions to ask, and then it gives you the answers to those questions."
And we should especially recognize this when it seems least clear: in those situations online that most closely seek to emulate the structures and dynamics of real life. Like, for example, your relationships. It isn't enough that the apps on our phones flatten all of the different categories of relationships we have into one broad group: Friends, Followers, Connections. They go one step further than that.
"You're being told who you are all the time by Facebook and social media because which posts are coming up from your friends are due to an algorithm that is trying to get you to pay more attention to Facebook," Lynch says. "That's affecting our identity, because it affects who you think your friends are, because they're the ones who are popping up higher on your feed."