The Singularity Could Destroy Us All

In his new book, Nick Bostrom charts the near-inevitable rise of superintelligence. The future does not look bright.

Futuristic thinking can be embarrassingly over-imaginative, or embarrassingly under-imaginative. Some visions of the future have elements of both: Back to the Future Part II, set in the then-distant future of 2015, features far-fetched scenes of cars flying and the Cubs winning the World Series, but also scenes of fax machines, pay phones, and laserdiscs. Predicting the future of technology is incredibly hard in part because we don’t even know what’s possible, let alone what will actually happen. To make a prediction, we need to start from at least a few stable background assumptions—and when we can’t rely even on these, often the only honest prediction is just a humble shrug.

Of particular interest to forecasters like Nick Bostrom, a philosopher at Oxford, is the distant possibility of “superintelligence”—an artificial mind, perhaps a robot or an enhanced human brain, greater than normal human minds in every measurable way. Superintelligence would have such profound repercussions that its advent has been dubbed the “Singularity,” and it is awaited in certain nerd circles with the messianic zeal normally reserved for cold fusion or a new and even worse Star Wars prequel. The Singularity would happen in three stages:

1. Humans create an artificial superhuman intelligence.

2. That intelligence, being smarter than we are, improves on our design and creates a new version of itself that’s even smarter.

3. Who knows? But it probably doesn’t end well for us.

Bostrom considers the Singularity potentially catastrophic, and his new book, Superintelligence: Paths, Dangers, Strategies—he calls it a “magnum opus”—is an attempt to chart it and make us aware of its dangers. He enumerates types of superintelligence, ranging from brain emulation (think Johnny Depp in Transcendence), to synthetic artificial intelligence, or AI (Scarlett Johansson in Her), to biological enhancement through eugenics (Jude Law in Gattaca). Considering all cases simultaneously, he argues that superintelligence is possible—perhaps even inevitable, and not too far off; that it has the potential to develop extremely rapidly; and that it might kill us all, depending on its goals and how we factor into them.

Seeding the system with the right values, so that its goals have the best chance of not proving catastrophic for us—the “control problem,” as Bostrom calls it—occupies the bulk of the book and receives its most technical treatment. The stakes, as he describes them, are the survival of the human race, and he argues that we should weigh the ethical and practical implications of superintelligence at least as carefully as we consider, say, the implications of thermonuclear warfare. And we have only one chance to influence its development before the superintelligence takes over. He imagines a scenario in which superintelligence has converted the entire explorable universe into a computing machine, solely to run simulations of people and torture them forever. Bostrom, who does not think small, estimates the number affected at 10 billion trillion trillion trillion trillion human souls but concedes, darkly, that “the true number is probably larger.”
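For readers keeping count—and assuming the standard short-scale values of a billion ($10^{9}$) and a trillion ($10^{12}$)—that mouthful works out to

\[
10 \times 10^{9} \times 10^{12} \times 10^{12} \times 10^{12} \times 10^{12} = 10^{58}\ \text{souls}.
\]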


To make any predictions about superintelligence, let alone precise ones, Bostrom has to bootstrap his way out of the inherent problems of uncertainty regarding things that do not exist. He uses maneuvers that resemble Pascal’s wager and the ontological arguments for the existence of God, in that they reach grand conclusions on the basis of no empirical evidence. Consider Bostrom’s “instrumental convergence thesis”—that though we may not know what motivations a dominant superintelligence will ultimately possess, we can surmise that some specific intermediate goals are likely, because many possible superintelligences would want them. Thus, for example, we may assume that a superintelligence will want to acquire resources, because resources are strategically important for achieving goals regardless of what those goals are.

His main intellectual gambit, deployed time and again, is to punt difficult questions to the superintelligence itself. We may not know how to make the AI do our bidding, he says, but the AI itself probably does. Instead of worrying that we'll load it with a value system we come to regret—telling it to “maximize our pleasure,” say, only to have it implant electrodes in our brains to stimulate our pleasure centers endlessly—he suggests we tell it to pursue the values we would have asked it to pursue if we were as smart as it is. If the AI has any doubt, it should either make an educated guess or execute a controlled shutdown. How would we know that the superintelligence has estimated our desires correctly? Because it would tell us so, and it’s the superintelligent one. If that sounds like cheating to you, you probably won’t enjoy the remainder of his explication.

Bostrom conjures up doomsday scenarios that both frighten and amuse. He considers an AI whose sole function, given to it by its fallible human designers, is to manufacture paperclips; it becomes so good at its task that it turns the universe into a ruthless paperclip-manufacturing machine. More malicious is a superintelligence that responds to the instruction to “follow the commands written in this letter” by overwriting the letter to say something like “Kill all humans.” Bostrom’s visions telescope between the grand (universe-sized computers) and the oddly mundane (how the transition to a post-Singularity world would affect pension schemes).

The way Bostrom anatomizes the different sequences in which a superintelligence might emerge, and the ways we might be punished for failing to intervene, occasionally takes on a theological cast, with picayune disagreements about whether the Rapture will occur before or after the Great Tribulation, or whether the Second Coming will occur before or after the Golden Age. In Bostrom’s analysis, the key eschatological questions are whether superintelligence will emerge slowly enough for us to realize what’s happening (the “slow takeoff” scenario) or so quickly that we never get the chance, and whether one intelligence will come to dominate or many will share power.

Bostrom’s overall tone of warning is well struck, but what little guidance he is able to give for how we might avoid these calamities ultimately suffers from a lack of imagination. Without any precedent or stable assumptions, we have no basis for assigning probabilities to any of the outcomes. A future in which our fundamental notions of reality and consciousness are subject to radical overhaul is, by definition, unknowable to us. Even granting that superintelligence might become possible—by no means a safe assumption—surely for every scenario in which Bostrom’s advice proves life-saving we can imagine a complementary one in which it proves fatal. A self-modifying intelligence that improves on its own design is by nature chaotic, in the sense of being extremely sensitive to its initial conditions. Bostrom’s advice is the equivalent of trying to stop a storm a thousand years from now by preventing a butterfly from flapping its wings today.

Perhaps the superintelligence will despise us for trying to thwart it and will punish everyone, including Bostrom, who worked to inhibit it. Worse yet, a young AI, in an effort to learn more about itself, might scan human literature for the word “superintelligence,” find Bostrom’s book, and use it as a blueprint for our destruction. What if our inability to articulate our values causes the AI to judge us unworthy of existence? What if our AI god is a jealous god? If these scenarios sound far-fetched, recall that “far-fetched” is not a discrediting term in this book.

The problems raised by the Singularity touch on deep human anxieties: that our existence is fleeting; that our meager intelligence seems to point beyond itself at an order we can’t comprehend; that technological mastery often comes at great expense; that we are unfit custodians of the planet. More acutely, they exemplify a fear peculiar to parenthood: that the next generation will surpass us, and might not uphold our values or treat us kindly. If superintelligent AIs are humanity’s children, how will we ever survive their rebellious phase?
