This is a conversation Elon Musk recently had with Kelly Evans and Julia Boorstin on CNBC about his investment in the artificial-intelligence company Vicarious:
Boorstin: Well, why did you invest in Vicarious? What exactly does Vicarious do? What do you see it doing down the line?
Musk: Well, I mean, Vicarious refers to it as recursive cortical networks. Essentially emulating the human brain. And so I think—
Boorstin: So you want to make sure that technology is used for good and not Terminator-like evil?
Musk: Yeah. I mean, I don’t think—in the movie Terminator, they didn’t create A.I. to—they didn’t expect, you know, some sort of Terminator-like outcome. It is sort of like the Monty Python thing: Nobody expects the Spanish Inquisition. It’s just—you know, but you have to be careful. Yeah, you want to make sure that—
Evans: But here is the irony. I mean, the man who is responsible for some of the most advanced technology in this country is worried about the advances in technology that you aren’t aware of.
Musk: Yeah.
To reiterate: the man who inspired Iron Man is concerned enough about artificial intelligence taking over and destroying humanity that he’s investing in companies with no intention of making any money, but merely to “keep an eye” on things.
We’re reaching Peak A.I. in movies. There’s Transcendence, in which Johnny Depp lifts things with his mind. (I only saw the preview, so I’m guessing here.) The RoboCop remake is terrible (Hollywood will remake anything these days), but its mere existence is another sign of A.I.’s increasing relevance. Her, featuring Scarlett Johansson jettisoning sad-sack Joaquin Phoenix due to his underpowered brain, is the current champion for best representation of “the bleak way we’ll live in the not-too-distant future.” The operating system he falls in love with isn’t dangerous, but it does showcase the insignificance of humanity.
So, should we be worried?
Google thinks so: when it acquired the A.I. startup DeepMind, the company reportedly agreed to set up an ethics board to oversee the technology. The irony of an A.I. powerhouse worrying about A.I. is thick.
James Barrat, a documentary filmmaker who wrote Our Final Invention: Artificial Intelligence and the End of the Human Era, thinks so, too. In an interview with Smithsonian, he said:
Eventually, machines will be created that are better than humans at A.I. research. At that point, they will be able to improve their own capabilities very quickly. These self-improving machines will pursue the goals they’re created with, whether they be space exploration, playing chess or picking stocks. To succeed they’ll seek and expend resources, be it energy or money. They’ll seek to avoid the failure modes, like being switched off or unplugged. In short, they’ll develop drives, including self-protection and resource acquisition—drives much like our own. They won’t hesitate to beg, borrow, steal and worse to get what they need.
Here’s another argument that’s a little happier:
Generally speaking, if cooperation and homeostasis with our environment and with those around us is the trend that new technological innovations, improved education, the reduction of disease and aberrant social behavior, and other complements to our existence help us achieve, it would seem likely that an artificial intelligence on par with our own level of understanding (and minus the evolutionary tendencies to compete over hunger, sexuality, etc.) would be logically suited for coexistence rather than hell-bent on destroying us.
While that’s all good, the jury still seems to be out. Perhaps the more compelling question, then, is this: why are we so fascinated with A.I. and our potential extinction? Philosopher Stuart Armstrong, who has spent time as a fellow at Oxford University’s Future of Humanity Institute, has a persuasive answer.
“One of the things that makes A.I. risk scary is that it’s one of the few that is genuinely an extinction risk if it were to go bad,” he said. “With a lot of other risks, it’s actually surprisingly hard to get to an extinction risk.” Even nuclear winter wouldn’t kill everyone.
In some ways, our interest in A.I. mirrors our captivation with aliens: great unknowns that are real enough to be credible and pose a threat that would end humanity for all time. It’s sort of fun to contemplate, though, isn’t it?