We all know artificial intelligence is dangerous. The machines got too smart and took over in the Terminator movies. In WarGames, the computer almost initiated thermonuclear war, until it realized, via cinema’s most suspenseful game of tic-tac-toe, that mutually assured destruction was pointless. And, of course, in the Hitchhiker’s Guide to the Galaxy books, Eddie the shipboard computer was dangerously chipper and Marvin was depressingly depressed.
What does the future of artificial intelligence hold? I hope to be able to use a computer to store and retrieve my memories. I want to be able to control machines (my car, my toaster, and so on) using a brain reader and Bluetooth. Mostly, I want a computer that thinks like a person but can also do the things that computers do well.
I grew up during a golden age of artificial intelligence, and I watched it (with kid eyes) because my father was an AI engineer in the 1980s. Back when computers were far less powerful than your phone is today, AI was based on big dreams. A major goal was “strong AI” — machine intelligence that matches or exceeds human intelligence.
It turns out that strong AI has a history: it’s consistently way, way harder than we think. Some of the biggest challenges turn out to be things that we don’t find challenging at all. Don’t try to get a computer to chat at a cocktail party or, for that matter, cry at a movie. Brilliant minds have made strides on these sorts of problems, yet they haven’t even come close to scratching the surface of figuring out what the problem is, much less how to solve it.
Humans and computers are good at different things. We are good at laughing at jokes; they are good at doing arithmetic. We are good at rough-housing with our kids (or parents); they are good at sorting a list of 10 million children’s names in alphabetical order. We’re good at recognizing objects; they’re good at storing images without ever forgetting a single bit of data (or anything else, for that matter). We are good at chatting about the weather; they are good at letting us videoconference with friends across the globe. We are good at designing cars; they are good at assembling them. And on and on.
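To make that contrast concrete, here’s a minimal sketch in Python. The 10 million random strings below are stand-ins for real names, since I’m obviously not shipping a list of children’s names in a blog post:

```python
import random
import string
import time

# Ten million synthetic eight-letter "names" stand in for the real thing.
random.seed(0)
names = ["".join(random.choices(string.ascii_lowercase, k=8))
         for _ in range(10_000_000)]

# Alphabetizing them is routine work for a machine.
start = time.time()
names.sort()
print(f"Sorted {len(names):,} names in {time.time() - start:.1f} seconds")
```

On an ordinary laptop this finishes in seconds. Asking a person to do the same job would be absurd; asking a computer to get the joke in the previous paragraph is equally absurd.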
Here’s a key insight about AI: It’s most useful when machines complement, not replicate, human intelligence.
My car is very smart; it has anti-lock brakes and airbags and cruise control and so forth. These things have to be incredibly vigilant and make instantaneous decisions, and they have to calibrate their decisions to their environment just right. These are things a person simply can’t do. And that’s why they’re so valuable. Sure, I’d like to be friends with my car, but I’d rather drive safely and be friends with the person sitting next to me.
Of course, anti-lock brakes may complement my intelligence, but by themselves they aren’t what we usually think of as intelligent.
IBM’s Watson, on the other hand, has the markings of intelligence. But unlike strong AI, Watson’s not about consciousness or being human. It’s not designed to think the way a person thinks. It’s designed to answer trivia questions, and it just took down the best players in the history of Jeopardy! And it’s brilliant — really, really brilliant; I won’t try to do it justice here.
Watson shows us AI’s strengths and limitations all at once. It’s certainly not intelligent like a person, and it’s nothing like what we see in sci-fi movies. But what it does very well is complement human intelligence.
Watson seems to know everything. It’s true that Google (or, more precisely, the World Wide Web) seems to know everything, too. But type a question into a search engine and it returns Web pages. That in itself is remarkable, yet you still have to dig the answer out of those pages yourself. Watson tells you the answer.
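The distinction is easy to sketch. Here’s a toy contrast in Python; the two-page “corpus” and both functions are hypothetical, and nothing here resembles how Google or Watson actually works under the hood:

```python
# A toy "web" of two pages (hypothetical, for illustration only).
CORPUS = {
    "everest.html": "Mount Everest is the highest mountain on Earth",
    "k2.html": "K2 is the second highest mountain on Earth",
}

def search(question):
    """Search-engine style: return pages that share words with the question."""
    words = set(question.lower().split())
    return [url for url, text in CORPUS.items()
            if words & set(text.lower().split())]

def answer(question):
    """Watson style: hand back a statement, not a reading list."""
    pages = search(question)
    return CORPUS[pages[0]] if pages else "I don't know."

print(search("highest mountain on Earth"))  # pages; the reading is your job
print(answer("highest mountain on Earth"))  # a direct answer
```

The real systems are vastly more sophisticated, but the shape of the difference is the same: one maps a question to documents, the other maps it to a fact.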
Don’t you wish you could carry Watson around in your pocket? And don’t you wish your car could drive itself? Google has built one that can. Artificial intelligence can be revolutionary. It’s just not going to produce something human-like anytime soon.