You say he’s wistful. I say he’s plotting. (WALL-E image courtesy Disney)
Ah, lovable robots, so helpful and kind and compliant, who wouldn’t fall in love with them? And that was the gist of a popular story by Robert Ito in our current print edition. People, perhaps channeling the Sirius Cybernetics Corporation’s promise of a robot as “your plastic pal who’s fun to be with,” are growing inordinately fond of their mechanical friends:
What happens as robots become ever more responsive, more humanlike? Some researchers worry that people—especially groups like autistic kids or elderly shut-ins who already are less apt to interact with others—may come to prefer their mechanical friends over their human ones.
Sure, it’s always harmless fun with robots, until they take over the world (like they did in that series of really loud movies). But don’t take my fevered word, or Will Smith’s, on this. Take the word of some boffins at Cambridge, where such rantings get serious notice. Thanks to BBC.com, I learned that the “risk of robot uprising wiping out the human race” is being studied at the aptly named Centre for the Study of Existential Risk.
A trip to their still-spartan website wiped the smirk right off my face.
Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.
The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind. We are convinced that there is nowhere on the planet better suited to house such a centre. Our goal is to steer a small fraction of Cambridge’s great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future.
We’ve certainly looked at climate change around here and taken its threat seriously, and we’ve also considered some of the issues surrounding nanotechnology. Sure, a robot apocalypse might seem far-fetched if our depiction of it jumps straight from a Roomba to a Terminator, but it seems less so once we consider how artificial intelligence already contributes to flash crashes and medical tragedies without any malevolence programmed in.
As two of the Cambridge centre’s founders, the philosopher and the entrepreneur, wrote about intelligent machines at the Australian website The Conversation:
The good news is that we probably have no reason to think they would be hostile, as such: hostility, too, is an animal emotion.
The bad news is that they might simply be indifferent to us – they might care about us as much as we care about the bugs on the windscreen.
People sometimes complain that corporations are psychopaths, if they are not sufficiently reined in by human control. The pessimistic prospect here is that artificial intelligence might be similar, except much much cleverer and much much faster.
…
If that sounds far-fetched, the pessimists say, just ask gorillas how it feels to compete for resources with the most intelligent species – the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival.
Even if you don’t have a taste for the dystopian, perhaps preferring George Jetson to HAL 9000, it still seems smart to worry a bit about too-smart robots now … while we can still fire them.
http://youtu.be/sF51ChhNaSc