You’re at the wheel, tired. You close your eyes, drift from your lane. This time you are lucky. You awaken, scared. If you are smart, you won’t drive again when you are about to fall asleep. Through your mistakes, you learn. But other drivers won’t learn from your mistakes. They have to make the same mistakes by themselves—risking other people’s lives.
Sebastian Thrun is an educator, programmer, robotics developer, and computer scientist. He led the development of Google’s self-driving car. He has also worked to use artificial intelligence in automated homes, health care, drones, and other applications. He is chief executive and co-founder of Udacity, which offers online courses for students and professionals.
Not so the self-driving car. When it makes a mistake, all the other cars learn from it, courtesy of the people programming them. The first time a self-driving car encountered a couch on the highway, it didn’t know what to do, and the human safety driver had to take over. But just a few days later, the software of all the cars was adjusted to handle such a situation. The difference? All self-driving cars learned from this mistake, not just one—including future, “unborn” cars.
When it comes to artificial intelligence (AI), computers learn faster than people. The Gutenberg Bible is a beautiful early example of a technology that helped humans distribute information from brain to brain much more efficiently. AI in machines like the self-driving car is the Gutenberg Bible, on steroids. The learning speed of AI is immense, and not just for self-driving cars. Similar revolutions are happening in fields as diverse as medical diagnostics, investing, and online information access.
Because machines can learn faster than people, it would seem only a matter of time before they outrank us. Today, about 75 percent of the United States workforce is employed in offices—and much of this work will be taken over by AI systems. A single lawyer or accountant or secretary will soon be 100 times as effective with a good AI system, which means we’ll need fewer lawyers, accountants, and secretaries. It’s the digital equivalent of the tractor and plow, which let one farmer replace 100 field hands. Those who thrive will be the ones who can make artificial intelligence give them superhuman capabilities.
But if each worker becomes that much more effective on the job, we need fewer of them, which means many more people will be left behind. That places a lot of pressure on us to keep up—to get lifelong training in the skills necessary to play a role.
The ironic thing is that with the effectiveness of these coming technologies we could all work one or two hours a day and still retain today’s standard of living. But when there are fewer jobs—in some places the chances are smaller of landing a position at Walmart than gaining admission to Harvard—one way to stay employed is to work even harder. So we see people working more, not less.
Get ready for unprecedented times. We need to prepare for a world in which fewer and fewer people can make meaningful contributions. Only a small group will command technology and AI. What this will mean for all of us, I don’t know.
For the Future of Work, a special project from the Center for Advanced Study in the Behavioral Sciences at Stanford University, business and labor leaders, social scientists, technology visionaries, activists, and journalists weigh in on the most consequential changes in the workplace, and what anxieties and possibilities they might produce.