The conventional economic wisdom is that, while technology will certainly destroy some jobs, it will also create entirely new kinds of work, often in industries or roles previously unimaginable. While this view is generally supported by experience, history also tells us there is one type of worker witnessing a great deal of job destruction—and virtually no job creation.
Martin Ford is the author of two books about the potential impact of advancing technology on the job market and economy: Rise of the Robots: Technology and the Threat of a Jobless Future (2015) and The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (2009).
In 1915, there were over 22 million horses in the United States; by 1960, about three million. Is it possible that the work available to a great many human beings is ultimately destined to follow the same path? On the surface, it may seem absurd to compare people to horses. Horses, after all, can provide transportation or help out on the farm, but they have very little ability to adapt. When cars, trucks, and tractors came along, horses had nowhere to turn.
People, of course, are intelligent. People can adapt to new roles. From an economic standpoint, perhaps the single most important difference between horses and humans is that people can learn to do new things.
Should we take comfort from that fact? Does it guarantee that we’ll always have enough remunerative work to employ the vast majority of our adult population? The reflexive answer might be yes. But there is another point to consider. Unlike the cars, trucks, and tractors that displaced horses, today’s machines and algorithms can learn. In other words, modern information technology is not just encroaching on new types of work; it is gradually taking on the single most important capability that has allowed workers to stay ahead of the machines.
This new ability for machines and algorithms to adapt and learn is perhaps best illustrated by recent advances in an area of artificial intelligence known as deep learning. Computer scientists have used artificial neural networks, which operate according to the same essential principles as the biological neurons in the brain, to perform basic pattern recognition tasks for decades. Deep learning takes advantage of recent technical breakthroughs that allow neural networks of unprecedented complexity to be employed in areas like image recognition and language translation. Some deep learning systems can now perform better than humans at recognizing images such as road signs. Chinese researchers recently unveiled an algorithm that can outperform humans at the type of verbal reasoning problems found on IQ tests, while a team at the University of California–Berkeley has used deep-learning techniques to build robots capable of figuring out how to complete tasks that require a high degree of dexterity, like unscrewing the top from a bottle.
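What it means for a machine to "learn" can be made concrete with a toy example. The sketch below is purely illustrative and far simpler than the deep-learning systems described above: it trains a single artificial neuron, adjusting its weights from labeled examples until it reproduces a simple pattern (the logical AND of two inputs). Deep learning stacks millions of such units in many layers, but the core idea, behavior acquired from data rather than programmed by hand, is the same.

```python
# Toy illustration of machine learning: one artificial neuron learns
# a pattern (logical AND) from examples via the perceptron learning rule.
# This is a didactic sketch, not a deep-learning system.

def train_neuron(examples, epochs=20, lr=0.1):
    """Adjust weights and bias from (inputs, target) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # how wrong was the neuron?
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The neuron is never told the rule for AND -- only shown examples:
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

No line of this program encodes the AND rule directly; the behavior emerges from exposure to examples. That, scaled up enormously, is what distinguishes today's systems from the cars and tractors that displaced the horse.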
Deep learning, along with a number of other approaches to machine learning, is already being deployed in areas from self-driving cars to algorithms that write news stories. And there is every reason to expect both the capability of these smart algorithms and the range of uses to which they are put to grow rapidly.
Yet another critical difference between today's information technology and the mechanical innovations of the past is the breadth of its impact. Information technology has invaded and transformed every industry. In fact, it might be fair to view information technology as a kind of utility—a phenomenon in many ways similar to electricity, except that rather than simply delivering electric power, it delivers machine intelligence, including the ability to learn, adapt, make decisions, and solve problems. As that capability improves, it will inevitably substitute for more and more human labor, and it will increasingly threaten knowledge-based jobs that require significant education and training. Indeed, we already see smart algorithms capable of basic journalism, legal document review, and analysis of medical images.
The risk workers face today is that we have passed a technological turning point, and as a result, a new kind of creative destruction will unfold. Smart, learning algorithms will power robots, self-service systems, and increasingly capable mobile devices, and this will inevitably drive labor-intensive industries like retail, fast food, and hospitality toward employing ever fewer workers. At the same time, the new industries that we hope will create replacement jobs will rely on artificial intelligence and robotics right from their inception. Companies like Google and Facebook, both of which employ tiny workforces but massive computing facilities, probably offer us a pretty good preview of what most future industries will look like.
The bottom line is that as smart, learning machines are deployed across the economy, a large percentage of our workforce may face the same challenge as the horse. If that turns out to be the case, then adapting to that new reality could well be one of the seminal challenges in coming decades.
For the Future of Work, a special project from the Center for Advanced Study in the Behavioral Sciences at Stanford University, business and labor leaders, social scientists, technology visionaries, activists, and journalists weigh in on the most consequential changes in the workplace, and what anxieties and possibilities they might produce.