I am optimistic about our ability to use artificial intelligence and other exponential technologies to address humanity’s grand challenges. Exponential technologies like AI, robotics, synthetic biology, and nanotechnology double in price-performance every 18 to 24 months. They will be applied to many of society’s most intractable problems: education, energy, environment, food, water, health, prosperity, security, governance, and space.
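For a sense of what that cadence compounds to, here is a back-of-envelope sketch in Python. It treats the 18-to-24-month doubling figure as exact, which real technology trends are not; the numbers are illustrative, not a forecast.

```python
# Back-of-envelope sketch of the doubling claim above; the 18- and 24-month
# doubling periods are the essay's figures, treated here as exact for
# illustration even though real trends are noisier.

def improvement_factor(years: float, doubling_period_years: float) -> float:
    """Price-performance multiplier after `years` of steady doubling."""
    return 2 ** (years / doubling_period_years)

for period_years in (1.5, 2.0):  # 18 months and 24 months
    factor = improvement_factor(10, period_years)
    print(f"doubling every {period_years} yr -> {factor:.0f}x in a decade")
# doubling every 1.5 yr -> 102x in a decade
# doubling every 2.0 yr -> 32x in a decade
```

Even at the slower end of the range, that is roughly a 32-fold improvement per decade.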
Neil Jacobstein is co-chair of artificial intelligence and robotics at Singularity University and distinguished visiting scholar in the mediaX Program at Stanford University.
All technologies have risks. Powerful and rapidly growing technologies have larger and different risk profiles. Researchers may disagree about the timeline for AI advancement, but the accelerating trend is apparent. I think that we need to move forward technologically to address our biggest problems, but we also need to take the risks of technology seriously. Specifically, we need to be proactive about addressing the spectrum of potential risks, reducing their probability, and increasing our ability to respond very fast—in machine time. This is not a trivial endeavor.
Last January I attended a Future of Artificial Intelligence workshop in Puerto Rico with more than 40 AI researchers. We discussed risks and signed an open letter recommending research priorities in AI safety and control. The priorities included: 1) verification, making sure AI programs conform to their formal specifications; 2) validation, ensuring that those specifications produce the behavior we want without side effects; 3) security, ensuring that systems cannot be tampered with from within or without; and 4) control, building in redundant pathways to re-establish control if a system exhibits undesired behavior. Each of these research areas is relevant to AI in semi-autonomous vehicle, medical, and manufacturing applications; a sketch of the fourth appears below. Since the workshop, many more members of the AI community and other communities have signed the research priorities letter. Elon Musk, who attended the workshop, later put up $10 million for a Future of Life Institute grant program in key areas of AI safety research.
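To make the fourth priority concrete, here is a minimal sketch of a redundant control pathway for a semi-autonomous vehicle. It is purely illustrative: the controllers, the safety envelope, and all thresholds are hypothetical, and nothing here comes from the workshop or the letter.

```python
# Hypothetical illustration of the "control" priority: every name and
# threshold below is invented for this sketch, not taken from the letter.

from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_mps: float   # current speed in meters per second
    gap_m: float       # distance to the vehicle ahead in meters

# Assumed safety envelope (a stand-in for a formal specification).
SPEED_LIMIT_MPS = 30.0
MIN_GAP_M = 10.0

def learned_controller(state: VehicleState) -> float:
    """Stand-in for an ML policy; returns a throttle command in [-1, 1]."""
    return 0.8  # pretend the model always wants to accelerate

def fallback_controller(state: VehicleState) -> float:
    """Simple, independently verifiable behavior: brake gently."""
    return -0.5

def violates_envelope(state: VehicleState, command: float) -> bool:
    """Check whether an acceleration command leaves the safety envelope."""
    accelerating = command > 0
    return accelerating and (
        state.speed_mps >= SPEED_LIMIT_MPS or state.gap_m <= MIN_GAP_M
    )

def safe_command(state: VehicleState) -> float:
    """Redundant pathway: override the learned policy on any violation."""
    command = learned_controller(state)
    if violates_envelope(state, command):
        return fallback_controller(state)
    return command

print(safe_command(VehicleState(speed_mps=32.0, gap_m=50.0)))  # -0.5 (override)
print(safe_command(VehicleState(speed_mps=20.0, gap_m=50.0)))  # 0.8 (allowed)
```

The point is architectural rather than algorithmic: the override path does not depend on the learned component, so control can be re-established even when the model misbehaves.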
In addition to the need for more research in AI safety and control, I am concerned about the impact of AI and machine learning on white-collar employment. Some AI researchers tend to underplay the very real risk that accelerating advances in machine learning will let machines take over white-collar work faster than new jobs for humans are created. There will be considerable new job creation via innovation and entrepreneurial businesses, and people can team with AI and machine learning to vastly increase their productivity and extend the range of the possible. However, the rate of job destruction could outpace the rate of human job creation, at least in the short term. Longer term, there are two issues: 1) AI may be able to do many of the new jobs better, faster, and cheaper than humans; and 2) augmenting humans with AI will vastly amplify productivity. Both factors may reduce the demand for human labor.
We need to address the human employment issues squarely by considering three job-related factors that are often conflated: 1) jobs as a source of money and the things money can buy; 2) jobs as an incentive to make technical and social contributions; and 3) jobs as a source of self-esteem and community. One fact relevant here is that exponential technologies will produce a vast increase in wealth. The fraction of that new wealth needed to provide a minimum basic income for those without jobs could be considered a round-off error. In addition, AI may make access to first-rate education and health-care diagnostics much less expensive. 3-D printing, and eventually atomically precise manufacturing, will make digitally manufactured high-quality goods cheap and ubiquitous.
There are barriers to a minimum basic income as a response to technological unemployment or underemployment. First, given the unpopularity of current tax rates and the tendency of corporations to relocate offshore, the huge increase in technology-driven wealth may not be easily shared via today’s tax structure. Second, many citizens with jobs and money object to “a handout for the poor or lazy.” We could build in incentives to contribute and innovate, but that will take time. And although there is no evidence that laziness has anything to do with unemployment caused by advances in technology, some people, no matter how rich they become, simply don’t like sharing. Third, some rich folks worry about “robbing the poor of their self-esteem.” After we see to it that our fellow citizens have basic shelter, food, health care, and access to education, we can worry about the higher rungs on Maslow’s hierarchy. Self-esteem is important, but as a concern it probably comes after feeding unemployed people’s children.
The good news is that concerns about community and self-esteem are readily addressed when society places a higher value on taking care of children and old people, health and environment, city infrastructure, and community development. With the huge increase in wealth that exponential technologies enable, we could afford to support these activities and to honor the people who perform them. I think we will eventually do the right things to address these challenges, but I have a sense of urgency about how long it will take society to figure that out and respond in time.
For the Future of Work, a special project from the Center for Advanced Study in the Behavioral Sciences at Stanford University, business and labor leaders, social scientists, technology visionaries, activists, and journalists weigh in on the most consequential changes in the workplace, and what anxieties and possibilities they might produce.