Last week, at Recode’s annual Code Conference, Elon Musk offered some thoughts on the prospect of advanced artificial intelligence (AI). In particular, he calmly characterized the fears that he and others hold regarding AI, arguing, “I think it would be fair to say that not all AI futures are benign.”
Musk framed the potential worst-case scenario as one of a tyrannical and despotic superintelligent AI wreaking havoc upon humanity. To hedge against that outcome, he started OpenAI, a nonprofit that aims “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.” The intent, according to Musk, “is to democratize AI power.” By decentralizing, distributing, and democratizing research into AI, Musk and others hope to head off the potential for a cataclysmic event that would render humanity subject to the whims of a capricious AI or a superintelligence controlled by a small cadre of humans bent on personal power and gain.
This all sounds frightening: the stuff of Terminator-style science fiction. But does AI really hold the potential to snuff out civilization? For the time being, it clearly does not. Granted, Musk is investing in the longer term with OpenAI, but people tend to ignore the fact that task-specific AI is already all around us.
AI is not confined to the Watsons, Amazon Echos, Siris, and other popular technologies. Machine learning algorithms also underlie Google’s search engine, autonomous vehicle sensor technology, factory floor robots, and much more. We often don’t consider these technologies to be AI because they have become relatively commonplace and lack the luster of the superintelligent agents we regularly witness in science fiction novels, movies, and video games. Couple that with the normalcy bias we all experience (especially pronounced in an age of rapidly developed, socially incorporated, and widely accepted technologies) and the result is a world filled with narrowly tailored artificial intelligence that no one recognizes. As the computer scientist John McCarthy once said: “As soon as it works, no one calls it AI anymore.”
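To make “narrowly tailored” concrete, here is a minimal, purely illustrative sketch of what task-specific machine intelligence often looks like in practice: a tiny word-counting sentiment scorer. The word lists and function name are hypothetical, not drawn from any real product, but the shape is representative: a narrow statistical rule that performs one task and nothing else.

```python
from collections import Counter

# Toy "narrow AI": score text by counting sentiment-bearing words.
# These word lists are illustrative stand-ins, not a trained model.
POSITIVE = {"great", "good", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "awful", "hate"}

def classify(text: str) -> str:
    """Label text as positive or negative by simple word counts."""
    words = Counter(text.lower().split())
    # Counter returns 0 for absent words, so missing terms cost nothing.
    score = sum(words[w] for w in POSITIVE) - sum(words[w] for w in NEGATIVE)
    return "positive" if score >= 0 else "negative"
```

A system like this has no general understanding at all, which is exactly the point: most deployed “AI” is a narrow function fitted to one job.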
As algorithms and machine learning techniques grow more sophisticated, ever larger data sets become available, and processing power continues to fall in cost while rising in capability, we’re likely to see more advanced AI. The prospects of such advancements are not limited to more effective search results and handier personal digital assistants; rather, they extend to a world where automation consumes more and more jobs.
Of course, jobs won’t simply disappear overnight. First, our work will be redefined in the context of more robots performing increasingly complicated tasks, greater automation, and more advanced algorithms capable of learning from previous mistakes. Then, as machine learning AI becomes sufficiently advanced, the likelihood of mass labor dislocation grows. (This is one reason it’s worth considering social safety net provisions like those discussed by my new colleague, Samuel Hammond. Alternatively, ITIF recently released a report that discusses why mass labor dislocation may not be something to fear from AI.)
The essence of how humans understand the world is a process of filtering real-world “big data” through our brains. We observe the world, take in data through our senses, evaluate it against context and prior experience, and then make decisions according to those analyses, all as rapidly as our synapses permit. AI operates according to the same principles. As computers get better at analyzing unstructured data online, we should expect to find more exciting, practical, and economic growth-enhancing uses for the technology. Until then, we should temper civilization-is-doomed fear-mongering and focus on practical policies that can help guide the development of this potentially life- and world-changing technology.
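The observe, interpret, evaluate, decide loop described above can be sketched in a few lines. This is a hypothetical toy, not any real system: the sensor reading, the running-average “prior experience,” and the threshold are all invented for illustration.

```python
def decide(observation: float, history: list[float],
           threshold: float = 0.5) -> str:
    """One pass through an observe -> interpret -> evaluate -> decide loop.

    observation: a raw reading in [0, 1] (hypothetical sensor value)
    history: prior readings, standing in for "prior experience"
    """
    history.append(observation)                       # observe: record the data
    smoothed = sum(history) / len(history)            # interpret: average over context
    return "act" if smoothed > threshold else "wait"  # evaluate and decide
```

The point of the sketch is the structure, not the arithmetic: both humans and machine learning systems turn a stream of raw observations plus accumulated context into a decision.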