Before we have the technology
Minimizing risk from machines and ourselves
From robots self-navigating city streets to chatbots inventing their own languages, the media is rife with stories about artificial intelligence (AI) becoming ubiquitous and more powerful. To some, this marks the start of an era of leisure and plenty as promised by utopian science fiction. To others, the outlook is not so rosy. There is a growing movement of business leaders, technologists, and researchers who are seriously concerned about the rapid pace of AI progress. Some even believe it may bring about the end of humanity. Although AI could one day pose a substantial risk, it is not (yet) beyond our control. There is still time to remedy current harms and even sidestep the most serious threats, but we cannot afford to delay in establishing technical and ethical standards and taking action on the problems that already exist.
Cause for alarm
Elon Musk is a billionaire entrepreneur and engineer known for his ambitious technology initiatives, some of which resemble science fiction themselves. He has used his platform to voice concerns about the pace of AI development. At SXSW in 2018, Musk warned the audience, “AI is far more dangerous than nukes.” Renowned physicists like Stephen Hawking and MIT professor Max Tegmark have also emphasized the potential dangers of machine learning if it becomes smarter than human beings (that is, superintelligent). “The real worry isn’t malevolence, but competence,” Tegmark clarifies. “A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours.”
A superintelligent AI’s goals may originally be defined by humans to be benign or even helpful, but unexamined objectives may result in an AI that accomplishes our stated goals while producing harmful unintended consequences. Swedish philosopher and Superintelligence author Nick Bostrom highlights the issue in his famous paperclip thought experiment:
A simple example is that of a paperclip maximiser: an AI with a utility function that seeks to maximize the number of paperclips. This goal is a not too implausible test command for a new system, yet it would result in an AI willing to sacrifice the world and everyone in it, to make more paperclips. If superintelligent, the AI would be very good at converting the world into paperclips, even if it realizes its creators actually didn’t want that many paperclips – but stopping would lead to fewer paperclips, and that’s what its goal is.
There are many documented cases of machine learning systems performing in ways that surprise their creators, subverting their goals or finding humorous (or perhaps terrifying) hacks. When today’s AI does strange things like supposedly “inventing its own language,” it is because human beings constructed the machine’s goals poorly: they did not require it to maintain human readability. Without constraints placed on the machine learning system (usually in the form of penalties against a numerical score), the AI learned a more efficient means to its end. Machine learning systems are ruthless optimizers. When they learn to maximize scores in a way that no longer results in useful or human-readable outputs, we might want to pull the plug, but a sufficiently advanced AI may not let us.
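To make this failure mode concrete, here is a minimal, hypothetical sketch (the function names and weighting are invented for illustration, not drawn from any real system) of how an unconstrained score invites this kind of hack: if the reward only measures brevity, nothing stops the system from drifting into unreadable shorthand, whereas adding a readability penalty makes staying legible part of “winning.”

```python
# Hypothetical illustration: an agent judged only on brevity has no
# reason to stay human-readable; adding a penalty term changes that.

def naive_reward(message: str) -> float:
    # Shorter messages score higher -- readability is never mentioned.
    return 1.0 / (1 + len(message))

def constrained_reward(message: str, readability: float) -> float:
    # Same brevity goal, but unreadable output now costs points.
    # `readability` is assumed to be a score in [0, 1] from some judge.
    brevity = 1.0 / (1 + len(message))
    return brevity - 0.5 * (1.0 - readability)

# An optimizer maximizing naive_reward will happily emit dense gibberish;
# under constrained_reward, garbled output is no longer the best strategy.
```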
Although almost everyone agrees that superintelligent AI does not yet exist, AI safety advocates like Bostrom emphasize that its emergence could happen unexpectedly. “We humans are like small children playing with a bomb,” he writes in his book. The concern is rooted in the potential for these systems to self-improve. Current machine learning systems cannot meaningfully modify themselves or adapt to change, but once the ability to self-enhance is achieved, advocates worry that improvement may accelerate at a rate beyond anything people can hope to keep up with. Unlike tyrants of the past, AI agents are not constrained by a human lifespan or biological needs like sleep. If intellectual dominance is achieved, humans may have little hope of ever regaining control.
Where we are now
Even today’s most cutting-edge AI still has a distance to go before it can learn to improve upon itself, let alone start harvesting humans to make into paperclips. Current AI technology is considered “narrow,” meaning that it can only perform well on specifically defined data processing tasks like looking for objects in images or identifying the terms of a legal document. This means that a single AI, even one that can beat the very best humans at a particular game like chess, Jeopardy, or the notoriously difficult Go, will fail embarrassingly when presented with a new task such as playing a different kind of game. AI does not generalize well. While there is a lot of interest in areas of research such as transfer learning (the ability to apply the learning from one task or area to another), right now AI is fairly constrained.
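As a rough illustration of what transfer learning looks like in practice, the sketch below (using PyTorch and torchvision; the target task and its five classes are invented for the example) reuses a network trained on one dataset and retrains only a small final layer for a new task. The heavy lifting is borrowed rather than learned from scratch, which is still a long way from open-ended generalization.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Reuse a network pretrained on ImageNet (the "source" task).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor; its knowledge is carried over.
for param in model.parameters():
    param.requires_grad = False

# Swap in a new final layer for a narrower "target" task (say, 5 classes).
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new layer is trained -- everything else is transferred as-is.
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
```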
This does not mean that narrow AI is ineffectual. Today’s AI provides us with unprecedented abilities to interpret data and understand our world. Narrow AI already helps us avoid traffic, translate between languages, detect fraud, and take amazing selfies. These systems excel at tasks that are often overwhelming, redundant, or downright impossible for human minds to accomplish, like cross-referencing multitudinous information sources and processing millions of files in mere minutes. According to a survey by IBM, more than 70% of CEOs said that AI will play an important role in the future of their organization, and 50% had plans to adopt the technology by the end of this year. As IBM CEO Ginni Rometty put it at last year’s Gartner Symposium, AI is “going to change 100 percent of jobs, 100 percent of industries, and 100 percent of professions.” We need not create superintelligent AI in order for these technologies to transform our world. In fact, they already have.
The present danger
Even if we avoid creating superintelligence, focusing on narrow AI alone is not enough to protect us. The powerful, specialized tools of narrow AI can still be incredibly dangerous. The increasing interest in machine learning systems by military organizations provides a glimpse of what may lie in store. The same force-multiplying power that makes modern AI such a potent tool for good can also make it a dangerous weapon. Already, we have seen how these technologies, applied to policing and surveillance, can produce a data-driven panopticon. While superintelligence prognosticators fear powerful AI turning against us, perhaps we should be worried about it doing exactly what it’s told.
Whether by minimizing a punishing “loss function” or trying to maximize a high score, these AI agents optimize around the goals and incentives we provide them. Even with all of our high-powered deep neural networks, human beings are still responsible for defining the rules of the game: what to value and what can be optimized away. Constructing rigid rules in the face of our ever-changing, multiplicitous world will inevitably produce suboptimal outcomes. There are no silver bullets or universal golden rules. Our messy human lives require nuance, compassion, and mindful adaptation to changes of context. It follows that if we’re trying to create AI that benefits all of us and reduces harm, it is of the utmost importance that we are careful in choosing what sorts of behavior we incentivize.
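A toy example may help show how literal this optimization is. In the sketch below (the numbers and objective are invented for illustration), gradient descent drives a parameter to exactly the target the loss specifies; anything we care about but leave out of the loss simply never enters the picture.

```python
# Toy gradient descent: the system pursues the written objective and
# nothing else. Values we forget to encode are invisible to it.

TARGET = 10.0

def loss(x: float) -> float:
    return (x - TARGET) ** 2  # the only thing that "counts"

x, learning_rate = 0.0, 0.1
for _ in range(200):
    gradient = 2 * (x - TARGET)    # derivative of the loss at x
    x -= learning_rate * gradient  # step toward lower loss

print(round(x, 3))  # ~10.0: precisely what we asked for, no more, no less
```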
We don’t have to wait for the advent of superintelligent systems to see the effects of runaway algorithms and misaligned incentive systems; they’re all around us! This pernicious issue with rule-making predates computers entirely. We see it in our law, our economic systems, and even our morality. One need not look further than the burgeoning global climate crisis to see the effects of a failure to incentivize behavior that values human life. Perhaps we are the paperclip maximizers.
Elusive intelligence
So why is it, given the dangers that exist even with narrow AI, that people still pursue technological superintelligence? Despite the warnings from scholars and researchers, the pursuit of ever-more-powerful AI capabilities shows no sign of slowing. Enthusiasts point to complex, multivariate problems that we have been unable to address on our own. Powerful, adaptable AI systems could perhaps be used in long-distance space travel missions or to care for and provide companionship to people as they age. These kinds of tasks are considered “AI-hard,” meaning that they would require an entity or system that is able to generalize, adapt, and learn. An AI like this might become a superintelligence.
Even if superintelligent AI could be achieved without its goals falling out of alignment with our own, we (as the cognitively inferior) might lose the ability to understand its reasoning. The benefit reaped in solving many AI-hard problems, by definition, comes at the expense of our ability to check the AI’s work. Perhaps worse still, there’s no guarantee that such a benevolent artificial superintelligence would even be believed by people. Returning to the example of addressing global climate change, we might pour resources into answering AI-hard questions only to reach conclusions that we already “knew” but did not want to act upon. Perhaps it is not answers we seek, but an authority greater than ourselves.
Indeed, the very definition of “true artificial intelligence” has eluded us time and time again as the goalposts keep shifting. It was once thought that mastery of the game of chess might indicate “the possibility of a mechanized thinking,” as Bell Labs engineer Claude Shannon mused in his 1950 Philosophical Magazine paper “Programming a Computer for Playing Chess.” Instead, as Shannon predicted, it largely resulted in a narrowed redefinition of human-like thought.
If we are to make strategic choices about AI today and in the future, we have to better define the scope of our problem. Then we can make informed judgments about what we are willing to sacrifice in order to address it. The telltale characteristics of AI superintelligence are amorphous and subject to change, and we can’t formulate very effective strategies for such a nebulous threat. We have to target what we can define. Fortunately, there’s a difference between asking “how can we develop AI in a safe, responsible fashion?” and “how can we stop an evil superintelligence that’s already out there?”
What can we do now?
AI is a powerful tool. Even in its current form, there exist numerous opportunities to improve our health, increase safety, and help us better understand each other. There are countless greenfield opportunities to do incredible new things. However, to reap the benefits that AI can bring, humanity must develop it responsibly and humanely. All of us must hold the creators of these systems to high ethical standards and make the weaponization of AI so reprehensible and shameful that we limit its proliferation. This will slow AI development. And it should. Ultimately, we have to be willing to make some compromises to reduce the likelihood of harm and maximize our benefit. (Call it a Skynet insurance policy.)
AI research could be likened to stem cell research: it’s incredibly potent and has the potential to do tremendous good, but there are also some dangers that we are not yet prepared to address. We can learn from the standards set forth by the international community on biological research and not feed a frenzy of “progress” for the sake of “progress.” We can demand clear explanations of why capabilities are important and require demonstrations that they can be applied safely as a stipulation for funding.
In the field of narrow AI, there is plenty of room for improvement. The general public is becoming better educated about the harmful biases built into many systems that are already live in the field. In the information security industry, companies hire hackers to actively try to break their systems, helping them catch vulnerabilities and improve; it’s a kind of security quality assurance. Organizations developing AI should apply a similar quality assurance process to the ethics of their systems, especially in high-stakes decision making. People should know where AI is being used and what risks it brings. In many cases, the architects of these technologies are already aware of their shortcomings. If the low-hanging fruit is so obvious that it takes no external party’s scrutiny to find, we should do better.
One of the most powerful tools we have available to us right now is our connectedness. Especially while the AI community is still relatively small, individual people have the ability to influence the norms of what we want to achieve and what we are willing to accept. Indeed, it is our responsibility to be critical of work that does not take into account the potential harms (superintelligent or otherwise). This doesn’t just mean reacting at the research publishing stage or once something gets media coverage. It means friends, family, and colleagues asking each other tough questions about what they are working on. It means celebrating whistleblowers for protecting us from harm. We should make it socially untenable for anyone to pursue research on unconstrained self-modifying systems or weapons projects. Ethics and integrity are critical, and we should be unabashed in upholding these ideals.
A brighter future
Ultimately, there’s a wealth of opportunity in narrow AI and a multitude of complex and meaty problems for researchers to explore. Many of the juiciest problems aren’t technological at all. We may want to build super-powered AI systems that make ethical decisions, but we don’t yet have agreement on what ethical decisions even are for normal-powered humans. With better alignment on our own goals, we may not need to create artificial general intelligence that learns and performs at a human level; we can focus on creating ethical tools that empower people to perform at a super-human level instead.