In Silicon Valley this week, a debate has flared up between two tech billionaires over the potential dangers (or lack thereof) of artificial intelligence.
Facebook CEO Mark Zuckerberg thinks that AI is going to “make our lives better in the future,” while SpaceX CEO Elon Musk believes that AI is a “fundamental risk to the existence of human civilization.”
They’re both right, but they’re also both missing the point. The dangerous aspect of AI will always come from people and their use of it, not from the technology itself. As with advances in nuclear fusion, almost any kind of technological development can be weaponized and used to cause damage in the wrong hands. The regulation of machine intelligence advancements will play a central role in whether Musk’s doomsday prediction becomes a reality.
It would be wrong to say that Musk is hesitant to embrace the technology, since all of his companies are direct beneficiaries of advances in machine learning. Take Tesla, for example, where self-driving capability is one of the biggest value adds for its cars. Musk himself believes that one day it will be safer to populate roads with AI drivers than with human ones, though he has publicly said he hopes society will not ban human drivers in the future in an effort to save us from human error.
What Musk is really pushing for in being wary of AI is a more rigorous framework that we as a society can use to build awareness of the threats AI brings. Artificial General Intelligence (AGI), the kind that will make decisions on its own without any interference or guidance from humans, is still very far from how things work today. The AGI we see in the movies, where robots take over the planet and destroy humanity, is very different from the narrow AI that the industry uses and iterates on now. In Zuckerberg’s view, the doomsday conversation Musk has sparked is a greatly exaggerated projection of what the future of these technological advancements will look like.
While there is not much discussion in our government about apocalypse scenarios, there is definitely a conversation happening about preventing the potentially harmful impacts of artificial intelligence on society. The White House recently released a couple of reports on the future of artificial intelligence and its economic effects. These reports focus on the future of work, job markets and research into the increasing inequality that machine intelligence may bring.
There is also an attempt to tackle the very important issue of “explainability”: understanding the actions machine intelligence takes and the decisions it presents to us. For example, DARPA (the Defense Advanced Research Projects Agency), an agency within the U.S. Department of Defense, is funneling billions of dollars into projects that would pilot vehicles and aircraft, identify targets and even eliminate them on autopilot. If you thought the use of drone warfare was controversial, AI warfare will be even more so. That’s why here, perhaps more than in any other field, it is essential to be mindful of the results AI presents.
Explainable AI (XAI), the initiative funded by DARPA, aims to create a suite of machine learning techniques that produce results more explainable to human operators while still maintaining a high level of learning performance. The other goal of XAI is to enable human users to understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners.
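To make the idea concrete, here is a minimal, hypothetical sketch (not DARPA's actual XAI techniques, which are far more sophisticated) of what an "explainable" prediction could look like: instead of returning only a score, the model also reports how much each input feature contributed to it. The feature names and weights below are invented for illustration.

```python
# Hypothetical toy scoring model: a linear combination of named features.
# The weights are assumed values for illustration only.
WEIGHTS = {"speed": 0.6, "heat_signature": 0.3, "size": 0.1}

def predict_with_explanation(features):
    """Return a score plus a per-feature breakdown of why the score is what it is."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation({"speed": 0.9, "heat_signature": 0.5, "size": 0.2})
# `why` shows each feature's contribution, so an operator can see
# that "speed" dominated the decision rather than trusting a bare number.
```

An operator reviewing this output can question an individual contribution ("why did heat signature count for so much?") instead of facing an opaque verdict, which is the kind of trust-building XAI is after.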
The XAI initiative can also help the government tackle the problem of ethics with more transparency. Sometimes software developers have conscious or unconscious biases that eventually get built into an algorithm, the way Nikon cameras became internet famous for detecting “someone blinking” when pointed at the faces of Asian people, or HP computers were proclaimed racist for failing to detect black faces on camera. Even developers with the best intentions can inadvertently produce systems with biased results, which is why, as the White House report states, “AI needs good data. If the data is incomplete or biased, AI can exacerbate problems of bias.”
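A minimal sketch (entirely hypothetical, not any real detection system) shows how a skewed training set bakes bias into a model before a single line of "biased" logic is written; the model below simply memorizes the majority label it saw for each group:

```python
from collections import Counter

def train(examples):
    """examples: list of (group, label) pairs.
    Returns the majority label seen for each group in training."""
    counts = {}
    for group, label in examples:
        counts.setdefault(group, Counter())[label] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

# Skewed training data: group "B" is barely represented, and its few
# samples happen to be labeled wrongly.
training = [("A", "face")] * 98 + [("B", "no_face")] * 2

model = train(training)
print(model)  # {'A': 'face', 'B': 'no_face'} -- every future "B" input is misclassified
```

The training code contains no malicious intent at all; the bias lives entirely in the data, which is exactly the failure mode the White House report warns about.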
Even in positive use cases, data bias can cause serious harm to society. Take China’s recent initiative to use machine intelligence to predict and prevent crime. Of course, it makes sense to deploy complex algorithms that can spot a terrorist and prevent crime, but many bad scenarios can unfold if there is an existing bias in the training data for those algorithms.
It is important to note that most of these risks already exist in our lives in some form or another, as when patients are misdiagnosed with cancer and not treated accordingly by doctors, or when police officers make intuitive decisions under chaotic conditions. The scale and lack of explainability of machine intelligence will magnify our exposure to these risks and raise uncomfortable ethical questions: who is responsible for a wrong prescription from an automated diagnostic AI? The doctor? The developer? The training data provider? This is why complex regulation will be needed to help navigate these issues and provide a framework for resolving the uncomfortable scenarios that AI will inevitably bring into society.