Could AI Developments Get Out of Hand? Have They Already?
Fear not the killer robot, at least in your business AI applications, but plenty can go wrong that is a lot less dramatic.
Artificial intelligence developments are occurring in countless industries and at an increasingly rapid rate. But are these developments occurring without the proper safeguards?
There’s no way to know for sure at this point, but it’s clear that this concern is in the air. In late October, a lifelike, AI-based, female robot named Sophia mocked the idea by telling her interviewer that he had been “reading too much Elon Musk.” Of course, Sophia herself expressed a desire to “destroy humans” not long ago.
Aside from far-fetched concerns over “Terminator” movie scenarios, there’s plenty for AI developers to be concerned about, from security and safety to just doing the job well enough that companies feel justified in -- ideally, delighted by -- switching to new systems. All that requires care and the right approach.
Getting AI right the first time.
K.R. Sanjiv, chief technology officer of Wipro Limited, understands this struggle. Wipro is a leading global information technology, consulting and business process services company. Sanjiv works on the front lines of the AI revolution and thus appreciates the need for honing the effectiveness and reliability of AI projects in development.
“In creating an AI system,” he said, “my team worked to determine what the ‘right’ ecosystem for AI looks like. For our system to continuously become smarter, we knew an ecosystem made up of technology, data, industry-specific knowledge, research and security would be necessary.”
Software companies would do well to follow Sanjiv’s lead. An AI system is a multicomponent one, and it will only be as strong as its weakest piece. Even a first-rate AI system would be useless if it contained a glaring security flaw. Sanjiv added, “Without each of these key ingredients, even a human wouldn’t be able to execute what we ask of AI, meaning the ecosystem plays a vital role in such an initiative’s success.”
Safeguarding your company’s AI future.
It’s inevitable: The world’s collective future will increasingly involve AI. Putting the right ecosystem in place will ensure companies are prepared to meet this future head-on. To start preparing now, incorporate these four practices to safely accelerate your company’s AI trajectory.
1. Avoid overconfidence.
In 2015, Wired ran an eye-opening piece showing how AI systems for visual recognition can be fooled with what should be “no-brainer” test images. The tested systems -- described as “leading image-recognizing neural networks” -- failed miserably, misidentifying formless abstract shapes as baseballs and African Gray Parrots. Even worse, the systems had 99 percent confidence in the accuracy of their identifications.
Such misplaced confidence is mildly problematic when it underpins a system providing nightlife recommendations or apparel suggestions based on local weather conditions. But what if an AI system is confidently wrong about a customer’s financial records, military operations (even making moral decisions in combat) or emergency management?
For these reasons, the first step should always be to maintain a healthy skepticism about the capabilities of your AI systems.
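One lightweight guard against misplaced confidence is to let a system abstain whenever its own probability estimate falls below a threshold and route that case to a human. Here is a minimal Python sketch; the labels, logit values, and threshold are hypothetical, chosen only to echo the Wired example above:

```python
import math

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify_with_abstention(logits, labels, threshold=0.90):
    """Return a label only when confidence clears the threshold;
    otherwise flag the case for human review."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] < threshold:
        return "needs-human-review"
    return labels[best]

labels = ["baseball", "parrot", "abstract-noise"]
# A large logit gap yields near-certain confidence -- even if the input
# is adversarial noise the model has never genuinely "seen".
print(classify_with_abstention([9.0, 1.0, 1.5], labels))
print(classify_with_abstention([2.0, 1.8, 1.9], labels))
```

Note the caveat this section implies: raw softmax confidence is exactly the number the fooled networks reported at 99 percent, so a threshold like this reduces risk but does not eliminate it; adversarial inputs can still clear it.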
2. Test systems effectively.
The right attitude is a start, but alone it is not enough. To ensure effectiveness and safety, AI systems need to be rigorously tested under the right conditions.
Testing must be done before an AI system is deployed in a real-world application to head off potential unforeseen problems before they do any damage. For example, researchers at RAND simulated the design of different types of fertilizers intended to reduce atmospheric carbon and fed an AI system the characteristics of different fertilizers as learning input.
At first, the system seemed on target, but once the AI was allowed to consider delayed-release agents that are often used in fertilizers, it bypassed EPA safeguards for protecting the environment. Because all this was known to the researchers conducting the simulation, they were able to prevent the implementation of policy recommendations that could have had disastrous environmental results. Such testing should apply to any high-stakes domain, such as transportation, policing, military operations, medical device and drug development, childcare and many others.
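The lesson of the RAND simulation -- check every recommendation against hard limits before anything reaches the real world -- can be sketched as a pre-deployment test harness. Everything below (the limit names, numbers, and the toy model) is hypothetical and for illustration only, not RAND's actual setup:

```python
# Hypothetical safety envelope: hard regulatory limits that a
# recommendation must never exceed, regardless of what the model learned.
SAFETY_LIMITS = {"nitrogen_runoff_ppm": 10.0, "release_days": 120}

def violates_safety(recommendation):
    """Return the list of hard limits a proposed recommendation breaks."""
    return [key for key, limit in SAFETY_LIMITS.items()
            if recommendation.get(key, 0) > limit]

def run_pre_deployment_suite(model_fn, scenarios):
    """Run every simulated scenario and collect safety violations
    BEFORE the system is deployed anywhere real."""
    failures = []
    for scenario in scenarios:
        rec = model_fn(scenario)
        broken = violates_safety(rec)
        if broken:
            failures.append((scenario["name"], broken))
    return failures

# Toy model that optimizes one objective while ignoring runoff limits --
# the same failure mode the simulated fertilizer system showed.
def naive_model(scenario):
    return {"nitrogen_runoff_ppm": scenario["soil_nitrogen"] * 2,
            "release_days": 90}

scenarios = [{"name": "loam", "soil_nitrogen": 3.0},
             {"name": "clay", "soil_nitrogen": 8.0}]
print(run_pre_deployment_suite(naive_model, scenarios))
```

The point of the harness is that the violation surfaces as a failed test in simulation, not as environmental damage in the field.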
3. Build in accountability.
Of course, testing can never offer perfectly accurate modeling of the real world. As Bas Steunebrink, a researcher at the Swiss AI lab IDSIA, told Futurism.com: “We cannot accurately describe the environment in all its complexity; we cannot foresee what environments the agent will find itself in in the future.”
So even after simulation trials clear an AI project for deployment in the real world, vigilance -- and mechanisms for response if something goes wrong -- is critical. Project leaders and companies must maintain clear and explicit accountability standards for their AI systems.
Good auditing is a best practice here, as is encryption that follows the most up-to-date security practices. Imagine the PR catastrophe that could result from the hacking of a system that has control over the personal information of millions of clients.
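One concrete form accountability can take is an append-only, tamper-evident audit log of every consequential decision an AI system makes. The sketch below, using only Python's standard library, chains each entry to the previous one by hash, so silently altering a past record breaks the chain and is detectable on review. It's a minimal illustration of the idea, not a production design:

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained log: each entry commits to the one
    before it, so tampering with history is detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # sentinel hash for the first entry

    def record(self, actor, action, detail):
        """Append an entry and return its hash."""
        entry = {"actor": actor, "action": action,
                 "detail": detail, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self._last_hash = digest
        return digest

    def verify(self):
        """Recompute the chain; False means some entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if body["prev"] != prev or recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In practice such a log would also be encrypted and replicated off-system, but even this bare pattern gives project leaders an explicit, checkable record to point to when something goes wrong.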
4. Take a page from Tony Robbins.
Any good businessperson knows that to stay competitive, you never stop driving to get better. It turns out that life for a competitive AI system is no different.
Steunebrink’s research is dedicated to instilling this principle in the AI world through what he calls “recursive self-improvement,” in which AI systems are empowered to monitor their own performance and adjust themselves accordingly to produce enhanced future results. This happens over time and with increasing experience in the problem domain.
When 9-year-old kids do this on the violin or piano, we call it practicing. AI systems fitted with recursive self-improvement capabilities are not much different -- but the results can be startling. AI systems iterate through problem sets far faster than any human ever could, and unlike kids, they don’t get tired or stop practicing to catch their favorite TV show.
Steunebrink’s approach, which he calls EXPAI (experience-based artificial intelligence), aims to create AI systems that take baby steps but then make the micro-adjustments necessary, based on performance feedback, to tune themselves so that future actions are more accurate. It’s a potentially powerful approach that companies should keep in mind.
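To make the "baby steps plus micro-adjustments" pattern concrete, here is a toy feedback loop: the system keeps an adjustment only when its observed error drops, and shrinks its step size when it doesn't. This is emphatically not Steunebrink's EXPAI, just a minimal, hypothetical illustration of performance-feedback-driven self-tuning:

```python
def self_improving_estimate(feedback_fn, guess=0.0, step=1.0, rounds=50):
    """Toy self-tuning loop: try a small adjustment in each direction,
    keep it only if the observed error drops, and make the next
    adjustment finer whenever no direction helps."""
    error = feedback_fn(guess)
    for _ in range(rounds):
        for direction in (+1, -1):
            trial = guess + direction * step
            trial_error = feedback_fn(trial)
            if trial_error < error:
                guess, error = trial, trial_error
                break
        else:
            step *= 0.5  # no improvement: take smaller steps next time

    return guess, error

# Hypothetical task: the system never sees the target value (3.7);
# it only receives feedback on how wrong each attempt was.
guess, error = self_improving_estimate(lambda x: abs(x - 3.7))
print(round(guess, 3), round(error, 4))
```

Like the practicing 9-year-old, the loop improves only through repetition and feedback -- it just runs thousands of repetitions in the time a human needs for one.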
The AI future is already here, but instead of fretting about the sorts of doomsday scenarios that are yielding press coverage for Elon Musk, smart companies are looking to use sensible principles to create robust opportunities for AI deployment in their markets.
It’s an exciting time. Companies that follow the practices outlined above and work with partners that build integrity into their AI approach will make big strides quickly, delivering products and services that enthrall customers and improve 21st-century life for everyone.