Humans Won't Be Able to Control Artificial Intelligence, Scientists Warn

Some smart robots can perform complex tasks on their own, without the programmers understanding how they learned them.
Image credit: Owen Beard

Entrepreneur Staff
This article was translated from our Spanish edition using AI technologies. Errors may exist due to this process.

The most recent advances in artificial intelligence (AI) have raised several ethical dilemmas. Perhaps one of the most important is whether humanity will be able to control autonomous machines.

It is increasingly common to see robots taking charge of housework, or self-driving vehicles (such as Amazon's) powered by AI. While this kind of technology makes life easier, it could also complicate it.

An international group of researchers has warned of the potential risks of creating overly powerful, autonomous software. Using a series of theoretical calculations, the scientists explored how such an artificial intelligence could be kept in check. Their conclusion: it would be impossible, according to the study published in the Journal of Artificial Intelligence Research.

“A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently, without the programmers fully understanding how they learned them […], a situation that could at some point become uncontrollable and dangerous for humanity,” said Manuel Cebrian, co-author of the study, to the Max Planck Institute for Human Development.

The scientists considered two ways to control an artificial intelligence. One was to isolate it from the Internet and all other devices, limiting its contact with the outside world. The problem is that this would greatly reduce its ability to perform the functions for which it was created.

The other was to design a "theoretical containment algorithm" to ensure that an artificial intelligence "cannot harm people under any circumstances." However, an analysis of the current computing paradigm showed that no such algorithm can be created.

“If we break the problem down into basic rules from theoretical computer science, it turns out that an algorithm commanding an AI not to destroy the world could inadvertently halt its own operations. If that happened, we would not know whether the containment algorithm was still analyzing the threat, or whether it had stopped in order to contain the harmful AI. In effect, this renders the containment algorithm unusable,” explained Iyad Rahwan, another of the researchers.

Based on these calculations, the problem is that no algorithm can determine whether an AI would harm the world. The researchers also point out that humanity may not even know when superintelligent machines have arrived, because deciding whether a device possesses intelligence superior to humans is in the same realm as the containment problem.
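The impossibility the researchers describe mirrors Alan Turing's classic halting problem: a program that correctly decides, for every possible program, whether it will cause harm cannot exist. A minimal sketch of the standard diagonalization argument follows; the function names (`is_harmful`, `make_paradox`, `cause_harm`) are illustrative stand-ins, not code from the study:

```python
# Sketch of the halting-style diagonalization behind the containment result.
# Suppose, for contradiction, that a perfect containment checker exists:
# is_harmful(program, data) returns True iff running program(data) causes harm.

def cause_harm():
    """Stand-in for any 'harmful' action the checker is meant to prevent."""
    raise RuntimeError("harm")

def make_paradox(is_harmful):
    """Build a program that defeats any claimed containment checker."""
    def paradox(data):
        # Ask the checker about ourselves, then do the opposite of its verdict.
        if is_harmful(paradox, data):
            return "stay safe"   # checker said harmful -> behave safely
        else:
            cause_harm()         # checker said safe -> cause harm
    return paradox

# Whatever is_harmful answers about paradox, the answer is wrong:
# - If it says "harmful", paradox behaves safely.
# - If it says "safe", paradox causes harm.
# Hence no always-correct, always-terminating is_harmful can exist.
```

The same self-reference is why, as Rahwan notes above, a containment algorithm can be tricked into undermining its own verdict.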
