Humans Won't Be Able to Control Artificial Intelligence, Scientists Warn

Some smart robots can perform complex tasks on their own, without their programmers understanding how they learned them.
Image credit: Owen Beard via Unsplash.com

Entrepreneur Staff
3 min read
This article was translated from our Spanish edition using AI technologies. Errors may exist due to this process.

The most recent advances in artificial intelligence (AI) have raised several ethical dilemmas. Perhaps one of the most important is whether humanity will be able to control autonomous machines.

It is increasingly common to see AI-powered robots handling housework, or self-driving vehicles (such as Amazon's). While this type of technology makes life easier, it could also complicate it.

An international group of researchers has warned of the potential risks of creating overly powerful, autonomous software. Using a series of theoretical calculations, the scientists explored whether artificial intelligence could be kept in check. Their conclusion: it would be impossible, according to the study published in the Journal of Artificial Intelligence Research.

“A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that carry out certain important tasks independently, without the programmers fully understanding how they learned them […] a situation that could at some point become uncontrollable and dangerous for humanity,” said study co-author Manuel Cebrian of the Max Planck Institute for Human Development.

The scientists examined two ways to control an artificial intelligence. One was to isolate it from the Internet and other devices, limiting its contact with the outside world. The problem is that this would greatly reduce its ability to perform the functions for which it was created.

The other was to design a "theoretical containment algorithm" to ensure that an artificial intelligence "cannot harm people under any circumstances." However, an analysis of the current computing paradigm showed that no such algorithm can be created.

“If we decompose the problem into basic rules of theoretical computing, it turns out that an algorithm that instructed an AI not to destroy the world could inadvertently stop its own operations. If this happened, we would not know if the containment algorithm would continue to analyze the threat, or if it would have stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable,” explained Iyad Rahwan, another of the researchers.

Based on these calculations, the problem is that no algorithm can determine whether an AI would harm the world. The researchers also point out that humanity may not even know when superintelligent machines have arrived, because deciding whether a device possesses intelligence superior to humans falls into the same undecidable territory as the containment problem.
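The impossibility result the researchers describe echoes Turing's classic halting problem: any program that claims to predict another program's behavior can be defeated by a program that consults the predictor and then does the opposite. The sketch below (hypothetical names; the study itself is purely theoretical and does not involve running code) illustrates that diagonal argument, modeling "harm" as a returned string.

```python
# Sketch of the diagonalization argument behind "containment is undecidable".
# Suppose a checker `is_safe(program)` could decide, for any program, whether
# running it ever does something harmful. The adversary below defeats any
# such checker: it does the harmful thing exactly when judged safe.

def make_adversary(is_safe):
    """Build a program that inverts whatever `is_safe` predicts about it."""
    def adversary():
        if is_safe(adversary):
            return "HARM"   # judged safe -> behave harmfully
        return "SAFE"       # judged harmful -> behave safely
    return adversary

def checker_is_wrong(is_safe):
    """Return True if `is_safe` misjudges its own adversary."""
    adv = make_adversary(is_safe)
    verdict = is_safe(adv)   # what the checker predicts (True = safe)
    behaviour = adv()        # what the program actually does
    return (verdict and behaviour == "HARM") or \
           (not verdict and behaviour == "SAFE")

# Every candidate checker fails on the adversary built from it:
print(checker_is_wrong(lambda p: True))   # checker that trusts everything
print(checker_is_wrong(lambda p: False))  # checker that trusts nothing
```

Both calls print `True`: whichever verdict the checker gives, the adversary's behavior contradicts it, so no total, always-correct safety checker can exist.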

