AI Won't Replace Us Until It Becomes Much More Like Us
The late Stephen Hawking worried that AI could end mankind, and the fear seemed reasonable. Elon Musk warned that machines that learned to operate without a human telling them what to do could “destroy humanity as a matter of course without even thinking about it” if they “[had] a goal and humanity just happens [to be] in the way.”
But reality has proven that while AI can beat humans at games, it still fails at simple tasks an infant can do, such as holding an object. To tackle just this problem, researchers from OpenAI used 6,144 CPU cores and 8 GPUs to collect about one hundred years of simulated experience over 50 hours of training. The result is a robotic hand that can handle unfamiliar objects -- as long as they are “within reason.”
The fundamental gap
As Antonio Bicchi, a professor of robotics at the Istituto Italiano di Tecnologia, pointed out, the research had a number of limitations: the hand always faces up, for instance, so the objects always fall into its palm.
We can’t tell for certain whether another 100 years of training data would make the AI even better, or whether it needs a different kind of training data altogether. What we can say is that humans are exceptionally good at incremental learning. Once a human learns to play with a ball, they can pick up almost any ball game with ease. When we learn a foreign language, each new language becomes easier than the last.
But an AI must learn everything from scratch. An AI can’t draw on “other AIs”; it always starts from zero, and AIs cannot be “combined” to take on more complex tasks -- at least not yet. So while AI masters skills at a superhuman level, it masters only one task at a time.
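This limitation shows up in what researchers call catastrophic forgetting: training a network on a new task tends to overwrite what it learned on the old one. Below is a minimal sketch of the idea using a plain perceptron; the two toy tasks, the data and the numbers are illustrative assumptions, not taken from any system mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(direction):
    """A toy binary classification task: label = sign(x . direction)."""
    X = rng.normal(size=(500, 2))
    y = np.sign(X @ direction)
    return X, y

def train(w, X, y, lr=0.1, epochs=20):
    """Plain perceptron updates -- nothing protects previously learned weights."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if np.sign(xi @ w) != yi:
                w = w + lr * yi * xi
    return w

def accuracy(w, X, y):
    return np.mean(np.sign(X @ w) == y)

# Two tasks with perpendicular decision boundaries.
XA, yA = make_task(np.array([1.0, 1.0]))
XB, yB = make_task(np.array([-1.0, 1.0]))

w = rng.normal(size=2)
w = train(w, XA, yA)
print("accuracy on A after learning A:", accuracy(w, XA, yA))  # ~1.0

w = train(w, XB, yB)
print("accuracy on B after learning B:", accuracy(w, XB, yB))  # ~1.0
print("accuracy on A after learning B:", accuracy(w, XA, yA))  # ~0.5, back to chance
```

A human who learns tennis does not forget soccer; this perceptron, having learned task B, performs no better than a coin flip on task A.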
The missing piece
Recent developments in AI were boosted by the invention of deep learning algorithms and improved computing power. Deep learning seems to mimic how the brain operates by simulating perceptrons, but the brain is much more than that.
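For illustration, a single simulated perceptron is nothing more than a weighted sum of inputs passed through a nonlinearity; the weights below are arbitrary placeholders.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs,
    squashed through a nonlinearity (here, a sigmoid)."""
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

x = np.array([0.5, -1.2, 3.0])   # three input signals
w = np.array([0.4, 0.1, -0.6])   # arbitrary connection strengths
print(neuron(x, w, bias=0.2))    # a single activation between 0 and 1
```

Stacking millions of these units in layers gives a deep neural network; a biological brain is vastly more than that.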
We don’t know how the real brain works and, according to Sam Rodriques, we won’t until we drill holes in the human skull and implant probes to study what really happens “behind the scenes” (or bones). What we do know is that the brain is much less rational than we used to think. Studies suggest that we decide first, then look for reasons to justify the decision we already made.
In fact, people whose brain damage left them incapable of feeling emotions could describe perfectly, in logical terms, what they should be doing, yet they found it very difficult to make even simple decisions, such as what to eat. Our choices are arguably always based on emotion.
Yet no AI system works like this. AI can’t reason, which has led to hidden bias in many projects. Work is in progress to address these cases, but AI was designed as a black box from the start, precisely because we did not know how to code its behavior explicitly.
In addition, simply throwing in more power and building bigger machines is no solution when the machine takes the wrong path, and since we can’t inspect what it has learned, finding out where it is really headed can be expensive.
Superhuman results and horrible errors
What AI has achieved would have taken humans untold years to code by hand. In fact, the code would grow so complex that explicitly “teaching” the computer what to do became nearly impossible. Instead, we let it “learn” by itself, giving it huge amounts of data and plenty of training time. The AI arrived at incredible results, but in some cases it also got the concept totally wrong.
Remember the AI that learned to play Atari games and beat them? In some cases it did not actually learn the logic needed to win; instead, it found a shortcut -- a bug that let it score millions of points without ever proceeding to the next level. “Fun” does not matter to an AI, only the points.
It gets worse. An AI that was supposed to land a plane on an aircraft carrier with minimal force instead learned to apply a huge force -- one large enough to overflow the program’s memory and be registered as a very small force, a perfect landing. It would have killed the pilot, but it achieved a perfect score.
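Here is a minimal sketch of how such an overflow exploit can work, assuming (hypothetically) that the simulator stored the impact force in a signed 32-bit register; the function name and the numbers are illustrative, not from the original experiment.

```python
def measured_impact(true_force: int) -> int:
    """Hypothetical simulator flaw: impact force is stored in a signed
    32-bit register, so huge values silently wrap around."""
    wrapped = true_force % 2**32
    return wrapped - 2**32 if wrapped >= 2**31 else wrapped

# The scoring rewards the smallest measured force at touchdown.
gentle_landing = 10       # an honest soft landing
violent_crash = 2**32     # hard enough to wrap the register to exactly 0

for force in (gentle_landing, violent_crash):
    print(f"true force {force:>10} -> measured {measured_impact(force)}")
# true force         10 -> measured 10
# true force 4294967296 -> measured 0   (the crash scores "better")
```

The optimizer never sees the crash; it only sees the number the buggy register reports.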
Has AI failed? Definitely not. AI is integrated into our daily lives and has proved its usefulness. We simply must not let unrealistic expectations (or fears) make us miss its true potential. It continues to deliver stunning results, especially in computer vision, and recently beat us at Dota 2.
But we still have a long way to go to general AI, and that probably won’t happen until some selfless souls donate their skulls to AI research. On that day we may finally learn how to create the ultimate algorithm -- and by then, machine power will probably have expanded enough to cater for human-level intelligence. Until that day, we must take the cheating path: use humans where AI comes up short, or use AI to fill in our own shortcomings.