It’s definitely not the first time AI beat a human at a game, with AI recently besting professional poker players and the world’s top Go player, Lee Sedol. But this time it’s different, since the competition was held in a much more complicated game environment. The bot comes from OpenAI, the research company behind Universe, a platform that allows anyone to train AI agents across the world’s supply of games and other applications.
The culmination of their research took place on the main stage of the International Dota 2 Championships, one of the biggest esports events in the world. As the team of researchers mentioned, it took only an hour of training for the bot to learn the skills necessary to dominate the built-in game bots, and two weeks of learning in real time to beat the top human players. “Over the past week, our bot was undefeated against many top professionals including SumaiL (top 1v1 player in the world) and Arteezy (top overall player in the world),” the creators of the bot said on their blog.
Ishutin also mentioned that the bot didn’t hesitate when making certain key decisions, something most human players usually do -- which again shows how strongly emotions affect the human decision-making process.
“His movement is insane, I’m really nervous. … OK. I give up, he is too strong,” Ishutin said during the gameplay.

AI wins over Go game champions?

Multiplayer online battle arena (MOBA) games are generally much more complex than games with so-called perfect information, where the players are informed about everything happening in the game. Chess and Go are examples of games with perfect information, since each player is fully informed about the position of every piece on the board.
In contrast to chess, esports gameplay for games such as Dota usually contains a lot of hidden information. That’s why intelligent game agents must learn to adapt, create a strategy of attacks and tricks, and even learn the ability to deceive their opponents.
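The difference between the two kinds of games can be sketched in a few lines of code. This is a toy illustration only -- the grid, names, and vision set below are invented for the example and don’t come from any real game engine:

```python
# Toy sketch: the same game "world" seen under perfect vs. imperfect
# information. All names and sizes here are illustrative assumptions.

WORLD = [
    ["hero_A", ".", "creep"],
    [".", "tower", "."],
    ["creep", ".", "hero_B"],
]

def perfect_info_view(world):
    """Chess/Go style: every player sees the full board."""
    return [row[:] for row in world]

def fog_of_war_view(world, visible_cells):
    """Dota style: cells outside the player's vision are hidden."""
    return [
        [cell if (r, c) in visible_cells else "???"
         for c, cell in enumerate(row)]
        for r, row in enumerate(world)
    ]

# Player A only has vision of the top-left quadrant of the map.
vision_a = {(0, 0), (0, 1), (1, 0), (1, 1)}
print(perfect_info_view(WORLD)[2][2])          # the opponent is visible
print(fog_of_war_view(WORLD, vision_a)[2][2])  # the opponent is hidden
```

Under perfect information the opposing hero at the far corner is always observable; under fog of war the agent must reason about what might be in the hidden cells -- which is exactly why adaptation and deception become viable strategies.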
As OpenAI states, its bot’s actions-per-minute rate is comparable to that of an average human player, which means it’s not the speed of execution that matters for the gameplay, it’s the skill.
That is why OpenAI’s win was immediately hailed by Elon Musk, one of the main supporters of the initiative, as an achievement bigger than Google DeepMind’s win in the ancient game of Go.
OpenAI first ever to defeat world's best players in competitive eSports. Vastly more complex than traditional board games like chess & Go.— Elon Musk (@elonmusk) August 12, 2017
There is more and more interest from big companies in experimenting with artificial intelligence within gaming environments. DeepMind recently announced a partnership with Blizzard, the creator of the wildly popular StarCraft games. Together they introduced a new way to train bots for StarCraft and released a joint research paper outlining the game environment for reinforcement learning applications.

The goal of this type of research is to train intelligent agents in game-like environments, since they are much more complex than a typical research problem and usually contain multiple subgoals, which makes it harder to build intelligence for them. That is why these environments may really push the boundaries of research and change what we think is possible for AI to achieve in the long run.
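To make the subgoal difficulty concrete, here is a minimal sketch of the reset/step interface such research environments typically expose. The environment below is a toy of my own construction, not DeepMind’s or Blizzard’s actual API; the point is that the reward only arrives after *both* subgoals are completed, which is what makes these tasks hard for a learning agent:

```python
import random

class ToyGameEnv:
    """Toy environment with two subgoals: gather 3 resources, then
    reach the exit. Reward is only granted once both are done."""

    def __init__(self):
        self.reset()

    def reset(self):
        self.resources = 0
        self.at_exit = False
        self.steps = 0
        return (self.resources, self.at_exit)

    def step(self, action):
        self.steps += 1
        if action == "gather" and self.resources < 3:
            self.resources += 1
        elif action == "move_to_exit":
            self.at_exit = True
        solved = self.resources >= 3 and self.at_exit
        done = solved or self.steps >= 50        # episode cap
        reward = 1.0 if solved else 0.0          # sparse reward
        return (self.resources, self.at_exit), reward, done

# An agent acting at random rarely stumbles onto the reward, because
# it must finish one subgoal before the other pays off.
env = ToyGameEnv()
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    obs, reward, done = env.step(random.choice(["gather", "move_to_exit"]))
    total_reward += reward
print("episode reward:", total_reward)
```

A scripted policy that gathers three times and then moves to the exit solves the toy task immediately; discovering that ordering from sparse rewards alone is the kind of problem these game environments pose at much larger scale.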