At DeepMind, Alphabet’s AI lab, researchers built virtual video-game players that mastered a capture-the-flag game by playing against other bots. Most of the time, the bots played the game better than professional human game testers. DeepMind researcher Max Jaderberg said the work, first described on the company’s blog last year, is moving toward “developing the fundamental algorithms” that could in the future lead to a “more human intelligence.” Not every lab, however, can afford the compute power required.
The Wall Street Journal reports that the goal of Alphabet, OpenAI, Facebook, Microsoft and others is to build “artificial intelligence that can solve a variety of problems in diverse settings without additional training, much the same way humans leverage prior experience to navigate new situations or to improvise.” Successfully creating this, which some refer to as “general AI,” has been difficult.
Games have long been seen as a way to move this forward. “This is another one of those game domains where you think the humans have a special capability,” said Massachusetts Institute of Technology professor Jonathan How. “To have a technology come out and say that’s not true … it created quite a buzz.”
Not everyone believes games can push AI systems forward, however: “AIs that have bested humans at various video games have been duped when small changes were made to the settings with which the bots were familiar.”
“I am less and less convinced that computer games are still in the critical path toward general AI,” said Georgia Institute of Technology AI researcher Mark Riedl. “I don’t think we’ve exhausted them yet, but we’re pretty close.” He added that, for better AI algorithms, researchers “need environments that are much more complicated than what computer games can offer.”
DeepMind’s bots had a three-week training period, during which each bot played 450,000 games, “the equivalent of four years of real-time, human play.” In the matches, bots competed “with and against each other, and against people … [and] roughly three quarters of the time, the machines outperformed humans, even when researchers tweaked the bots so they took about the same amount of time as humans to react to what was happening in the game.”
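The core idea in that training regimen, self-play, can be sketched in miniature. The code below is a toy illustration only, not DeepMind’s actual method (which involved populations of reinforcement-learning agents in a 3D game): two hypothetical agents repeatedly play rock-paper-scissors, each best-responding to the empirical mix of its opponent’s past moves (a classic scheme known as fictitious play). Without ever facing a human, their long-run move frequencies approach the game’s equilibrium strategy of one-third each.

```python
# Toy self-play sketch (illustrative assumption, NOT DeepMind's algorithm):
# two agents improve purely by playing each other, via fictitious play
# on rock-paper-scissors.

ROCK, PAPER, SCISSORS = 0, 1, 2

def best_response(opp_counts):
    """Pick the move with the best expected payoff (win=+1, loss=-1,
    tie=0) against the opponent's observed move frequencies."""
    def value(move):
        beats = (move - 1) % 3      # the move this move defeats
        loses_to = (move + 1) % 3   # the move that defeats this move
        return opp_counts[beats] - opp_counts[loses_to]
    return max(range(3), key=value)  # ties broken by lowest index

def self_play(rounds=30_000):
    # Each agent starts with a uniform prior over the opponent's moves.
    counts_a = [1, 1, 1]  # agent B's past moves, as observed by A
    counts_b = [1, 1, 1]  # agent A's past moves, as observed by B
    for _ in range(rounds):
        move_a = best_response(counts_a)
        move_b = best_response(counts_b)
        counts_a[move_b] += 1
        counts_b[move_a] += 1
    total = sum(counts_b)
    return [c / total for c in counts_b]  # agent A's empirical mix

print(self_play())  # frequencies close to [1/3, 1/3, 1/3]
```

Scaled up by many orders of magnitude, with deep networks in place of move counters and a complex 3D game in place of rock-paper-scissors, this play-against-yourself loop is what let the bots log the equivalent of four years of human play in three weeks.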
DeepMind researchers are “currently working on scaling the technology to accommodate bigger teams, and also larger, more complicated environments.”
The New York Times reports that to test AI algorithms with games, DeepMind, OpenAI and other similar labs “rely on tens of thousands of computer chips … [and] renting access to all those chips [can] cost the lab millions of dollars.” Carnegie Mellon University researcher Devendra Chaplot noted that, whereas labs such as those can bear the costs, “academic labs and other small operations cannot.” That concerns some who believe that “a few well-funded labs will dominate the future of artificial intelligence.”
But, adds NYT, “even the big labs may not have the computing power needed to move these techniques into the complexities of the real world, which may require stronger forms of AI that can learn even faster.”