Video games and artificial intelligence have gone hand-in-hand since the dawn of the art form. Developers often strive to create challenging opponents, from first-person shooters like Far Cry building intelligent and fearsome combatants to sports titles like FIFA attempting to craft rival strategies that are as authentic as possible. Even League of Legends has made recent tweaks to its bots in an attempt to make non-human players even more intelligent.
In recent years, developers have tried to push the boundaries of A.I. even further. Peter Molyneux created Project Milo in an attempt to show how the Kinect could enable compelling A.I.-to-human interaction. Meanwhile, a group of German scientists at the University of Tübingen has created a version of Mario that learns how to play by itself, responds to voice commands, and has 'emotional' reactions to events that take place.
Not to be outdone, it looks as though Google has also been experimenting with A.I. and video games – this time, however, with a bit of a twist. Researchers at Google's DeepMind division have created an Artificial Intelligence that plays old-school Atari 2600 games. The A.I. learns the rules incrementally, reportedly much as a human would, and even regularly beats the scores of human participants.
The Artificial Intelligence is called the Deep Q-network agent, or DQN for short, and its scores are certainly impressive. DQN outscored humans in 23 of the 49 games it was tested on, including the likes of Space Invaders and Breakout. It's not a fixed set of pre-programmed rules, as with chess-playing computers. Instead, DQN learns by observing which in-game actions increase the game's score, thereby figuring out through trial and error how to play the game to maximum effect.
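DQN builds on a classic reinforcement-learning technique called Q-learning: the agent keeps an estimate of how valuable each action is in each situation, and nudges that estimate toward the reward it actually received plus the best value it expects afterwards. The sketch below is a minimal tabular Q-learning example on a made-up toy "corridor" game (it is illustrative only and bears no resemblance to DeepMind's actual code, which uses a deep neural network over raw screen pixels):

```python
import random

# Toy game: a 5-state corridor. Action 1 moves right, action 0 moves left;
# reaching the rightmost state (4) yields reward 1 and ends the episode.
N_STATES, ACTIONS = 5, (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate

def step(state, action):
    """Return (next_state, reward, done) for the toy corridor."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

# Q-table: estimated long-term value of taking each action in each state
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                      # training episodes
    state = 0
    for _ in range(10_000):               # step cap per episode
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: move the estimate toward
        # (observed reward) + (discounted best value of the next state)
        target = reward + (0.0 if done else GAMMA * max(q[(nxt, a)] for a in ACTIONS))
        q[(state, action)] += ALPHA * (target - q[(state, action)])
        state = nxt
        if done:
            break

# Greedy policy after training: the agent should move right from every state
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

DQN's twist on this idea is to replace the lookup table with a neural network, so the same learning rule can work when the "state" is a stream of screen pixels rather than a handful of labeled positions.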
The system isn't foolproof, however, and performs best in action-focused games. Atari games that demand advanced planning seem to lie beyond DQN's abilities. Volodymyr Mnih, co-author of the team's study, states that the bot's learning system does not function as well in games where sophisticated exploration and pathfinding are essential to in-game success. Those interested in learning more can check out the team's paper in Nature.
This is just the beginning of the team's efforts, though. Rather than simply aiming to build the best A.I. Atari player, the project was created to show how useful the A.I. could be in circumstances outside a specific set pattern. As the team puts it: "Ultimately, if the agent can drive a car in a racing game then, with a few tweaks, it can drive a real car." It's a fascinating goal, with the team promising that they are testing DQN on even more complex data, including "racing games and other types of 3D games." Who knows – it might not be long before DQN enters some retro game championships.
Source: NBC News