Games have long served as a training ground for AI algorithms, and not without good reason. From a report: Games -- particularly video games -- provide challenging environments against which to benchmark autonomous systems. In 2013, a team of researchers introduced the Arcade Learning Environment, a collection of over 55 Atari 2600 games designed to test a broad range of AI techniques. More recently, San Francisco research firm OpenAI detailed Procgen Benchmark, a set of 16 virtual worlds that measure how quickly models learn generalizable skills.

The next frontier might be Mega Man, if an international team of researchers has its way. In a newly published paper [PDF] on the preprint server arXiv.org, they propose EvoMan, a game-playing competition based on the eight boss fights in Capcom's cult classic Mega Man 2. As they describe it, competitors' goal is to train an AI agent to defeat every enemy, with performance evaluated by common metrics.

Why Mega Man 2? The paper's coauthors assert that few other environments test an AI's ability to win against a single enemy, how well an AI generalizes to win matches against waves of enemies, or how it coevolves to create increasingly difficult enemies. To this end, in EvoMan, an AI-controlled Mega Man -- a robot equipped with a powerful arm cannon -- must beat eight so-called Robot Masters, each equipped with a different weapon. Every time a Robot Master is defeated, the agent acquires its weapon, making it easier to defeat the bosses who remain.

As proposed, EvoMan challenge entrants would train their agents on any four enemies and measure how well the learned strategy scales up to the whole set of enemies. The agents would be expected to learn to identify and react to general patterns, such as avoiding being shot or shooting in the direction of the enemy, and to deplete an enemy's health from 100 energy points to 0 by the end of each match.
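The train-on-four, evaluate-on-eight protocol described above can be sketched as a simple harness. This is a minimal illustration, not the paper's actual implementation: the function names, the stubbed match simulator, and the gain metric (player health minus remaining enemy health) are all assumptions for the sake of the example.

```python
import random

# Mega Man 2's eight Robot Master fights, indexed 1-8.
BOSSES = list(range(1, 9))


def simulate_match(agent, boss):
    """Stub for one boss fight. A real harness would run the game
    simulator; here we fake the final health values (0-100 energy
    points each) deterministically from the agent and boss ids."""
    rng = random.Random((agent["seed"], boss).__hash__())
    player_health = rng.randint(0, 100)
    enemy_health = rng.randint(0, 100)
    return player_health, enemy_health


def gain(player_health, enemy_health):
    """One plausible per-match score: positive when the agent wins
    with health to spare, negative when the boss survives."""
    return player_health - enemy_health


def evaluate(agent, bosses):
    """Average gain across a set of bosses."""
    scores = [gain(*simulate_match(agent, b)) for b in bosses]
    return sum(scores) / len(scores)


# Train on any four enemies, then measure generalization on all eight.
agent = {"seed": 42}          # placeholder for a trained policy
train_set = BOSSES[:4]
train_score = evaluate(agent, train_set)
full_score = evaluate(agent, BOSSES)
```

The gap between `train_score` and `full_score` is what the proposed challenge would measure: how much of the strategy learned on four bosses carries over to the unseen ones.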

Read more of this story at Slashdot.



