Top players need adaptable short- and long-term strategies to grow and defend their bases while laying waste to the opposition. To complicate matters, players cannot see the whole “map” of the game, so decisions must be made on partial information.

Photo from DeepMind/Nature

After 44 days of training, an AI model from DeepMind outperformed 99.8% of human players at StarCraft II. The sophisticated e-sports game is widely regarded as sitting at the outer edge of human mastery.

The Guardian reports the achievement was the next ‘grand challenge’ after chess, Go, and poker, all games where AI systems have shown superior play.

The number of possible moves at any point is roughly 10 to the 26th power. The DeepMind system, named AlphaStar, was deliberately slowed so it couldn’t win on calculating speed alone; instead, it had to learn the strategy of play.
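To make the speed restriction concrete, here is a minimal sketch of an actions-per-minute (APM) cap of the kind described above, where an agent is blocked from acting faster than a human plausibly could. The class name, limits, and sliding-window approach are illustrative assumptions, not DeepMind’s actual implementation.

```python
from collections import deque

class ApmThrottle:
    """Hypothetical sliding-window cap on actions per minute."""

    def __init__(self, max_actions: int, window_seconds: float = 60.0):
        self.max_actions = max_actions
        self.window = window_seconds
        self.timestamps = deque()  # times of recent allowed actions

    def allow(self, now: float) -> bool:
        """Return True if the agent may act at time `now` (seconds)."""
        # Drop action timestamps that have fallen outside the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_actions:
            self.timestamps.append(now)
            return True
        return False

# Example: cap at 3 actions per simulated minute.
throttle = ApmThrottle(max_actions=3)
results = [throttle.allow(t) for t in [0.0, 1.0, 2.0, 3.0, 61.0]]
# The fourth action (t=3.0) is blocked; the fifth (t=61.0) is allowed
# once earlier actions have aged out of the 60-second window.
```

Under a cap like this, raw computation no longer helps past the action budget, so any remaining advantage must come from better decisions, which is the point of the constraint.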

The research has implications for advances in areas such as self-driving cars, personal assistants, weather and climate modelling, and other applications involving complexity.


AI becomes grandmaster in ‘fiendishly complex’ StarCraft II
THE GUARDIAN | October 30, 2019 | by Ian Sample