
Blizzard to Implement Google’s DeepMind Into StarCraft II



At BlizzCon 2016, Blizzard Entertainment and Google’s DeepMind announced that StarCraft II will be opened up to AI and machine learning researchers around the world, with DeepMind training artificial intelligence on the game. DeepMind is an AI company whose mission is to understand intelligence. StarCraft II emerged as a target for artificial intelligence researchers because of its layered complexity: players must make high-level strategic decisions while also controlling hundreds of units and making countless quick decisions.

“For almost 20 years, the StarCraft game series has been widely recognised as the pinnacle of 1v1 competitive video games, and among the best PC games of all time. The original StarCraft was an early pioneer in eSports, played at the highest level by elite professional players since the late 90s, and remains incredibly competitive to this day. The StarCraft series’ longevity in competitive gaming is a testament to Blizzard’s design, and their continual effort to balance and refine their games over the years. StarCraft II continues the series’ renowned eSports tradition, and has been the focus of our work with Blizzard.

“An agent that can play StarCraft will need to demonstrate effective use of memory, an ability to plan over a long time, and the capacity to adapt plans based on new information. Computers are capable of extremely fast control, but that doesn’t necessarily demonstrate intelligence, so agents must interact with the game within the limits of human dexterity in terms of “Actions Per Minute”. StarCraft’s high-dimensional action space is quite different from those previously investigated in reinforcement learning research; to execute something as simple as “expand your base to some location”, one must coordinate mouse clicks, the camera, and available resources. This makes actions and planning hierarchical, which is a challenging aspect of reinforcement learning.

“We’re particularly pleased that the environment we’ve worked with Blizzard to construct will be open and available to all researchers next year. We recognise the efforts of the developers and researchers from the Brood War community in recent years, and hope that this new, modern and flexible environment – supported directly by the team at Blizzard – will be widely used to advance the state of the art.

“We’ve worked closely with the StarCraft II team to develop an API that supports something similar to previous bots written with a “scripted” interface, allowing programmatic control of individual units and access to the full game state (with some new options as well). Ultimately, agents will play directly from pixels, so to get us there we’ve developed a new image-based interface that outputs simplified, low-resolution RGB image data for the map and minimap, plus the option to break out features into separate “layers”, such as terrain heightfield, unit type, unit health, etc. Below is an example of what the feature layer API will look like.”
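The article’s original example image is not reproduced here. As a rough stand-in, the sketch below shows how per-pixel feature layers might be represented in plain Python; all names, sizes, and values are hypothetical and are not taken from the actual Blizzard/DeepMind API.

```python
# Minimal sketch (hypothetical names, not the real API): feature layers as
# per-pixel 2D grids, one plane per property instead of a single RGB image.
MAP_H, MAP_W = 64, 64  # assumed low-resolution map size


def blank_layer(fill=0):
    """Return a MAP_H x MAP_W grid initialised to `fill`."""
    return [[fill] * MAP_W for _ in range(MAP_H)]


feature_layers = {
    "height_map": blank_layer(),   # terrain elevation per pixel
    "unit_type": blank_layer(),    # unit type id per pixel (0 = empty)
    "unit_health": blank_layer(),  # hit points per pixel
}

# Place a hypothetical 2x2 unit (type id 48, 45 HP) at row 10, column 20.
for r in range(10, 12):
    for c in range(20, 22):
        feature_layers["unit_type"][r][c] = 48
        feature_layers["unit_health"][r][c] = 45

# An agent reads only the layers it needs, e.g. locating visible units:
visible = [(r, c)
           for r in range(MAP_H) for c in range(MAP_W)
           if feature_layers["unit_type"][r][c] > 0]
print(len(visible))  # 4 pixels covered by the unit
```

Separating properties into distinct planes like this lets a learning agent consume exactly the signals it needs, rather than having to decode them from rendered RGB pixels.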

For more information, check the source.
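The “Actions Per Minute” cap quoted above can be pictured as a simple sliding-window rate limiter. The sketch below is purely illustrative; the class and its parameters are invented for this example and are not part of any announced API.

```python
import time
from collections import deque


class ApmLimiter:
    """Illustrative sketch (not from the announcement): cap an agent's
    actions per minute by tracking timestamps of recent actions."""

    def __init__(self, max_apm=300, clock=time.monotonic):
        self.max_apm = max_apm
        self.clock = clock    # injectable clock, handy for testing
        self.recent = deque() # timestamps of actions in the last 60 s

    def try_act(self):
        now = self.clock()
        # Drop actions that fell out of the one-minute window.
        while self.recent and now - self.recent[0] > 60.0:
            self.recent.popleft()
        if len(self.recent) < self.max_apm:
            self.recent.append(now)
            return True   # action allowed
        return False      # over the APM budget; the agent must wait


# Simulated clock: 5 attempted actions at the same instant, cap of 3 APM.
t = [0.0]
limiter = ApmLimiter(max_apm=3, clock=lambda: t[0])
results = [limiter.try_act() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

Under this kind of budget, raw mechanical speed stops being an advantage, so an agent has to win through planning and decision quality rather than superhuman clicking.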
