Comparing Three Approaches to Micro in RTS Games

Abstract

We compare three promising approaches to micromanaging units in real-time strategy games. These approaches span a spectrum of human understandability: from easily understandable meta-search, which uses genetic algorithms to search the parameter space of a human-specified control algorithm; to pure potential fields, which search a less human-understandable space of potential field parameter values; to NeuroEvolution of Augmenting Topologies (NEAT), which evolves an opaque, difficult-to-understand neural network. All three approaches use a two-objective, Pareto-optimal fitness function that maximizes damage done to opponent units and minimizes damage received by friendly units. We first show that all three approaches can quickly evolve micro superior to the default AI for StarCraft 2, a popular real-time strategy game and research testbed. We then manually co-evolve micro against previously evolved micro to produce micro that plays well against good (gold-level) human StarCraft 2 players. Furthermore, we can integrate the micro produced by different approaches to control a single group of units composed of multiple unit types. These results indicate that we may choose an approach based on our need to understand unit control behavior, and they thus provide another bridge for transferring research results from autonomous units in simulation games to autonomous agents (robots) in the real world.
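To make the two-objective evaluation concrete, the sketch below shows one way a Pareto-based comparison over the two objectives (damage done, damage received) could be expressed in Python. It is a minimal illustration only, not the paper's implementation: the Candidate structure, field names, and the example scores are hypothetical placeholders, and the actual encodings and StarCraft 2 simulation details are not reproduced here.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Candidate:
    """Hypothetical evolved micro controller with its two objective scores."""
    name: str
    damage_done: float      # objective 1: to be maximized
    damage_received: float  # objective 2: to be minimized


def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is at least as good as `b` on both objectives
    and strictly better on at least one (standard Pareto dominance)."""
    at_least_as_good = (a.damage_done >= b.damage_done
                        and a.damage_received <= b.damage_received)
    strictly_better = (a.damage_done > b.damage_done
                       or a.damage_received < b.damage_received)
    return at_least_as_good and strictly_better


def pareto_front(population: List[Candidate]) -> List[Candidate]:
    """Return the non-dominated candidates (the Pareto-optimal set)."""
    return [c for c in population
            if not any(dominates(other, c)
                       for other in population if other is not c)]


if __name__ == "__main__":
    # Placeholder scores for illustration only; these are not results from the paper.
    pop = [
        Candidate("meta-search", damage_done=820.0, damage_received=310.0),
        Candidate("potential-fields", damage_done=760.0, damage_received=250.0),
        Candidate("neat", damage_done=900.0, damage_received=400.0),
        Candidate("default-ai", damage_done=500.0, damage_received=450.0),
    ]
    for c in pareto_front(pop):
        print(c.name, c.damage_done, c.damage_received)
```

In a multi-objective setting like this, no single scalar fitness is needed: candidates that trade off damage dealt against damage taken can coexist on the front, which is what lets the three approaches be compared on equal footing.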

Document Type

Conference Proceeding

DOI

https://doi.org/10.1109/CEC.2019.8790308

Keywords

meta-search, multi-objective optimization, NEAT, pure potential fields, RTS games

Publication Date

6-1-2019

Journal Title

2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings
