Examining the Behavior of Evolutionary Algorithms in the Starcraft II Environment

Additional Funding Sources

The project described was supported by a student grant from the UI Office of Undergraduate Research.

Abstract

Autonomous software has become a large part of everyday society: it drives our cars, delivers our packages, flies our drones, and maintains our economy. These systems must learn not only at the individual scale but also how to work together at the management scale. Evolutionary AI techniques could address the problems that come with maintaining and teaching them. Starcraft provides a test-bed for analyzing AI behavior: the game requires the player to create a military infrastructure, manage an army at both the micro and macro levels, and collect and administer resources. Using Starcraft, we examine the best method for evolving two algorithms, a macro algorithm and a micro algorithm, under five training schedules: running the micro algorithm alone, running the macro algorithm alone, running both in parallel, running the macro algorithm for a short time before introducing the micro algorithm, and running both separately and then combining them after a set number of generations. Our results indicate that parallel evolutionary algorithms with interdependent goals learn best when the infrastructure is learned on its own first, and unit behavior is defined afterward.
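The training schedules above can be sketched with a toy elitist (1+1) evolutionary loop. This is only an illustration of the scheduling idea, not the project's actual implementation: the fitness function, gene encoding, and mutation operator below are all hypothetical stand-ins, with the interdependence modeled by letting micro (unit-control) genes pay off only once macro (infrastructure) genes are in place.

```python
import random

def toy_fitness(macro, micro):
    # Hypothetical stand-in for a Starcraft match score: unit control
    # only matters to the extent infrastructure already exists.
    infra = sum(macro) / len(macro)
    control = sum(micro) / len(micro)
    return infra + infra * control

def mutate(genes, rate=0.3):
    # Perturb each gene with probability `rate`, clamped to [0, 1].
    return [min(1.0, max(0.0, g + random.uniform(-0.1, 0.1)))
            if random.random() < rate else g for g in genes]

def evolve(generations, evolve_macro=True, evolve_micro=True,
           macro=None, micro=None):
    macro = macro or [0.1] * 8
    micro = micro or [0.1] * 8
    best = toy_fitness(macro, micro)
    for _ in range(generations):
        cand_ma = mutate(macro) if evolve_macro else macro
        cand_mi = mutate(micro) if evolve_micro else micro
        f = toy_fitness(cand_ma, cand_mi)
        if f >= best:  # elitist acceptance: keep the better pair
            macro, micro, best = cand_ma, cand_mi, f
    return macro, micro, best

random.seed(0)
baseline = toy_fitness([0.1] * 8, [0.1] * 8)
# Parallel schedule: both chromosomes mutate from generation zero.
_, _, parallel = evolve(200)
# Staged schedule: learn infrastructure solo, then add unit behavior.
ma, mi, _ = evolve(100, evolve_micro=False)
_, _, staged = evolve(100, macro=ma, micro=mi)
```

Because acceptance is elitist, fitness under either schedule never drops below the starting point; whether the staged schedule wins, as the study's results suggest, depends on how strongly the two goals interdepend.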

Comments

T2

