MAIDRL: Semi-centralized Multi-Agent Reinforcement Learning using Agent Influence


In recent years, reinforcement learning algorithms have been applied in multi-agent systems to help agents interact and cooperate on a variety of tasks. Controlling multiple agents simultaneously is extremely challenging, as the complexity increases drastically with the number of agents in the system. In this study, we propose MAIDRL, a novel semi-centralized deep reinforcement learning algorithm for mixed cooperative and competitive multi-agent environments. Specifically, we design a robust DenseNet-style actor-critic deep neural network that controls multiple agents by combining local observations with abstracted global information to compete against opponent agents. We extract common knowledge through influence maps that account for both enemy and friendly agents, guiding unit positioning and decision-making in combat. Compared to centralized methods, our design promotes a thorough understanding of each unit's potential influence without requiring a complete view of the global state; unlike fully decentralized methods, it also enables a multi-agent understanding of common goals. The proposed method has been evaluated on StarCraft Multi-Agent Challenge scenarios in the real-time strategy game StarCraft II, and the results show that agents controlled by MAIDRL statistically perform better than, or as well as, those controlled by centralized and decentralized methods.
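To illustrate the kind of abstracted global information the abstract refers to, the following is a minimal sketch of an influence map over a grid battlefield. It is not the paper's implementation: the grid size, per-unit strength values, the exponential distance decay, and the use of Chebyshev distance are all assumptions made for this example. Friendly units contribute positive influence and enemy units negative influence, so the sign of a cell summarizes which side dominates that region.

```python
import numpy as np

def influence_map(grid_shape, units, decay=0.5):
    """Illustrative influence map (not the paper's exact formulation).

    Each unit projects its strength onto every grid cell, attenuated
    exponentially by Chebyshev distance. Friendly units add positive
    influence, enemy units negative, so summing over all units yields
    a single map whose sign indicates which side dominates each cell.

    units: iterable of (row, col, strength, is_friendly) tuples.
    """
    h, w = grid_shape
    ys, xs = np.mgrid[0:h, 0:w]            # cell coordinates
    imap = np.zeros(grid_shape, dtype=float)
    for row, col, strength, is_friendly in units:
        dist = np.maximum(np.abs(ys - row), np.abs(xs - col))
        sign = 1.0 if is_friendly else -1.0
        imap += sign * strength * decay ** dist
    return imap

# Example: one friendly unit at (1, 1) and one enemy at (3, 3) on a 5x5 grid.
units = [(1, 1, 10.0, True), (3, 3, 10.0, False)]
imap = influence_map((5, 5), units)
# Cells near (1, 1) are positive, cells near (3, 3) are negative, and the
# midpoint (2, 2) cancels to zero because the two units are symmetric.
```

A map like this can be stacked with each agent's local observation as extra input channels, which is one common way to give decentralized controllers a shared, abstracted view of the global state.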


Computer Science


Journal Title

IEEE Conference on Games