Master of Natural and Applied Science in Computer Science
Distributed decision-making in multi-agent systems (MAS) poses significant challenges for interactive behavior learning in both cooperative and competitive environments. While reinforcement learning (RL) has shown great success in single-agent domains such as Checkers, Chess, and Go, researchers are motivated to extend RL to MAS. However, as the number of agents increases, effectively coordinating every agent becomes increasingly complex. To mitigate the resulting complexity, a semi-centralized Multi-Agent Influence Dense Reinforcement Learning (MAIDRL) algorithm was previously developed, leveraging agent influence maps to facilitate effective multi-agent control in StarCraft Multi-Agent Challenge (SMAC) scenarios. While MAIDRL shows improved performance in homogeneous multi-agent scenarios, it struggles to make optimal decisions in complex heterogeneous systems. In this research, two major objectives are pursued: first, extending MAIDRL to improve performance in both homogeneous and heterogeneous scenarios, and second, unifying the representations of the state space and action space to enable transfer learning (TL), allowing knowledge gained in one scenario to be applied to other, unseen scenarios. To achieve the first objective, this study extends the DenseNet in the MAIDRL architecture and introduces a semi-centralized Multi-Agent Dense-CNN Reinforcement Learning framework (MAIDCRL) by incorporating convolutional layers into the deep model. The results demonstrate that the CNN-enabled MAIDCRL significantly enhances learning performance and achieves a faster learning rate than the existing MAIDRL, particularly in more complex heterogeneous SMAC scenarios. Additionally, a novel framework is introduced to enable TL for multi-agent RL by unifying diverse state spaces into fixed-size inputs, allowing a single deep-learning policy to be applied across different scenarios within a MAS.
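The state-space unification described above can be illustrated with a minimal sketch. The function below pads (or truncates) a variable-size set of per-agent observations into one fixed-size array; the cap `MAX_AGENTS`, the feature length `FEAT_DIM`, and the function name are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

MAX_AGENTS = 12  # assumed upper bound on agents across all scenarios
FEAT_DIM = 8     # assumed per-agent feature vector length

def unify_state(per_agent_obs):
    """Map a variable number of per-agent observations onto a fixed
    (MAX_AGENTS, FEAT_DIM) grid, zero-padding unused slots, so that a
    single policy network can consume states from any scenario."""
    fixed = np.zeros((MAX_AGENTS, FEAT_DIM), dtype=np.float32)
    n = min(len(per_agent_obs), MAX_AGENTS)
    for i in range(n):
        feats = np.asarray(per_agent_obs[i], dtype=np.float32)[:FEAT_DIM]
        fixed[i, : len(feats)] = feats
    return fixed

# A 3-agent scenario and a 5-agent scenario yield identically shaped inputs.
s3 = unify_state([np.ones(FEAT_DIM)] * 3)
s5 = unify_state([np.ones(FEAT_DIM)] * 5)
```

Because every scenario maps to the same input shape, one network can in principle be trained on one scenario and reused on another, which is the prerequisite for the transfer learning discussed here.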
Furthermore, Curriculum Transfer Learning is adopted, enabling progressive knowledge and skill acquisition through pre-designed homogeneous learning scenarios organized by difficulty levels. This approach facilitates inter- and intra-agent knowledge transfer, leading to high-performance multi-agent learning in more complex heterogeneous scenarios.
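The curriculum idea, training on scenarios ordered by difficulty while carrying learned parameters forward, can be sketched as follows. The `Scenario` class, the `train_one` placeholder, and the particular SMAC map names are illustrative assumptions rather than the thesis's actual training pipeline.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    difficulty: int

def train_one(params, scenario):
    # Placeholder for a full RL training run: here we simply record
    # which scenarios have shaped the parameters so far.
    return params + [scenario.name]

# Homogeneous SMAC-style maps ordered from easiest to hardest.
curriculum = sorted(
    [Scenario("3s5z", 3), Scenario("8m", 1), Scenario("2s3z", 2)],
    key=lambda s: s.difficulty,
)

params = []  # initialized from scratch only once
for sc in curriculum:
    # Warm-start each stage from the previous stage's parameters.
    params = train_one(params, sc)
```

The design point is the warm start: each stage begins from the previous stage's weights instead of random initialization, so skills acquired on simpler homogeneous maps are available when learning the harder heterogeneous ones.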
deep reinforcement learning, convolutional neural network, multi-agent system, transfer learning, curriculum learning, MAIDRL, MAIDCRL, SMAC, StarCraft II
Artificial Intelligence and Robotics
© Ayesha Siddika Nipu
Nipu, Ayesha Siddika, "SC-MATRL: Semi-Centralized Multi-Agent Transfer Reinforcement Learning" (2023). MSU Graduate Theses. 3884.