IPAB Workshop: Adaptive Experience Sharing in Multi-Agent Reinforcement Learning: A Unified Approach for Heterogeneous and Homogeneous Agents

In Multi-Agent Reinforcement Learning (MARL), experience sharing plays a crucial role in improving learning efficiency and performance. Traditional approaches to MARL fall into three primary paradigms: independent learning, parameter sharing, and experience sharing. Independent learning treats each agent as a separate learner, which suits heterogeneous settings where individual policies differ. Parameter sharing, by contrast, is used for homogeneous agents, where a single neural network governs all agent behaviors. Experience sharing sits between these two: agents maintain separate neural networks but exchange experiences during training. In this talk, I present a novel, generalized framework for experience sharing that adapts to the diversity among agents, whether homogeneous or heterogeneous. This adaptive paradigm monitors the agents' evolving policies during training and learns a robust, dynamic experience-sharing protocol from the emerging policy differences. The proposed approach applies across a wide spectrum of agent behaviors, offering improved performance in scenarios where traditional methods fall short. I will demonstrate its effectiveness through empirical results, showcasing broad applicability and robustness across MARL environments.
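The abstract describes the adaptive protocol only at a high level. As one plausible reading, the minimal Python sketch below lets an agent adopt another agent's recent transitions only while the average KL divergence between their policies, measured on a probe batch of states, stays below a threshold. Every name here (`adaptive_share`, `share_threshold`, the toy softmax policies) is an illustrative assumption, not a detail from the talk.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-8):
    """KL(p || q) between two discrete action distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def policy_divergence(policy_a, policy_b, states):
    """Average KL divergence between two policies over a probe batch of states."""
    return np.mean([kl_divergence(policy_a(s), policy_b(s)) for s in states])

def adaptive_share(agents, buffers, probe_states, share_threshold=0.5):
    """Copy experiences between agents whose policies are currently similar.

    agents: list of callables, state -> action-probability vector
    buffers: list of per-agent transition lists
    share_threshold: max average KL at which sharing occurs (illustrative)
    """
    n = len(agents)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if policy_divergence(agents[i], agents[j], probe_states) < share_threshold:
                # Agent i adopts agent j's recent experience; a real protocol
                # would also importance-weight these off-policy samples.
                buffers[i].extend(buffers[j][-10:])

# Toy usage: two near-identical agents and one diverging agent.
def make_policy(logits):
    def policy(state):
        z = logits + 0.1 * state  # state-dependent shift, purely illustrative
        e = np.exp(z - z.max())
        return e / e.sum()
    return policy

rng = np.random.default_rng(0)
agents = [make_policy(np.zeros(4)),
          make_policy(np.zeros(4)),
          make_policy(np.array([3.0, 0.0, 0.0, 0.0]))]
buffers = [[("s", a, 0.0, "s'")] * 5 for a in range(3)]
probe = rng.normal(size=(8, 4))
adaptive_share(agents, buffers, probe, share_threshold=0.2)
print([len(b) for b in buffers])  # the two similar agents exchange; the outlier does not
```

In this sketch the sharing decision is recomputed every call, so the protocol adapts as policies drift apart or converge during training; the talk's learned protocol presumably replaces the fixed threshold with something trained from the observed policy differences.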

Date: Thursday, 19 September 2024, 13:00 to 14:00
Speaker: Atish Dixit
Affiliation: University of Edinburgh
Location: Informatics Forum, G.03