AgentMixer: Multi-Agent Correlated Policy Factorization

Research output: Contribution to journal › Conference article › Scientific › peer-review

1 Citation (Scopus)

Abstract

In multi-agent reinforcement learning, centralized training with decentralized execution (CTDE) methods typically assume that agents make decisions independently based on their local observations, which can fail to produce a coordinated, correlated joint policy. Coordination can be encouraged explicitly during training, with individual policies trained to imitate the correlated joint policy; however, the observation mismatch between the joint and individual policies can then cause an asymmetric learning failure. Inspired by the concept of correlated equilibrium, we introduce a strategy modification called AgentMixer that allows agents to correlate their policies. AgentMixer non-linearly combines individual partially observable policies into a joint fully observable policy. To enable decentralized execution, we introduce Individual-Global-Consistency, which guarantees mode consistency during joint training of the centralized and decentralized policies, and we prove that AgentMixer converges to an ϵ-approximate Correlated Equilibrium. On the Multi-Agent MuJoCo, SMAC-v2, Matrix Game, and Predator-Prey benchmarks, AgentMixer outperforms or matches state-of-the-art methods.
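The abstract describes the architecture only at a high level; the paper's exact networks are not reproduced here. As a rough sketch under that caveat, the PyTorch code below illustrates one plausible form of correlated policy factorization: per-agent policies act on local observations, a state-conditioned mixer combines their outputs non-linearly into a joint policy, and a simple cross-entropy penalty stands in for the paper's Individual-Global-Consistency constraint. All class names, layer sizes, and the loss form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class IndividualPolicy(nn.Module):
    """Decentralized policy: maps a local observation to action logits."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

class PolicyMixer(nn.Module):
    """Centralized mixer: combines per-agent logits non-linearly,
    conditioned on the full state, into joint-policy logits."""
    def __init__(self, n_agents: int, n_actions: int, state_dim: int, hidden: int = 64):
        super().__init__()
        self.n_agents, self.n_actions = n_agents, n_actions
        self.net = nn.Sequential(
            nn.Linear(n_agents * n_actions + state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_agents * n_actions),
        )

    def forward(self, per_agent_logits: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # Concatenate all agents' logits with the full state, mix non-linearly.
        x = torch.cat([per_agent_logits.flatten(1), state], dim=-1)
        return self.net(x).view(-1, self.n_agents, self.n_actions)

# Hypothetical shapes, for illustration only.
n_agents, obs_dim, state_dim, n_actions, batch = 3, 8, 24, 5, 32
policies = nn.ModuleList(IndividualPolicy(obs_dim, n_actions) for _ in range(n_agents))
mixer = PolicyMixer(n_agents, n_actions, state_dim)

obs = torch.randn(batch, n_agents, obs_dim)    # local observations (execution)
state = torch.randn(batch, state_dim)          # full state (training only)

ind_logits = torch.stack([pi(obs[:, i]) for i, pi in enumerate(policies)], dim=1)
joint_logits = mixer(ind_logits, state)        # correlated joint policy

# Crude stand-in for Individual-Global-Consistency: push each decentralized
# policy to place its mode where the joint policy places its mode.
igc_loss = nn.functional.cross_entropy(
    ind_logits.reshape(-1, n_actions),
    joint_logits.reshape(-1, n_actions).argmax(dim=-1),
)
```

At execution time only the `IndividualPolicy` modules would run, each on its own observation; the mixer and full state are used during centralized training, matching the CTDE setting the abstract describes.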

Original language: English
Pages (from-to): 18611-18619
Number of pages: 9
Journal: Proceedings of the AAAI Conference on Artificial Intelligence
Volume: 39
Issue number: 17
DOIs
Publication status: Published - 11 Apr 2025
MoE publication type: A4 Conference publication
Event: AAAI Conference on Artificial Intelligence - Philadelphia, United States
Duration: 25 Feb 2025 - 4 Mar 2025
Conference number: 39
