Abstract

Relative overgeneralization (RO) occurs in cooperative multi-agent learning tasks when agents converge towards a suboptimal joint policy due to overfitting to suboptimal behaviors of other agents. No methods have been proposed for addressing RO in multi-agent policy gradient (MAPG) methods, although these methods produce state-of-the-art results. To address this gap, we propose a general yet simple framework to enable optimistic updates in MAPG methods that alleviate the RO problem. Our approach involves clipping the advantage to eliminate negative values, thereby facilitating optimistic updates in MAPG. The optimism prevents individual agents from quickly converging to a local optimum. Additionally, we provide a formal analysis to show that the proposed method retains optimality at a fixed point. In extensive evaluations on a diverse set of tasks, including the Multi-agent MuJoCo and Overcooked benchmarks, our method outperforms strong baselines on 13 out of 19 tested tasks and matches their performance on the rest.
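
As a rough illustration of the advantage-clipping idea mentioned in the abstract, the sketch below applies it to a plain REINFORCE-style objective in PyTorch. The function name `optimistic_pg_loss` and the toy setup are illustrative assumptions for this page, not the authors' implementation; the paper applies the idea within MAPG methods.

```python
import torch

def optimistic_pg_loss(log_probs: torch.Tensor, advantages: torch.Tensor) -> torch.Tensor:
    """REINFORCE-style loss using only non-negative advantages.

    Setting negative advantages to zero removes the penalty that would
    otherwise push an agent away from actions that looked bad only because
    its teammates happened to act poorly -- the "optimistic" update the
    abstract describes. (Illustrative sketch, not the paper's exact loss.)
    """
    optimistic_adv = torch.clamp(advantages, min=0.0)      # drop negative advantages
    return -(log_probs * optimistic_adv.detach()).mean()   # negate for gradient descent


if __name__ == "__main__":
    # Toy usage with random data: 8 transitions, 4 discrete actions.
    logits = torch.randn(8, 4, requires_grad=True)
    actions = torch.randint(0, 4, (8, 1))
    log_probs = torch.log_softmax(logits, dim=-1).gather(1, actions).squeeze(1)
    advantages = torch.randn(8)
    loss = optimistic_pg_loss(log_probs, advantages)
    loss.backward()  # gradient flows only through actions with positive advantage
```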

Original language: English
Pages (from-to): 61186-61202
Number of pages: 17
Journal: Proceedings of Machine Learning Research
Volume: 235
Publication status: Published - 2024
MoE publication type: A4 Conference publication
Event: International Conference on Machine Learning - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024
Conference number: 41