Poster

Context-Aware Sparse Deep Coordination Graphs

Tonghan Wang · Liang Zeng · Weijun Dong · Qianlan Yang · Yang Yu · Chongjie Zhang

Keywords: [ multi-agent reinforcement learning ]

Wed 27 Apr 6:30 p.m. PDT — 8:30 p.m. PDT
 
Spotlight presentation:

Abstract:

Learning sparse coordination graphs that adapt to the coordination dynamics among agents is a long-standing problem in cooperative multi-agent learning. This paper studies this problem and proposes a novel method that uses the variance of payoff functions to construct context-aware sparse coordination topologies. We theoretically support our method by proving that the smaller the variance of a payoff function, the less likely action selection is to change after removing the corresponding edge. Moreover, we propose learning action representations to effectively reduce the influence of payoff functions' estimation errors on graph construction. To empirically evaluate our method, we present the Multi-Agent COordination (MACO) benchmark, built by collecting classic coordination problems from the literature, increasing their difficulty, and classifying them into different types. We carry out a case study and experiments on the MACO and StarCraft II micromanagement benchmarks to demonstrate the dynamics of sparse graph learning, the influence of graph sparseness, and the learning performance of our method.
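The edge-selection idea in the abstract can be illustrated with a minimal sketch: given estimated pairwise payoff tables, score each edge by the variance of its payoff table and keep only the highest-variance fraction of edges. The function name `build_sparse_graph`, the `sparseness` parameter, and the top-k selection rule are illustrative assumptions, not the paper's exact procedure (which also learns action representations to stabilize these estimates).

```python
import numpy as np

def build_sparse_graph(payoffs, sparseness=0.5):
    """Illustrative sketch: select coordination edges by payoff variance.

    payoffs: dict mapping an edge (i, j) to a 2-D array q[a_i, a_j] of
    estimated pairwise payoffs. Edges whose payoff table has low variance
    contribute little to action selection, so they are dropped first.
    Returns the set of kept edges.
    """
    # Variance of each edge's payoff table over all joint actions.
    variances = {edge: float(np.var(q)) for edge, q in payoffs.items()}
    # Keep the top `sparseness` fraction of edges, at least one.
    n_keep = max(1, int(round(sparseness * len(payoffs))))
    ranked = sorted(variances, key=variances.get, reverse=True)
    return set(ranked[:n_keep])

# Hypothetical example: edge (0, 1) has a strongly action-dependent payoff,
# edge (1, 2) has a constant payoff, so only (0, 1) survives at 50% sparseness.
payoffs = {
    (0, 1): np.array([[0.0, 10.0], [10.0, 0.0]]),
    (1, 2): np.ones((2, 2)),
}
kept = build_sparse_graph(payoffs, sparseness=0.5)
```

A constant payoff table has zero variance, so removing its edge cannot change the greedy joint action, which is exactly the intuition the paper's variance result formalizes.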
