

Virtual presentation / poster accept

Achieving Near-Optimal Individual Regret & Low Communications in Multi-Agent Bandits

Xuchuang Wang · Lin Yang · Yu-Zhen Janice Chen · Xutong Liu · Mohammad Hajiesmaili · Don Towsley · John C.S. Lui

Keywords: [ Multi-agent multi-armed bandits ] [ communication ] [ individual regret ] [ Reinforcement Learning ]


Abstract:

Cooperative multi-agent multi-armed bandits (CM2AB) study how distributed agents cooperatively play the same multi-armed bandit game. Most existing CM2AB works focus on maximizing the group performance of all agents---the sum of all agents' individual performance (i.e., individual reward). However, in many applications, the performance of the system is more sensitive to the ``bad'' agent---the agent with the worst individual performance. For example, in a drone swarm, a ``bad'' agent may crash into other drones and severely degrade the system performance. In that case, the key to the learning algorithm design is to coordinate computational and communication resources among agents so as to optimize the individual learning performance of the ``bad'' agent. In CM2AB, maximizing the group performance is equivalent to minimizing the group regret of all agents, while optimizing the worst individual performance corresponds to minimizing the maximum (worst) individual regret among agents. Minimizing the maximum individual regret was largely ignored in the prior literature, and currently there is little work on how to minimize this objective with low communication overhead. In this paper, we propose an algorithm that is near-optimal with respect to both individual and group regrets; in addition, we propose a novel communication module in the algorithm, which needs only $O(\log(\log T))$ communication times, where $T$ is the number of decision rounds. We also conduct simulations to illustrate the advantage of our algorithm by comparing it to other known baselines.
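For concreteness, the two objectives contrasted in the abstract can be written as follows (the notation here is ours, assuming $M$ agents where agent $m$ incurs individual regret $R_m(T)$ over $T$ rounds):

$$
R_{\text{group}}(T) \;=\; \sum_{m=1}^{M} R_m(T),
\qquad
R_{\max}(T) \;=\; \max_{1 \le m \le M} R_m(T).
$$

Prior CM2AB work typically bounds $R_{\text{group}}(T)$; this paper additionally targets $R_{\max}(T)$, the regret of the worst-off agent.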
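The abstract does not spell out how the $O(\log(\log T))$ communication bound arises. One standard way to obtain a bound of this form is a doubly-exponential epoch schedule, sketched below in Python. This is a minimal illustrative sketch under our own assumptions, not the paper's actual communication module: the function name and the schedule $t_k = 2^{2^k}$ are hypothetical.

```python
def communication_rounds(T: int) -> list[int]:
    """Rounds at which agents communicate under a (hypothetical)
    doubly-exponential epoch schedule t_k = 2**(2**k).  The number
    of such rounds up to horizon T is O(log log T)."""
    rounds, k = [], 0
    while 2 ** (2 ** k) <= T:
        rounds.append(2 ** (2 ** k))
        k += 1
    return rounds

# Example: for T = 1_000_000 the schedule is [2, 4, 16, 256, 65536],
# i.e. only 5 communications, matching ceil(log2(log2(T))) = 5.
print(communication_rounds(1_000_000))
```

Between two consecutive communication rounds, each agent would act on local statistics only, so the schedule directly controls the communication overhead; whether the paper uses exactly this schedule is not stated in the abstract.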
