
In-Person Poster presentation / top 5% paper

Efficiently Computing Nash Equilibria in Adversarial Team Markov Games

Fivos Kalogiannis · Ioannis Anagnostides · Ioannis Panageas · Emmanouil-Vasileios Vlatakis-Gkaragkounis · Vaggos Chatziafratis · Stelios Stavroulakis

MH1-2-3-4 #136

Keywords: [ game-theory ] [ multi-agent-reinforcement-learning ] [ marl ] [ policy-gradient ] [ learning-in-games ] [ reinforcement-learning ] [ rl ] [ optimization ] [ theory ]


Abstract: Computing Nash equilibrium policies is a central problem in multi-agent reinforcement learning that has received extensive attention both in theory and in practice. However, in light of computational intractability barriers in general-sum games, provable guarantees have thus far either been limited to fully competitive or cooperative scenarios, or have imposed strong assumptions that are difficult to meet in most practical applications. In this work, we depart from those prior results by investigating infinite-horizon \emph{adversarial team Markov games}, a natural and well-motivated class of games in which a team of identically-interested players---in the absence of any explicit coordination or communication---competes against an adversarial player. This setting allows for a unifying treatment of zero-sum Markov games and Markov potential games, and serves as a step toward modeling more realistic strategic interactions that feature both competing and cooperative interests. Our main contribution is the first algorithm for computing stationary $\epsilon$-approximate Nash equilibria in adversarial team Markov games with computational complexity that is polynomial in all the natural parameters of the game, as well as in $1/\epsilon$. The proposed algorithm performs independent policy gradient steps for each player in the team, in tandem with best responses from the adversary; the policy for the adversary is then obtained by solving a carefully constructed linear program. Our analysis leverages non-standard techniques to establish the KKT optimality conditions for a nonlinear program with nonconvex constraints, thereby leading to a natural interpretation of the induced Lagrange multipliers.
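
To make the algorithmic structure concrete, the following is a minimal sketch, not the authors' implementation, instantiating the outer loop on a single-state (i.e., normal-form) adversarial team game: two identically-interested team players run independent projected policy gradient steps while the adversary best-responds at every iteration. All names here (project_simplex, the payoff tensor U, the step size eta, the iteration count) are hypothetical choices for illustration; in the actual algorithm these steps operate on stationary policies of an infinite-horizon Markov game, and the adversary's equilibrium policy is recovered by solving a separate linear program.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Hypothetical team payoff tensor U[a1, a2, b]: both team players receive
# U, the adversary receives -U (zero-sum between team and adversary).
n1, n2, m = 3, 3, 4
U = rng.standard_normal((n1, n2, m))

x1 = np.full(n1, 1.0 / n1)   # team player 1's mixed policy
x2 = np.full(n2, 1.0 / n2)   # team player 2's mixed policy
eta = 0.05                   # policy-gradient step size (illustrative)

for t in range(2000):
    # Adversary best response: pick the action minimizing the team's
    # expected payoff against the current team policies.
    team_values = np.einsum('i,j,ijb->b', x1, x2, U)
    b = np.argmin(team_values)

    # Independent projected policy gradient step for each team player,
    # holding the adversary's best response fixed.
    g1 = np.einsum('j,ij->i', x2, U[:, :, b])   # gradient w.r.t. x1
    g2 = np.einsum('i,ij->j', x1, U[:, :, b])   # gradient w.r.t. x2
    x1 = project_simplex(x1 + eta * g1)
    x2 = project_simplex(x2 + eta * g2)

# In the paper, the adversary's equilibrium policy is then obtained from
# a carefully constructed LP; this sketch simply reports the team's
# guaranteed value against a best-responding adversary.
print("team value vs. best response:",
      np.einsum('i,j,ijb->b', x1, x2, U).min())
```

The simplex projection keeps each team player's iterate a valid mixed policy after its independent gradient step; the final LP step from the paper is omitted here, since it depends on the Lagrange multipliers of the induced nonlinear program discussed in the analysis.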
