In-Person Poster presentation / poster accept

Towards convergence to Nash equilibria in two-team zero-sum games

Fivos Kalogiannis · Ioannis Panageas · Emmanouil-Vasileios Vlatakis-Gkaragkounis

MH1-2-3-4 #154

Keywords: [ Theory ] [ learning-in-games ] [ no-regret ] [ min-max ] [ game-theory ] [ nash-equilibrium ] [ min-max-optimization ] [ no-regret-learning ] [ optimization ]


Abstract: Contemporary applications of machine learning raise important and overlooked theoretical questions regarding optimization in two-team games. Formally, two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents, each experiencing a utility identical to that of their teammates and opposite to that of the opposing team. We focus on the solution concept of Nash equilibria and prove $\textrm{CLS}$-hardness of computing them in this class of games. To further examine the capabilities of online learning algorithms in games with full-information feedback, we propose a benchmark of a simple ---yet nontrivial--- family of such games. These games do not enjoy the properties used to prove convergence for relevant algorithms. In particular, we use a dynamical systems perspective to demonstrate that gradient descent-ascent, its optimistic variant, optimistic multiplicative weights update, and extra gradient fail to converge (even locally) to a Nash equilibrium. On a brighter note, we propose a first-order method that leverages control theory techniques and under some conditions enjoys last-iterate local convergence to a Nash equilibrium. We also believe our proposed method is of independent interest for general min-max optimization.
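The non-convergence of gradient descent-ascent described above can be illustrated on a toy problem. The sketch below is not the paper's two-team benchmark; it runs simultaneous GDA on the standard bilinear zero-sum game $f(x, y) = xy$, whose unique Nash equilibrium is $(0, 0)$, and shows the iterates spiraling away from the equilibrium rather than converging to it.

```python
# Minimal sketch (illustrative, not the paper's benchmark family):
# simultaneous gradient descent-ascent on f(x, y) = x * y.
# x minimizes, y maximizes; the Nash equilibrium is (0, 0).
def gda(x, y, lr=0.1, steps=200):
    for _ in range(steps):
        gx = y  # df/dx
        gy = x  # df/dy
        x, y = x - lr * gx, y + lr * gy  # descent step on x, ascent step on y
    return x, y

x0, y0 = 0.5, 0.5
xT, yT = gda(x0, y0)
dist0 = (x0**2 + y0**2) ** 0.5
distT = (xT**2 + yT**2) ** 0.5
# Each GDA step multiplies the distance to (0, 0) by sqrt(1 + lr^2) > 1,
# so the iterates spiral outward instead of converging.
print(dist0, distT)
```

The outward spiral follows because the linearized update matrix has eigenvalues $1 \pm i\,\eta$ of modulus $\sqrt{1+\eta^2} > 1$ for any step size $\eta > 0$, which is the standard obstruction that optimistic and extra-gradient variants were designed to overcome in convex-concave settings.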
