Towards convergence to Nash equilibria in two-team zero-sum games
Fivos Kalogiannis · Ioannis Panageas · Emmanouil-Vasileios Vlatakis-Gkaragkounis
Keywords:
optimization
no-regret-learning
min-max-optimization
nash-equilibrium
game-theory
min-max
no-regret
learning-in-games
Theory
2023 in-person poster presentation (poster accept)
Abstract
Contemporary applications of machine learning raise important yet overlooked theoretical questions regarding optimization in two-team games. Formally, two-team zero-sum games are defined as multi-player games where players are split into two competing sets of agents, each experiencing a utility identical to that of their teammates and opposite to that of the opposing team. We focus on the solution concept of Nash equilibria and prove $\textrm{CLS}$-hardness of computing them in this class of games. To further examine the capabilities of online learning algorithms in games with full-information feedback, we propose a benchmark of a simple, yet nontrivial, family of such games. These games do not enjoy the properties used to prove convergence for relevant algorithms. In particular, we use a dynamical systems perspective to demonstrate that gradient descent-ascent, its optimistic variant, optimistic multiplicative weights update, and extra gradient fail to converge (even locally) to a Nash equilibrium. On a brighter note, we propose a first-order method that leverages control theory techniques and, under some conditions, enjoys last-iterate local convergence to a Nash equilibrium. We also believe our proposed method is of independent interest for general min-max optimization.
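The non-convergence of gradient descent-ascent that the abstract refers to can be illustrated on the simplest min-max problem. The sketch below is a toy example on the bilinear objective $f(x, y) = xy$, not the team-game benchmark constructed in the paper: it shows the known failure mode where simultaneous GDA iterates spiral away from the unique Nash equilibrium at the origin instead of converging to it.

```python
import numpy as np

# Simultaneous gradient descent-ascent on f(x, y) = x * y.
# The unique Nash equilibrium is (0, 0): x descends, y ascends.
eta = 0.1          # step size (illustrative choice)
x, y = 1.0, 1.0    # arbitrary non-equilibrium starting point
r0 = np.hypot(x, y)                    # initial distance to the equilibrium
for _ in range(100):
    gx, gy = y, x                      # df/dx = y, df/dy = x
    x, y = x - eta * gx, y + eta * gy  # descent step in x, ascent step in y
r1 = np.hypot(x, y)
print(r1 > r0)     # distance to (0, 0) has grown: GDA diverges
```

Each simultaneous update multiplies the iterate by a rotation-and-scaling matrix with spectral radius $\sqrt{1 + \eta^2} > 1$, so the distance to the equilibrium grows geometrically; this is the kind of local instability that the paper's dynamical-systems analysis establishes for GDA and related methods on its benchmark games.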