

Poster

An Online Learning Approach to Generative Adversarial Networks

Paulina Grnarova · Kfir Y Levy · Aurelien Lucchi · Thomas Hofmann · Andreas Krause

East Meeting level; 1,2,3 #19

Abstract:

We consider the problem of training generative models with a Generative Adversarial Network (GAN). Although GANs can accurately model complex distributions, they are known to be difficult to train due to instabilities arising from a challenging minimax optimization problem. In this paper, we view the problem of training GANs as finding a mixed strategy in a zero-sum game. Building on ideas from online learning, we propose a novel training method named Chekhov GAN. On the theory side, we show that our method provably converges to an equilibrium for semi-shallow GAN architectures, i.e., architectures where the discriminator is a one-layer network and the generator is arbitrary. On the practical side, we develop an efficient heuristic guided by our theoretical results, which we apply to commonly used deep GAN architectures. On several real-world tasks, our approach exhibits improved stability and performance compared to standard GAN training.
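To make the mixed-strategy, online-learning view concrete, below is a minimal PyTorch sketch in which each player is trained against a small queue of frozen snapshots of its past opponents rather than only the current one. This is an illustration of the general idea stated in the abstract, not the paper's exact Chekhov GAN heuristic; the toy network architectures, queue size K, snapshot interval, and 2-D Gaussian target data are all assumptions made for this example.

```python
# Illustrative sketch: train each GAN player against a mixture of past opponents.
# NOT the authors' exact algorithm; hyperparameters and data are assumptions.
import copy
from collections import deque

import torch
import torch.nn as nn

latent_dim, data_dim, K = 8, 2, 5  # K = number of frozen past opponents kept

G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

past_G: deque = deque(maxlen=K)  # queues of frozen past players
past_D: deque = deque(maxlen=K)

def real_batch(n=64):
    # Toy target distribution: a fixed 2-D Gaussian (assumption for this demo).
    return torch.randn(n, data_dim) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(1000):
    x_real = real_batch()
    z = torch.randn(x_real.size(0), latent_dim)

    # Discriminator step: classify real data vs. samples from the current
    # generator and all stored past generators (a uniform mixture).
    opt_D.zero_grad()
    d_loss = bce(D(x_real), torch.ones(x_real.size(0), 1))
    gens = [G] + list(past_G)
    for g in gens:
        x_fake = g(z).detach()
        d_loss = d_loss + bce(D(x_fake), torch.zeros(z.size(0), 1)) / len(gens)
    d_loss.backward()
    opt_D.step()

    # Generator step: fool the current discriminator and all stored past
    # discriminators, averaged uniformly.
    opt_G.zero_grad()
    discs = [D] + list(past_D)
    g_loss = sum(bce(d(G(z)), torch.ones(z.size(0), 1)) for d in discs) / len(discs)
    g_loss.backward()
    opt_G.step()

    # Periodically snapshot frozen copies of both players into the queues.
    if step % 100 == 0:
        past_G.append(copy.deepcopy(G).eval())
        past_D.append(copy.deepcopy(D).eval())
```

The intuition behind such a scheme is that playing against an average of past opponents approximates a mixed strategy and can damp the oscillations that plague alternating single-opponent GAN updates; how the mixture is formed and weighted is a design choice, and the paper's heuristic may differ in those details.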
