Abstract: Generative adversarial networks (GANs) have been very successful in generative modeling; however, they remain notably more challenging to train than standard deep neural networks. In this paper, we propose new visualization techniques for the optimization landscapes of GANs that enable us to study the game vector field resulting from the concatenation of the gradients of both players. Using these visualization techniques, we try to bridge the gap between theory and practice by showing empirically that GAN training exhibits significant rotations around locally stable stationary points (LSSPs), similar to those predicted by theory on toy examples. Moreover, we provide empirical evidence that GAN training seems to converge to a stable stationary point that is a saddle point of the generator loss, not a minimum, while still achieving excellent performance.
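As a rough illustration (not taken from the paper), the sketch below computes the game vector field on a toy zero-sum bilinear game of the kind used in the theory the abstract alludes to; the specific losses, learning rate, and iteration count are illustrative assumptions. Simultaneous gradient descent on this game rotates around the stationary point rather than converging straight to it.

```python
import numpy as np

# Toy zero-sum bilinear game (a Dirac-GAN-style example, assumed here
# for illustration): generator parameter theta, discriminator
# parameter phi, with losses
#   L_G(theta, phi) =  theta * phi   (generator minimizes)
#   L_D(theta, phi) = -theta * phi   (discriminator minimizes)
# The game vector field concatenates each player's gradient with
# respect to its own parameters.

def game_vector_field(theta, phi):
    """v = (dL_G/dtheta, dL_D/dphi) = (phi, -theta)."""
    return np.array([phi, -theta])

# Simultaneous gradient descent follows -v. The Jacobian of v has
# purely imaginary eigenvalues (+-i), so the continuous dynamics
# rotate around the stationary point (0, 0); the discrete updates
# spiral slowly outward.
params = np.array([1.0, 1.0])  # (theta, phi)
lr = 0.1
trajectory = [params.copy()]
for _ in range(200):
    params = params - lr * game_vector_field(*params)
    trajectory.append(params.copy())

trajectory = np.array(trajectory)
radii = np.linalg.norm(trajectory, axis=1)
print(f"initial radius: {radii[0]:.3f}, final radius: {radii[-1]:.3f}")
# The distance to the stationary point grows rather than shrinking,
# exposing the rotational component of the game vector field.
```

Plotting `trajectory[:, 0]` against `trajectory[:, 1]` would show the spiral directly; the paper's visualization techniques probe analogous rotational behavior around LSSPs in full GAN training.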
