

Poster

Do WGANs succeed because they minimize the Wasserstein Distance? Lessons from Discrete Generators

Ariel Elnekave · Yair Weiss

Hall 3 + Hall 2B #221
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Since WGANs were first introduced, there has been considerable debate about whether their success in generating realistic images can be attributed to minimizing the Wasserstein distance between the distribution of generated images and the training distribution. In this paper we present theoretical and experimental results showing that successful WGANs *do* minimize the Wasserstein distance, but the form of the distance that is minimized depends strongly on the discriminator architecture and its inductive biases. Specifically, we show that when the discriminator is convolutional, WGANs minimize the Wasserstein distance between *patches* in the generated images and the training images, not the Wasserstein distance between images. Our results are obtained by considering *discrete* generators, for which the Wasserstein distance between the generator distribution and the training distribution can be computed exactly and the minimum can be characterized analytically. We present experimental results with discrete GANs that generate realistic fake images (comparable in quality to their continuous counterparts) and present evidence that they are minimizing the Wasserstein distance between real and fake patches, not the distance between real and fake images.
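The abstract notes that for discrete generators the Wasserstein distance to the training distribution can be computed exactly. As a minimal illustration (not the authors' code), here is the exact Wasserstein-1 distance between two uniform discrete distributions on the real line, where the optimal coupling is the monotone (sorted) matching; the function name and sample values are hypothetical:

```python
def wasserstein1_discrete(xs, ys):
    """Exact W1 between uniform discrete distributions on xs and ys.

    Assumes equal numbers of equally weighted 1-D atoms; in 1-D the
    optimal transport plan matches the sorted supports pairwise.
    """
    assert len(xs) == len(ys)
    xs_sorted, ys_sorted = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs_sorted, ys_sorted)) / len(xs)

# "Training" atoms vs. "generated" atoms (made-up values):
real = [0.0, 1.0, 2.0, 3.0]
fake = [0.5, 1.5, 2.5, 3.5]
print(wasserstein1_discrete(real, fake))  # 0.5
```

For image-valued atoms the same exact computation requires solving a linear assignment problem between the two sets, which is what makes the discrete setting analytically tractable in the paper's argument.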
