Oral in Affinity Workshop: Tiny Papers Oral Session 2

Lost in Translation: GANs' Inability to Generate Simple Probability Distributions

Debanjan Dutta · Anish Chakrabarty · Swagatam Das


Abstract:

Since their inception, Generative Adversarial Networks (GANs) have marked a triumph in generative modeling. Their impeccable capacity to mimic observations from unknown probability distributions has positioned them as a widely used simulation tool. In typical applications, GANs simulate semantically rich data such as images or text from random noise. It is therefore reasonable to expect that large parametric models such as GANs can estimate standard theoretical probability densities with ease. In this paper, based on a series of disillusioning experimental findings, we show that GANs often fail to induce even the simplest statistical transformations between distributions. For example, starting from standard Gaussian noise, GANs with two-layer-deep generators are unable to perform a positional translation. Supporting theoretical tests on the generated data further corroborate our rather unsettling conclusions.
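The translation experiment the abstract describes is straightforward to reproduce in outline. Below is a minimal, hypothetical sketch (not the authors' code): a vanilla GAN whose generator is two layers deep is trained to map standard Gaussian noise to a shifted Gaussian N(mu, 1), and a Kolmogorov-Smirnov test then compares the generated sample against the target. The shift `mu`, the network widths, and all training settings are illustrative assumptions.

```python
# Minimal sketch of the positional-translation experiment (assumed setup,
# not the authors' code): can a 2-deep generator learn N(0,1) -> N(mu,1)?
import torch
import torch.nn as nn
from scipy.stats import kstest

torch.manual_seed(0)
mu = 5.0  # target mean; the "positional translation" the generator must learn

# Generator two layers deep: two linear maps with one nonlinearity in between.
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
# Small discriminator with a sigmoid output for the vanilla GAN loss.
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(5000):
    real = mu + torch.randn(128, 1)   # samples from the target N(mu, 1)
    noise = torch.randn(128, 1)       # standard Gaussian input noise
    fake = G(noise)

    # Discriminator update: push real toward 1, generated toward 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(128, 1))
              + bce(D(fake.detach()), torch.zeros(128, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator update: try to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(G(noise)), torch.ones(128, 1))
    loss_g.backward()
    opt_g.step()

# Two-sided KS test of the generated sample against the target N(mu, 1).
with torch.no_grad():
    sample = G(torch.randn(10000, 1)).squeeze().numpy()
stat, p = kstest(sample, 'norm', args=(mu, 1.0))
print(f"sample mean {sample.mean():.3f} (target {mu}), KS p-value {p:.4f}")
```

Per the paper's findings, one would expect runs of this kind to frequently miss the target, with the KS test rejecting the hypothesis that the generated sample follows N(mu, 1); the test here stands in for the paper's supporting theoretical tests.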
