Workshop

Parametric Adversarial Divergences are Good Task Losses for Generative Modeling

Gabriel Huang · Hugo Berard · Ahmed Touati · Gauthier Gidel · Pascal Vincent · Simon Lacoste-Julien

East Meeting Level 8 + 15 #22

Thu 3 May, 4:30 p.m. PDT

Generative modeling of high-dimensional data such as images is a notoriously difficult and ill-defined problem. In particular, it is unclear how to evaluate a learned generative model. In this paper, we argue that adversarial learning, pioneered with generative adversarial networks (GANs), provides an interesting framework for implicitly defining more meaningful task losses for unsupervised tasks, such as generating "visually realistic" images. By relating GANs and structured prediction under the framework of statistical decision theory, we highlight links between recent advances in structured prediction theory and the choice of the divergence in GANs. We argue that insights about hard-to-learn and easy-to-learn losses can be analogously extended to adversarial divergences. We also discuss the attractive properties of parametric adversarial divergences for generative modeling, and perform experiments showing the importance of choosing a divergence that reflects the final task.
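The parametric adversarial divergences discussed in the abstract arise from the inner maximization of the GAN objective, restricted to a parametric family of discriminators. As a rough illustration only (not the paper's experimental setup), the sketch below estimates such a divergence between two 1-D samples, using plain NumPy and a logistic-regression discriminator trained by gradient ascent; all function names and hyperparameters here are illustrative choices:

```python
import numpy as np

def parametric_adversarial_divergence(real, fake, steps=500, lr=0.1):
    """Illustrative sketch: estimate an adversarial divergence between two
    1-D samples by maximizing the GAN objective
        E[log D(real)] + E[log(1 - D(fake))]
    over a tiny parametric discriminator family, D(x) = sigmoid(w*x + b).
    Restricting D to a parametric family (rather than all functions) is
    what makes the divergence "parametric" in the sense of the abstract."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        dr = 1.0 / (1.0 + np.exp(-(w * real + b)))  # D on real samples
        df = 1.0 / (1.0 + np.exp(-(w * fake + b)))  # D on fake samples
        # Gradient ascent on the (concave) GAN discriminator objective.
        gw = np.mean((1.0 - dr) * real) - np.mean(df * fake)
        gb = np.mean(1.0 - dr) - np.mean(df)
        w += lr * gw
        b += lr * gb
    dr = 1.0 / (1.0 + np.exp(-(w * real + b)))
    df = 1.0 / (1.0 + np.exp(-(w * fake + b)))
    # Value of the inner maximization; -2*log(2) when the samples match.
    return np.mean(np.log(dr + 1e-12)) + np.mean(np.log(1.0 - df + 1e-12))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, 2000)
near = rng.normal(0.1, 1.0, 2000)  # generator close to the data distribution
far = rng.normal(3.0, 1.0, 2000)   # generator far from it
d_near = parametric_adversarial_divergence(real, near)
d_far = parametric_adversarial_divergence(real, far)
print(d_near, d_far)  # the closer generator should score a smaller divergence
```

The estimate is larger for the distant generator, which is the sense in which an adversarial divergence can serve as a task loss: it ranks generators by how distinguishable their samples are from real data under the chosen discriminator family.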
