

Workshop

Distributional Adversarial Networks

Chengtao Li · David Alvarez-Melis · Keyulu Xu · Stefanie Jegelka · Suvrit Sra

East Meeting Level 8 + 15 #14

Wed 2 May, 11 a.m. PDT

In most current formulations of adversarial training, the discriminator can be expressed as a single-input operator, that is, the mapping it defines is separable over observations. In this work, we argue that this property may help explain the infamous mode collapse phenomenon in adversarially-trained generative models. Inspired by discrepancy measures and two-sample tests between probability distributions, we propose distributional adversaries that operate on samples, i.e., on sets of multiple points drawn from a distribution, rather than on single observations. We show how they can be easily implemented on top of existing models. Various experimental results show that generators trained in combination with our distributional adversaries are much more stable and markedly less prone to mode collapse than traditional models trained with observation-wise discriminators. In addition, applying our framework to domain adaptation yields strong improvements over baselines.
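To illustrate the core distinction (this is a generic sketch, not the paper's actual architecture or training objective): an observation-wise discriminator scores each point independently, whereas a distributional criterion, such as the kernel Maximum Mean Discrepancy used in two-sample tests, depends jointly on an entire sample. The MMD example below shows a statistic that cannot be written as a sum of per-observation terms on a single sample:

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise RBF kernel values between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # Biased estimate of squared Maximum Mean Discrepancy between
    # the samples X and Y. Note it couples all points in each sample:
    # it is not separable into independent per-observation scores.
    return (rbf_kernel(X, X, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean())

rng = np.random.default_rng(0)
real  = rng.normal(0.0, 1.0, size=(256, 2))   # "data" sample
close = rng.normal(0.0, 1.0, size=(256, 2))   # same distribution
far   = rng.normal(3.0, 1.0, size=(256, 2))   # shifted distribution

# Samples from the same distribution yield a small discrepancy;
# a shifted distribution yields a much larger one.
print(mmd2(real, close) < mmd2(real, far))
```

In an adversarial setup, a sample-based adversary in this spirit would compare a whole batch of generated points against a batch of real ones, so covering only one mode of the data cannot fool it the way it can fool a per-observation classifier.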
