

Poster

Towards GAN Benchmarks Which Require Generalization

Ishaan Gulrajani · Colin Raffel · Luke Metz

Great Hall BC #11

Keywords: [ adversarial divergences ] [ evaluation ] [ generative adversarial networks ]


Abstract:

For many evaluation metrics commonly used as benchmarks for unconditional image generation, trivially memorizing the training set attains a better score than models which are considered state-of-the-art; we consider this problematic. We clarify a necessary condition for an evaluation metric not to behave this way: estimating the function must require a large sample from the model. In search of such a metric, we turn to neural network divergences (NNDs), which are defined in terms of a neural network trained to distinguish between distributions. The resulting benchmarks cannot be "won" by training set memorization, while still being perceptually correlated and computable only from samples. We survey past work on using NNDs for evaluation, implement an example black-box metric based on these ideas, and validate experimentally that it can measure a notion of generalization.
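As a rough sketch of the family of metrics the abstract refers to (not a formula quoted from the paper), a neural network divergence between a data distribution p and a model distribution q is typically written as a supremum over the functions f realizable by a fixed critic architecture, estimated in practice by training that network on finite samples from both distributions:

d_{\mathcal{F}}(p, q) \;=\; \sup_{f \in \mathcal{F}} \Big( \mathbb{E}_{x \sim p}\big[f(x)\big] \;-\; \mathbb{E}_{y \sim q}\big[f(y)\big] \Big)

Because the estimate is obtained by training the critic on samples drawn from the model, scoring well requires producing a large and varied sample rather than replaying the training set, which is the property the benchmark relies on.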
