On Self-Supervised Image Representations for GAN Evaluation

Stanislav Morozov · Andrey Voynov · Artem Babenko

Oral Session 10
Thu 6 May 4:55 a.m. — 5:05 a.m. PDT

The embeddings from CNNs pretrained on ImageNet classification are the de facto standard image representations for assessing GANs via the FID, Precision, and Recall measures. Despite broad criticism of their use on non-ImageNet domains, these embeddings remain the top choice in most of the GAN literature.

In this paper, we advocate using state-of-the-art self-supervised representations to evaluate GANs on the established non-ImageNet benchmarks. These representations, typically obtained via contrastive learning, have been shown to transfer better to new tasks and domains and can therefore serve as more universal embeddings of natural images. Through an extensive comparison of recent GANs on common datasets, we show that self-supervised representations produce a more reasonable ranking of models in terms of FID, Precision, and Recall, while the ranking induced by classification-pretrained embeddings can often be misleading.
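The FID computation itself is agnostic to the encoder: it fits a Gaussian to each set of embeddings and measures the Fréchet distance between the two fits. A minimal sketch is below; the `frechet_distance` helper and the assumption that embeddings are precomputed `(n_samples, dim)` arrays (from any encoder, classification-pretrained or self-supervised) are illustrative, not from the paper.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Fréchet distance between Gaussian fits of two embedding sets.

    feats_*: (n_samples, dim) arrays of image embeddings. Swapping the
    encoder that produces them (e.g. a supervised vs. a contrastive one)
    changes the metric's ranking of models, not this formula.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    diff = mu1 - mu2
    # Matrix square root of the covariance product; drop the tiny
    # imaginary parts that can arise from numerical error.
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical embedding sets yield a distance near zero, and shifting one set's mean strictly increases it, which is a quick sanity check when plugging in a new encoder.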
