

Virtual presentation / top 25% paper

Few-shot Cross-domain Image Generation via Inference-time Latent-code Learning

Arnab Mondal · Piyush Tiwary · Parag Singla · Prathosh AP

Keywords: [ generative adversarial network ] [ generative domain adaptation ] [ generative models ]


Abstract:

In this work, our objective is to adapt a deep generative model trained on a large-scale source dataset to multiple target domains with scarce data. Specifically, we focus on adapting a pre-trained Generative Adversarial Network (GAN) to a target domain without re-training the generator. Our method draws motivation from the fact that out-of-distribution samples can be 'embedded' into the latent space of a pre-trained source GAN. We propose to train a small latent-generation network during the inference stage, each time a batch of target samples is to be generated. These target latent codes are fed to the source generator to obtain novel target samples. Despite using the same small set of target samples and the same source generator, multiple independent training episodes of the latent-generation network yield diverse generated target samples. Our method, albeit simple, can be used to generate data from multiple target distributions using a generator trained on a single source distribution. We demonstrate the efficacy of our surprisingly simple method in generating multiple target datasets with only a single source generator and a few target samples.
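The following is a minimal sketch of how such an inference-time latent-code learning loop might look, assuming a frozen, pre-trained source generator that maps latent codes to images. The class and function names (`LatentGenerator`, `generate_target_batch`), the MLP architecture, the MSE reconstruction loss, and all hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of inference-time latent-code learning with a frozen source GAN.
# All names, losses, and hyperparameters here are hypothetical stand-ins.
import torch
import torch.nn as nn


class LatentGenerator(nn.Module):
    """Small network mapping noise vectors to latent codes for the frozen source generator."""

    def __init__(self, noise_dim=64, latent_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, z):
        return self.net(z)


def generate_target_batch(source_generator, target_samples, noise_dim=64,
                          latent_dim=512, steps=500, lr=1e-3):
    """Train a fresh latent-generation network on a few target samples, then
    feed its latent codes to the frozen source generator to synthesize a batch."""
    source_generator.eval()
    for p in source_generator.parameters():
        p.requires_grad_(False)  # the source generator stays fixed; only latent codes are learned

    latent_gen = LatentGenerator(noise_dim, latent_dim)
    optimizer = torch.optim.Adam(latent_gen.parameters(), lr=lr)
    n = target_samples.shape[0]

    for _ in range(steps):
        z = torch.randn(n, noise_dim)
        w = latent_gen(z)                   # candidate latent codes
        fake = source_generator(w)          # images from the frozen generator
        # Simple pixel-wise reconstruction loss (an assumption; the paper may use other losses).
        loss = nn.functional.mse_loss(fake, target_samples)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Fresh noise after training yields novel target-domain samples; repeating the whole
    # procedure as an independent training episode adds diversity across batches.
    with torch.no_grad():
        z_new = torch.randn(n, noise_dim)
        return source_generator(latent_gen(z_new))
```

In this reading, each call trains the small latent-generation network from scratch, which is why independent episodes produce different latent codes, and hence diverse samples, even from the same few target examples.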
