Poster
in
Workshop: Deep Generative Models for Highly Structured Data

Meta-FAVAE: Toward Fast and Diverse Few-shot Image Generation via Meta-Learning and Feedback Augmented Adversarial VAE

Fangli Ying · Aniwat Phaphuangwittayakul · Yi Guo · Xiaoyue Huang · 王 乐


Abstract:

Learning to synthesize realistic images of new categories from just one or a few examples is a challenging task for deep generative models, which usually require training on large amounts of data. In this work, we propose a data-efficient meta-learning framework for fast adaptation to few-shot image generation tasks, built on an adversarial variational auto-encoder with a feedback augmentation strategy. By training the model as a meta-learner, our method adapts faster to new tasks with a significant reduction in model parameters. We design a novel feedback-augmented adversarial variational auto-encoder that learns to synthesize new samples for an unseen category after seeing only a few examples of it; the generated interpolated samples are then fed back to expand the encoder's inputs during training, which effectively increases the diversity of the decoder's output and prevents the model from overfitting when samples of the unseen category are scarce. Additionally, through the dual concatenation of latent codes and random noise vectors, the method generalizes to more complex color images than existing meta-learning-based methods. Experimental results on three datasets show that our model adapts much faster to generation tasks for unseen categories while producing high-quality and diverse images.
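The feedback augmentation loop and the dual concatenation described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the linear maps stand in for the trained encoder and decoder of the adversarial VAE, and all dimensions, names, and the pairwise interpolation scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions; not specified in the abstract).
x_dim, z_dim, noise_dim = 8, 4, 2

# Stand-in linear "encoder" and "decoder" weights; the real model is a
# trained feedback-augmented adversarial variational auto-encoder.
W_enc = rng.normal(size=(x_dim, z_dim))
W_dec = rng.normal(size=(z_dim + noise_dim, x_dim))

def encode(x):
    # x: (n, x_dim) -> latent codes z: (n, z_dim)
    return x @ W_enc

def decode(z):
    # Dual concatenation: latent code is joined with a random noise
    # vector before decoding, as described in the abstract.
    eps = rng.normal(size=(z.shape[0], noise_dim))
    return np.concatenate([z, eps], axis=1) @ W_dec

# Few-shot support set for an unseen category (here, 3 examples).
x_support = rng.normal(size=(3, x_dim))

# One feedback-augmentation step:
z = encode(x_support)
# Interpolate between latent codes of support examples to create
# new latents (pairing scheme here is an arbitrary illustrative choice).
alpha = rng.uniform(size=(3, 1))
z_interp = alpha * z + (1 - alpha) * np.roll(z, 1, axis=0)
x_generated = decode(z_interp)

# Feedback loop: the generated interpolated samples expand the
# encoder's training inputs for the next update.
x_augmented = np.concatenate([x_support, x_generated], axis=0)
print(x_augmented.shape)  # -> (6, 8): support set doubled by generated samples
```

The key point is the last concatenation: generated samples re-enter the encoder's input batch, enlarging the effective training set for the unseen category without any additional real data.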
