

Virtual presentation / top 25% paper

IS SYNTHETIC DATA FROM GENERATIVE MODELS READY FOR IMAGE RECOGNITION?

Ruifei He · Shuyang Sun · Xin Yu · Chuhui Xue · Wenqing Zhang · Philip Torr · Song Bai · Xiaojuan Qi

Keywords: [ data generation ] [ image recognition ] [ text-to-image synthesis ] [ Applications ]


Abstract:

Recent text-to-image generation models have shown promising results in generating high-fidelity, photo-realistic images. Although the results are striking to the human eye, how applicable these generated images are to recognition tasks remains under-explored. In this work, we extensively study whether and how synthetic images generated by state-of-the-art text-to-image models can be used for image recognition, focusing on two perspectives: synthetic data for improving classification models in data-scarce settings (i.e., zero-shot and few-shot learning), and synthetic data for large-scale model pre-training for transfer learning. We showcase the strengths and shortcomings of synthetic data from existing generative models and propose strategies for better applying synthetic data to recognition tasks. Code: https://github.com/CVMI-Lab/SyntheticData.
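To make the setup concrete, below is a minimal sketch of the general workflow the abstract describes: generating class-conditioned synthetic images from label-derived text prompts and organizing them for classifier training. It is not the authors' implementation (see the linked repository for that); it assumes the Hugging Face diffusers library with a Stable Diffusion checkpoint, and the class names and output paths are placeholders.

```python
# Hypothetical sketch: synthesize labeled training images from text prompts,
# then lay them out for standard image-classification training.
# Assumes the Hugging Face diffusers library and a CUDA device; the paper's
# own generation pipeline and prompt strategies may differ.
import os

import torch
from diffusers import StableDiffusionPipeline

CLASS_NAMES = ["goldfish", "tabby cat", "school bus"]  # placeholder label set
IMAGES_PER_CLASS = 8
OUT_DIR = "synthetic_train"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for name in CLASS_NAMES:
    class_dir = os.path.join(OUT_DIR, name.replace(" ", "_"))
    os.makedirs(class_dir, exist_ok=True)
    for i in range(IMAGES_PER_CLASS):
        # A simple label-to-prompt template; enriching prompts with extra
        # descriptive context is one way to diversify the synthetic data.
        prompt = f"a photo of a {name}"
        image = pipe(prompt).images[0]
        image.save(os.path.join(class_dir, f"{i:04d}.png"))
```

The resulting directory follows torchvision's ImageFolder layout, so the synthetic images can be mixed with, or substituted for, real data when training or fine-tuning a classifier in zero-shot, few-shot, or pre-training experiments.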
