ICLR 2018


Workshop

ComboGAN: Unrestricted Scalability for Image Domain Translation

Asha Anoosheh · Eirikur Agustsson ·

East Meeting Level 8 + 15 #26

This past year alone has seen unprecedented leaps in the area of learning-based image translation, namely the unsupervised model CycleGAN by Zhu et al. But experiments so far have been tailored to merely two domains at a time, and scaling them to more would require a quadratic number of models to be trained. With two-domain models taking days to train on current hardware, the number of domains quickly becomes limited by training time. In this paper, we propose a multi-component image translation model and training scheme which scales linearly - both in resource consumption and time required - with the number of domains.
