Latent Normalizing Flows for Many-to-Many Cross-Domain Mappings

Shweta Mahajan, Iryna Gurevych, Stefan Roth

Abstract: Learned joint representations of images and text form the backbone of several important cross-domain tasks such as image captioning. Prior work mostly maps both domains into a common latent representation in a purely supervised fashion. This is rather restrictive, however, as the two domains follow distinct generative processes. Therefore, we propose a novel semi-supervised framework, which models shared information between domains and domain-specific information separately. The information shared between the domains is aligned with an invertible neural network. Our model integrates normalizing flow-based priors for the domain-specific information, which allows us to learn diverse many-to-many mappings between the two domains. We demonstrate the effectiveness of our model on a range of tasks, including image captioning and text-to-image synthesis.
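
To give a concrete sense of what a "normalizing flow-based prior" over domain-specific latents looks like, below is a minimal sketch in PyTorch. It uses a generic RealNVP-style affine coupling flow; the class names (`FlowPrior`, `AffineCoupling`), the latent dimension, and the layer count are illustrative assumptions and are not taken from the paper's actual architecture.

```python
# Illustrative sketch only: a generic affine-coupling normalizing flow
# that could serve as a learned prior over domain-specific latents.
# All names and hyperparameters here are hypothetical, not the paper's.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: transforms one half of the latent
    conditioned on the other half. The Jacobian is triangular, so its
    log-determinant is just the sum of the predicted log-scales."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=1)
        s = torch.tanh(s)                  # bound the scales for stability
        z2 = z2 * torch.exp(s) + t
        log_det = s.sum(dim=1)
        return torch.cat([z1, z2], dim=1), log_det

class FlowPrior(nn.Module):
    """Stack of coupling layers; the log-density of a latent under the
    flow-based prior follows from the change-of-variables formula."""
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))
        self.base = torch.distributions.Normal(0.0, 1.0)

    def log_prob(self, z):
        log_det = torch.zeros(z.size(0))
        for layer in self.layers:
            z, ld = layer(z)
            log_det = log_det + ld
            z = z.flip(dims=[1])           # permute so both halves get updated
        # log p(z) = log p_base(T(z)) + log |det dT/dz|
        return self.base.log_prob(z).sum(dim=1) + log_det

prior = FlowPrior(dim=32)
z = torch.randn(8, 32)                     # e.g., a batch of domain-specific latents
print(prior.log_prob(z).shape)             # torch.Size([8])
```

The design point such a prior illustrates: unlike a fixed Gaussian, the flow can fit a multi-modal latent distribution, which is what permits sampling diverse outputs (many captions per image, many images per caption) from the same shared representation.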
