Abstract: We consider the problem of translating, in an unsupervised manner, between two domains where one contains some additional information compared to the other. The proposed method disentangles the common and separate parts of these domains and, through the generation of a mask, focuses the attention of the underlying network on the desired augmentation alone, without wastefully reconstructing the entire target. This enables state-of-the-art quality and variety of content translation, as demonstrated through extensive quantitative and qualitative evaluation. Our method can also add the separate content of different guide images and domains, as well as remove existing separate content. Furthermore, our method enables weakly-supervised semantic segmentation of the separate part of each domain, where only class labels are provided. Our code is available at https://github.com/rmokady/mbu-content-tansfer.
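To illustrate the mask-based idea described above, the following is a minimal sketch (not the authors' implementation) of how a predicted mask can restrict generation to the desired augmentation while the rest of the output is copied directly from the input. All module names, shapes, and the toy architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MaskedTransfer(nn.Module):
    """Toy mask-based content transfer: illustrative only, not the paper's network."""

    def __init__(self, channels=3, hidden=32):
        super().__init__()
        # Tiny encoder standing in for the real common/separate encoders.
        self.encode = nn.Sequential(
            nn.Conv2d(2 * channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        # One head predicts a soft attention mask, another the new content.
        self.mask_head = nn.Sequential(nn.Conv2d(hidden, 1, 3, padding=1), nn.Sigmoid())
        self.content_head = nn.Sequential(nn.Conv2d(hidden, channels, 3, padding=1), nn.Sigmoid())

    def forward(self, x, guide):
        h = self.encode(torch.cat([x, guide], dim=1))
        mask = self.mask_head(h)        # where to place the guide's separate content
        content = self.content_head(h)  # what to synthesize in that region
        # Only the masked region is generated; everything else is the input itself,
        # so the network never reconstructs the entire target.
        out = mask * content + (1 - mask) * x
        return out, mask

if __name__ == "__main__":
    net = MaskedTransfer()
    x = torch.rand(1, 3, 64, 64)      # image lacking the extra content
    guide = torch.rand(1, 3, 64, 64)  # image providing the separate content
    out, mask = net(x, guide)
    print(out.shape, mask.shape)      # torch.Size([1, 3, 64, 64]) torch.Size([1, 1, 64, 64])
```

The same predicted mask, thresholded, is what makes the weakly-supervised segmentation of the separate part possible in principle, since it localizes exactly the added content.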
