Several methods have recently been proposed for Unsupervised Domain Mapping, the task of translating images between domains without prior knowledge of correspondences. Current approaches suffer from training instability because they rely on GANs, which are powerful but highly sensitive to hyper-parameters and prone to mode collapse. In addition, most methods rely heavily on "cycle" relationships between the domains, which enforce a one-to-one mapping. In this work, we introduce an alternative method: NAM. NAM relies on a pre-trained generative model of the source domain and aligns each target image with an image sampled from the source distribution while jointly optimizing the domain mapping function. We present experiments validating the effectiveness of our method.
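The sketch below illustrates the alignment idea described above under a set of stated assumptions: a frozen, pre-trained source-domain generator, per-image latent codes optimized jointly with a small mapping network, and a simple pixel-wise reconstruction loss standing in for whatever reconstruction objective the full method uses. The generator interface, network sizes, and hyper-parameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the non-adversarial alignment idea, assuming a PyTorch setting.
# The generator interface, MappingNet architecture, and hyper-parameters are
# illustrative assumptions, not the method's exact configuration.
import torch
import torch.nn as nn

class MappingNet(nn.Module):
    """Small convolutional network T mapping source-domain images to the target domain."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def align(target_images, generator, latent_dim=100, steps=1000, lr=1e-3):
    """Jointly optimize per-image latent codes z_i and the shared mapping T
    so that T(G(z_i)) reconstructs each target image y_i, without a discriminator."""
    n = target_images.size(0)
    z = torch.randn(n, latent_dim, requires_grad=True)   # one latent code per target image
    mapper = MappingNet(channels=target_images.size(1))
    opt = torch.optim.Adam([z] + list(mapper.parameters()), lr=lr)

    generator.eval()                                      # the pre-trained source generator stays frozen
    for p in generator.parameters():
        p.requires_grad_(False)

    for _ in range(steps):
        opt.zero_grad()
        source_like = generator(z)                        # decode source-domain images from the latent codes
        # Pixel-wise L1 loss used here as a stand-in for the reconstruction loss.
        loss = nn.functional.l1_loss(mapper(source_like), target_images)
        loss.backward()
        opt.step()
    return z, mapper
```

Because no adversarial game is played, the optimization is a plain reconstruction problem, which is the source of the training stability the abstract contrasts with GAN-based approaches.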