In-Person Poster Presentation / Poster Accept

StyleMorph: Disentangled 3D-Aware Image Synthesis with a 3D Morphable StyleGAN

Eric-Tuan Le · Edward Bartrum · Iasonas Kokkinos

MH1-2-3-4 #113

Keywords: [ Generative Models ] [ Neural Radiance Field ] [ Template-based ] [ Photorealistic ] [ 3D-aware GAN ] [ Morphable ] [ Disentanglement ] [ StyleGAN ]


Abstract:

We introduce StyleMorph, a 3D-aware generative model that disentangles 3D shape, camera pose, object appearance, and background appearance for high-quality image synthesis. We account for shape variability by morphing a canonical 3D object template, effectively learning a 3D morphable model in an entirely unsupervised manner through backpropagation. We chain 3D morphable modelling with deferred neural rendering by performing an implicit surface rendering of “Template Object Coordinates” (TOCS), which can be understood as an unsupervised counterpart to UV maps. This provides a detailed 2D TOCS map signal that reflects the compounded geometric effects of non-rigid shape variation, camera pose, and perspective projection. We combine 2D TOCS maps with an independent appearance code to condition a StyleGAN-based deferred neural rendering (DNR) network for foreground (object) synthesis; we use a separate code for background synthesis and perform late fusion to deliver the final result. We show competitive synthesis results on four datasets (FFHQ faces; AFHQ Cats, Dogs, and Wild), while achieving joint disentanglement of shape, pose, object texture, and background texture.
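To make the geometric pipeline in the abstract concrete, the sketch below shows how a shape-conditioned deformation of a canonical template can yield a 2D TOCS map via implicit surface rendering. This is a minimal illustrative PyTorch sketch, not the authors' implementation: all module names, network sizes, dimensions, and the sphere-tracing scheme are assumptions, and the StyleGAN-based DNR and background branch are omitted.

```python
# Illustrative sketch of the TOCS-rendering stage described in the abstract.
# Everything here (names, sizes, the tracing loop) is an assumption for
# exposition, not the paper's actual architecture.
import torch
import torch.nn as nn

class DeformationField(nn.Module):
    """Warps a world-space point, under a shape code, into canonical
    template coordinates -- the unsupervised 3D morphable model."""
    def __init__(self, shape_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + shape_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # predicted offset to canonical coordinates
        )

    def forward(self, x, shape_code):
        z = shape_code.expand(x.shape[0], -1)
        return x + self.net(torch.cat([x, z], dim=-1))

class TemplateSDF(nn.Module):
    """Canonical object template, modelled as a signed distance field."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def render_tocs(deform, template, rays_o, rays_d, shape_code, n_steps=32):
    """Sphere-trace each ray against the deformed template and return the
    Template Object Coordinates (TOCS) of the surface hit. Stepping by the
    canonical SDF value is approximate, since the warp distorts distances."""
    t = torch.zeros(rays_o.shape[0], 1)
    for _ in range(n_steps):
        x = rays_o + t * rays_d            # current sample in world space
        x_canon = deform(x, shape_code)    # warp into template space
        t = t + template(x_canon)          # advance by the template SDF value
    return deform(rays_o + t * rays_d, shape_code)  # TOCS at the surface

# Usage: a 64x64 orthographic "camera" looking down -z, for brevity.
if __name__ == "__main__":
    H = W = 64
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    rays_o = torch.stack([xs, ys, torch.full_like(xs, 2.0)], -1).reshape(-1, 3)
    rays_d = torch.tensor([0.0, 0.0, -1.0]).expand_as(rays_o)

    deform, template = DeformationField(), TemplateSDF()
    shape_code = torch.randn(1, 64)
    tocs = render_tocs(deform, template, rays_o, rays_d, shape_code)
    tocs_map = tocs.reshape(H, W, 3)  # the 2D TOCS map that, together with an
    print(tocs_map.shape)             # appearance code, would condition the DNR
```

The key property this sketch illustrates is that the rendered TOCS map already bakes in shape deformation, camera pose, and projection, so the downstream StyleGAN-based renderer only has to paint appearance on top of it, which is what enables the disentanglement the abstract claims.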
