In-Person Poster Presentation / Poster Accept

GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face Synthesis

Zhenhui Ye · Ziyue Jiang · Yi Ren · Jinglin Liu · Jinzheng He · Zhou Zhao

MH1-2-3-4 #155

Keywords: [ Neural Radiance Field ] [ Talking Face Generation ]


Abstract:

Generating photo-realistic video portraits synchronized with arbitrary speech audio is a crucial problem for film-making and virtual reality. Recently, several works have explored neural radiance fields (NeRF) for this task to improve 3D realism and image fidelity. However, the generalizability of previous NeRF-based methods is limited by the small scale of their training data. In this work, we propose GeneFace, a generalized and high-fidelity NeRF-based talking face generation method that produces natural results for a wide range of out-of-domain audio. Specifically, we learn a variational motion generator on a large lip-reading corpus and introduce a domain-adaptive post-net to calibrate its output. We then learn a NeRF-based renderer conditioned on the predicted motion, and propose a head-aware torso-NeRF to eliminate the head-torso separation problem. Extensive experiments show that our method achieves more generalized and higher-fidelity talking face generation than previous methods. Video samples and source code are available at https://geneface.github.io .
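To make the three-stage pipeline in the abstract concrete (audio-to-motion generation, domain-adaptive calibration, motion-conditioned NeRF rendering), here is a minimal PyTorch sketch. Every module name, layer choice, and dimension below is an illustrative assumption, not the authors' implementation; the real system (see the repository linked above) trains a variational audio-to-motion model, refines it with a post-net, and ray-marches a full radiance field rather than the toy per-point MLP shown here.

import torch
import torch.nn as nn

class VariationalMotionGenerator(nn.Module):
    """Audio features -> 3D facial motion; VAE-style, trained on a large lip-reading corpus (sketch)."""
    def __init__(self, audio_dim=80, latent_dim=16, motion_dim=204):  # e.g. 68 landmarks x 3 (assumed)
        super().__init__()
        self.encoder = nn.GRU(audio_dim, 128, batch_first=True)
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + audio_dim, 128), nn.ReLU(), nn.Linear(128, motion_dim))

    def forward(self, audio):  # audio: (B, T, audio_dim)
        h, _ = self.encoder(audio)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
        return self.decoder(torch.cat([z, audio], dim=-1))     # (B, T, motion_dim)

class PostNet(nn.Module):
    """Domain-adaptive post-net: calibrates corpus-domain motion toward the target speaker (sketch)."""
    def __init__(self, motion_dim=204):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(motion_dim, 256), nn.ReLU(), nn.Linear(256, motion_dim))

    def forward(self, motion):
        return motion + self.net(motion)  # residual calibration

class MotionConditionedNeRF(nn.Module):
    """Toy stand-in for the NeRF renderer: per-point RGB + density conditioned on a motion frame."""
    def __init__(self, motion_dim=204, pos_dim=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(motion_dim + pos_dim, 256), nn.ReLU(), nn.Linear(256, 4))  # (r, g, b, sigma)

    def forward(self, motion_frame, points):  # motion_frame: (motion_dim,); points: (N, 3)
        cond = motion_frame.expand(points.size(0), -1)          # broadcast condition to every sample point
        return self.mlp(torch.cat([cond, points], dim=-1))      # (N, 4)

# Usage sketch: audio features -> motion -> calibrated motion -> per-point radiance.
audio = torch.randn(1, 50, 80)                                  # 50 frames of hypothetical mel features
motion = PostNet()(VariationalMotionGenerator()(audio))
rgb_sigma = MotionConditionedNeRF()(motion[0, 0], torch.randn(1024, 3))
print(motion.shape, rgb_sigma.shape)                            # (1, 50, 204) and (1024, 4)

The residual form of the post-net reflects its role as a calibration step: the motion generator already produces plausible lip motion from the large corpus, and the post-net only needs to learn a small domain shift toward the target speaker's video.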
