Poster
FaceShot: Bring Any Character into Life
Junyao Gao · Yanan Sun · Fei Shen · Xin Jiang · Zhening Xing · Kai Chen · Cai Zhao
Hall 3 + Hall 2B #169
In this paper, we present FaceShot, a novel training-free portrait animation framework designed to bring any character to life from any driven video without fine-tuning or retraining. We achieve this by producing precise and robust reposed landmark sequences with an appearance-guided landmark matching module and a coordinate-based landmark retargeting module. Together, these components harness the robust semantic correspondences of latent diffusion models to produce facial motion sequences across a wide range of character types. The resulting landmark sequences are then fed into a pre-trained landmark-driven animation model to generate the animated video. With this powerful generalization capability, FaceShot significantly extends the scope of portrait animation by lifting the restriction of realistic-portrait landmark detection, supporting any stylized character and driven video. Moreover, FaceShot is compatible with any landmark-driven animation model, significantly improving overall performance. Extensive experiments on our newly constructed character benchmark CharacBench confirm that FaceShot consistently surpasses state-of-the-art (SOTA) approaches across all character domains. More results are available at our project website: https://faceshot2024.github.io/faceshot/.
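The abstract describes a three-stage, training-free pipeline: appearance-guided landmark matching on the character, coordinate-based retargeting of the driven video's landmark motion, and a pre-trained landmark-driven animation model. The sketch below illustrates how such stages might compose; every name in it (`match_landmarks`, `retarget_landmarks`, `LandmarkDrivenAnimator`, the 68-point layout, the offset-based retargeting rule) is a hypothetical placeholder, not the authors' actual API or algorithm.

```python
import numpy as np

# Hypothetical sketch of the pipeline outlined in the abstract.
# All names and the retargeting rule are illustrative assumptions,
# not the authors' implementation.

def match_landmarks(character_image: np.ndarray,
                    reference_landmarks: np.ndarray) -> np.ndarray:
    """Appearance-guided landmark matching (placeholder).

    The paper reports leveraging semantic correspondences from a latent
    diffusion model to locate landmarks on arbitrary, even stylized,
    characters; here we simply echo the reference layout.
    """
    return reference_landmarks.copy()

def retarget_landmarks(character_landmarks: np.ndarray,
                       driven_landmarks: list) -> list:
    """Coordinate-based landmark retargeting (placeholder).

    Transfers per-frame motion of the driven video's landmarks onto the
    character's layout as coordinate offsets relative to the first
    (assumed neutral) driven frame.
    """
    neutral = driven_landmarks[0]
    return [character_landmarks + (frame - neutral) for frame in driven_landmarks]

class LandmarkDrivenAnimator:
    """Stand-in for any pre-trained landmark-driven animation model."""

    def animate(self, character_image: np.ndarray,
                landmark_sequence: list) -> list:
        # A real model would synthesize frames conditioned on the
        # landmarks; this stub just repeats the input image.
        return [character_image for _ in landmark_sequence]

def faceshot_pipeline(character_image, reference_landmarks,
                      driven_landmarks, animator):
    """Training-free composition of the three stages."""
    char_lms = match_landmarks(character_image, reference_landmarks)
    reposed = retarget_landmarks(char_lms, driven_landmarks)
    return animator.animate(character_image, reposed)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((256, 256, 3))                        # character portrait
    ref_lms = rng.random((68, 2)) * 256                      # 68-point layout (assumed)
    driven = [rng.random((68, 2)) * 256 for _ in range(8)]   # 8 driven-video frames
    frames = faceshot_pipeline(image, ref_lms, driven, LandmarkDrivenAnimator())
    print(f"Generated {len(frames)} animated frames")
```

Because each stage only exchanges landmark coordinates, the animator can be swapped for any landmark-driven animation model, which matches the compatibility claim in the abstract.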