Personalized Feature Translation for Expression Recognition: An Efficient Source-Free Domain Adaptation Method
Abstract
Facial expression recognition (FER) models are employed in many video-based affective computing applications, such as human-computer interaction and healthcare monitoring. However, deep FER models often struggle with subtle expressions and high inter-subject variability, limiting their performance in real-world applications. To improve performance, source-free domain adaptation (SFDA) methods have been proposed to personalize a pretrained source model using only unlabeled target domain data, thereby avoiding data privacy, storage, and transmission constraints. This paper addresses a common yet challenging scenario in which source data is unavailable for adaptation, and the only available target data is unlabeled and consists solely of neutral expressions. SFDA methods are typically not designed to adapt using target data from a single class. Furthermore, using models to generate facial images with non-neutral expressions can be unstable and computationally intensive. In this paper, the Source-Free Domain Adaptation with Personalized Feature Translation (SFDA-PFT) method is proposed. Unlike current image translation methods for SFDA, our lightweight method operates in the latent space. We first pre-train the translator on source domain data to transform the subject-specific style features of one source subject into those of another. Expression information is preserved by optimizing a combination of expression consistency and style-aware objectives. The translator is then adapted to neutral target data, without using source data or image synthesis. By translating in the latent space, SFDA-PFT avoids the complexity and noise of facial expression generation, producing discriminative embeddings optimized for classification. SFDA-PFT thus eliminates the need for image synthesis, reduces computational overhead, and adapts only a lightweight translator, making it efficient compared to image-based translation methods.
Our extensive experiments on four challenging video FER benchmark datasets (BioVid, StressID, BAH, and Aff-Wild2) show that SFDA-PFT consistently outperforms state-of-the-art SFDA methods, providing a cost-effective approach suitable for real-world, privacy-sensitive FER applications. Our code is publicly available at: github.com/MasoumehSharafi/SFDA-PFT.