

Poster

Influence Functions for Scalable Data Attribution in Diffusion Models

Bruno Mlodozeniec · Runa Eschenhagen · Juhan Bae · Alexander Immer · David Krueger · Richard E Turner

Hall 3 + Hall 2B #515
Thu 24 Apr midnight PDT — 2:30 a.m. PDT
 
Oral presentation: Oral Session 1C
Wed 23 Apr 7:30 p.m. PDT — 9 p.m. PDT

Abstract:

Diffusion models have led to significant advancements in generative modelling. Yet their widespread adoption poses challenges regarding data attribution and interpretability. In this paper, we aim to help address such challenges in diffusion models by extending influence functions. Influence function-based data attribution methods approximate how a model's output would have changed if some training data were removed. In supervised learning, this is usually used for predicting how the loss on a particular example would change. For diffusion models, we focus on predicting the change in the probability of generating a particular example via several proxy measurements. We show how to formulate influence functions for such quantities and how previously proposed methods can be interpreted as particular design choices in our framework. To ensure scalability of the Hessian computations in influence functions, we use a K-FAC approximation based on generalised Gauss-Newton matrices specifically tailored to diffusion models. We show that our recommended method outperforms previously proposed data attribution methods on common data attribution evaluations, such as the Linear Data-modelling Score (LDS) or retraining without top influences, without the need for method-specific hyperparameter tuning.
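The core influence-function estimate the abstract describes — predicting how a measurement would change if a training example were removed — can be illustrated on a toy problem. The sketch below uses a simple linear-regression model rather than a diffusion model, and an exact damped Hessian in place of the paper's K-FAC/GGN approximation; all variable names are illustrative, not from the paper.

```python
import numpy as np

# Toy influence-function data attribution (linear regression, not the
# paper's diffusion setting). Influence of removing training example j
# on a measurement m(theta) is approximated by
#   score_j ≈ grad_m(theta)^T  H^{-1}  grad_loss_j(theta)
# where H is the (damped) Hessian of the average training loss.

rng = np.random.default_rng(0)
n, d = 50, 3
true_theta = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, d))
y = X @ true_theta + 0.1 * rng.normal(size=n)

# Fit theta by least squares: loss_i = 0.5 * (x_i @ theta - y_i)^2
theta = np.linalg.lstsq(X, y, rcond=None)[0]

# Hessian of the average loss, with small damping for invertibility
# (the paper replaces the exact Hessian with a K-FAC approximation
# of the generalised Gauss-Newton matrix for scalability).
H = X.T @ X / n + 1e-4 * np.eye(d)

# Measurement: loss on a held-out query point (one possible proxy,
# analogous to the paper's proxy measurements for sample probability).
x_q = rng.normal(size=d)
y_q = float(x_q @ true_theta)
grad_m = (x_q @ theta - y_q) * x_q

# Per-example training-loss gradients, then influence scores
grads = (X @ theta - y)[:, None] * X          # shape (n, d)
scores = grads @ np.linalg.solve(H, grad_m)   # shape (n,)

top = np.argsort(-np.abs(scores))[:5]
print("most influential training indices:", top)
```

Evaluations such as LDS then check whether these scores predict the actual change in the measurement under retraining with examples removed.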
