Virtual presentation / poster accept

Understanding Influence Functions and Datamodels via Harmonic Analysis

Nikunj Saunshi · Arushi Gupta · Mark Braverman · Sanjeev Arora

Keywords: [ datamodels ] [ theory ] [ fourier analysis ] [ influence functions ] [ harmonic analysis ]


Abstract:

Influence functions estimate the effect of individual training points on a model's predictions on test data, and were adapted to deep learning by Koh and Liang (2017). They have been used to detect data poisoning, to identify helpful and harmful training examples, to estimate the influence of groups of datapoints, etc. Recently, Ilyas et al. (2022) introduced a linear regression method, termed "datamodels", to predict the effect of training points on the model's outputs on test data. The current paper seeks to provide a better theoretical understanding of such interesting empirical phenomena. The primary tools are harmonic analysis and the idea of "noise stability". Contributions include: (a) an exact characterization of the learnt datamodel in terms of Fourier coefficients; (b) an efficient method to estimate the residual error and quality of the optimal linear datamodel without having to train the datamodel; (c) new insights into when influences of groups of datapoints may or may not add up linearly.
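As a minimal illustration of the setup, the sketch below encodes each training subset as a +/-1 vector, fits a linear datamodel by least squares, and estimates the degree-1 Fourier coefficients that such an analysis revolves around; over uniform +/-1 subsets, the best linear datamodel is exactly the degree-at-most-1 Fourier truncation, and Parseval's identity gives its residual error without refitting. The toy target function (standing in for "train a model on the subset, record a test statistic"), the sample counts, and all variable names are illustrative assumptions for this sketch, not the authors' method or code.

```python
# A self-contained toy sketch of datamodels viewed through Boolean harmonic
# analysis. Assumption: the expensive model-training step is replaced by a
# synthetic function with a linear part plus one pairwise interaction, so the
# optimal linear datamodel is deliberately not exact.
import numpy as np

rng = np.random.default_rng(0)
n = 20      # number of training points
m = 5000    # number of random training subsets sampled

# Subsets as +/-1 vectors: x_i = +1 if point i is in the subset, -1 otherwise.
X = rng.choice([-1.0, 1.0], size=(m, n))

true_w = rng.normal(size=n) / np.sqrt(n)
def model_output(x):
    # Placeholder for: train on subset x, evaluate on a fixed test example.
    return x @ true_w + 0.3 * x[0] * x[1]

y = np.array([model_output(x) for x in X])

# (a) Degree-0 and degree-1 Fourier coefficients, estimated by correlation:
# fhat(S) = E[f(x) * prod_{i in S} x_i] under the uniform distribution.
fhat_empty = y.mean()                   # constant term, fhat(emptyset)
fhat_deg1 = (X * y[:, None]).mean(0)    # fhat({i}) for each training point i

# (b) Residual of the optimal linear datamodel via Parseval, without fitting:
# Var[f] minus the total degree-1 Fourier weight.
parseval_resid = y.var() - (fhat_deg1 ** 2).sum()
print("Parseval residual estimate:", parseval_resid)

# Sanity check: a directly fitted linear datamodel leaves about the same
# residual (both should be near 0.3**2 = 0.09, the interaction's weight).
A = np.hstack([np.ones((m, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("direct least-squares residual:", np.mean((A @ coef - y) ** 2))
```

On finite samples the two residual estimates agree only approximately, since the empirical correlation estimates and the least-squares coefficients differ slightly when the sampled subset vectors are not exactly orthogonal.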
