

Poster

Data Shapley in One Training Run

Jiachen (Tianhao) Wang · Prateek Mittal · Dawn Song · Ruoxi Jia

Hall 3 + Hall 2B #544
Sat 26 Apr midnight PDT — 2:30 a.m. PDT
Oral presentation: Oral Session 5B
Fri 25 Apr 7:30 p.m. PDT — 9 p.m. PDT

Abstract:

Data Shapley offers a principled framework for attributing the contribution of data within machine learning contexts. However, the traditional notion of Data Shapley requires retraining models on various data subsets, which becomes computationally infeasible for large-scale models. Additionally, this retraining-based definition cannot evaluate the contribution of data to a specific model training run, which is often of interest in practice. This paper introduces a novel concept, In-Run Data Shapley, which eliminates the need for model retraining and is specifically designed to assess data contribution for a particular model of interest. In-Run Data Shapley calculates the Shapley value for each gradient update iteration and accumulates these values throughout the training process. We present several techniques that allow In-Run Data Shapley to scale efficiently to foundation-model sizes. In its most optimized implementation, our method adds negligible runtime overhead compared to standard model training. This dramatic efficiency improvement makes it possible to perform data attribution for the foundation model pretraining stage. We present several case studies that offer fresh insights into the contribution of pretraining data, and we discuss their implications for copyright in generative AI and pretraining data curation.
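To make the per-iteration accumulation concrete, the sketch below shows one way such a scheme could look under a simple first-order approximation: each training point's contribution at a step is taken to be the inner product of its per-sample gradient with the validation-loss gradient (scaled by the learning rate), and these per-step contributions are summed over the run. The toy linear-regression setup, variable names, and the first-order approximation itself are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch (assumptions noted above): accumulate a first-order,
# per-step contribution score for each training point during a single
# SGD run on a toy linear-regression problem.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = X @ w_true + noise
n_train, n_val, d = 64, 16, 5
w_true = rng.normal(size=d)
X_train = rng.normal(size=(n_train, d))
y_train = X_train @ w_true + 0.1 * rng.normal(size=n_train)
X_val = rng.normal(size=(n_val, d))
y_val = X_val @ w_true + 0.1 * rng.normal(size=n_val)

w = np.zeros(d)
lr = 0.05
scores = np.zeros(n_train)  # accumulated per-sample contribution scores

for step in range(200):
    # Per-sample gradients of the squared loss at the current parameters
    residuals = X_train @ w - y_train                    # shape (n_train,)
    per_sample_grads = 2 * residuals[:, None] * X_train  # shape (n_train, d)

    # Gradient of the mean validation loss at the current parameters
    val_grad = 2 * X_val.T @ (X_val @ w - y_val) / n_val  # shape (d,)

    # First-order estimate of how much each sample's share of this update
    # reduces the validation loss; accumulate over the run.
    scores += lr * (per_sample_grads @ val_grad) / n_train

    # Standard (full-batch) SGD update using the mean gradient
    w -= lr * per_sample_grads.mean(axis=0)

# Higher score = that point's updates pushed validation loss down more overall.
print("Most helpful training points:", np.argsort(-scores)[:5])
```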
