

Poster

Adaptive Data Optimization: Dynamic Sample Selection with Scaling Laws

Yiding Jiang · Allan Zhou · Zhili Feng · Sadhika Malladi · Zico Kolter

Hall 3 + Hall 2B #242
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

The composition of pretraining data is a key determinant of foundation models' performance, but there is no standard guideline for allocating a limited computational budget across different data sources. Most current approaches rely either on extensive experiments with smaller models or on dynamic data adjustments that also require proxy models; both significantly increase workflow complexity and computational overhead. In this paper, we introduce Adaptive Data Optimization (ADO), an algorithm that optimizes data distributions in an online fashion, concurrently with model training. Unlike existing techniques, ADO does not require external knowledge, proxy models, or modifications to the model update. Instead, ADO uses per-domain scaling laws to estimate the learning potential of each domain during training and adjusts the data mixture accordingly, making it more scalable and easier to integrate. Experiments demonstrate that ADO can achieve comparable or better performance than prior methods while maintaining computational efficiency across different computation scales, offering a practical solution for dynamically adjusting the data distribution without sacrificing flexibility or increasing costs. Beyond its practical benefits, ADO also provides a new perspective on data collection strategies via scaling laws.
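The core mechanism described above, fitting a per-domain scaling law online and reweighting the data mixture by each domain's estimated learning potential, can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's algorithm: the power-law form, the derivative-based "potential," and all function names here are assumptions made for exposition.

```python
import numpy as np

def fit_power_law(steps, losses):
    """Fit L(n) ~ b * n^(-beta) by linear regression in log-log space.
    A simplified stand-in for the per-domain scaling-law fit; the paper's
    exact parameterization is not reproduced here."""
    log_n = np.log(np.asarray(steps, dtype=float))
    log_l = np.log(np.asarray(losses, dtype=float))
    slope, intercept = np.polyfit(log_n, log_l, 1)
    return np.exp(intercept), -slope  # (b, beta)

def learning_potential(b, beta, n):
    """Magnitude of dL/dn for the fitted law: b * beta * n^(-beta - 1).
    Domains whose loss is still falling quickly score higher."""
    return b * beta * n ** (-beta - 1.0)

def update_mixture(histories, step):
    """Recompute domain sampling weights from per-domain loss histories.
    histories: dict mapping domain name -> (steps, losses)."""
    potentials = {}
    for name, (steps, losses) in histories.items():
        b, beta = fit_power_law(steps, losses)
        potentials[name] = learning_potential(b, beta, step)
    total = sum(potentials.values())
    # Normalize potentials into a sampling distribution over domains.
    return {name: p / total for name, p in potentials.items()}
```

In an actual training loop, `update_mixture` would be called periodically on the running per-domain loss histories, and the returned weights would drive the sampler for subsequent batches; no proxy model or change to the gradient update is involved.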
