Poster

Maximizing the Potential of Synthetic Data: Insights from Random Matrix Theory

Aymane El Firdoussi · Mohamed El Amine Seddik · Soufiane Hayou · Reda Alami · Ahmed Alzubaidi · Hakim Hacid

Hall 3 + Hall 2B #545
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Synthetic data has gained attention for training large language models, but poor-quality synthetic data can harm performance (see, e.g., Shumailov et al. (2023); Seddik et al. (2024)). A potential solution is data pruning, which retains only high-quality data according to a score function (human or machine feedback). Feng et al. (2024) analyzed models trained on synthetic data as the sample size increases. We extend this line of work by using random matrix theory to derive the performance of a binary classifier trained on a mix of real and pruned synthetic data in a high-dimensional setting. Our findings identify conditions under which synthetic data can improve performance, highlighting the roles of the generative model's quality and the verification strategy. We also show a smooth phase transition with respect to synthetic label noise, in contrast to the sharp transition reported in prior work in the infinite-sample limit. Experiments with toy models and large language models validate our theoretical results.
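The pipeline described above can be illustrated with a minimal numpy sketch in the spirit of the paper's toy setting. Everything here is an assumption for illustration: a two-class Gaussian mixture stands in for real data, synthetic data is the same mixture with flipped labels to mimic generator noise, the verifier is a hypothetical score that checks label agreement with the class mean, and the classifier is a simple ridge-regularized least-squares fit. This is not the authors' exact model or verification strategy.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_real, n_synth = 50, 200, 400

# Toy real-data model (assumed): two Gaussian classes with means +/- mu.
mu = np.ones(d) / np.sqrt(d)

def sample(n, flip=0.0):
    y = rng.choice([-1, 1], size=n)
    X = y[:, None] * mu + rng.standard_normal((n, d))
    # Flip a fraction of labels to mimic synthetic label noise.
    noisy = rng.random(n) < flip
    return X, np.where(noisy, -y, y)

X_r, y_r = sample(n_real)                 # real data
X_s, y_s = sample(n_synth, flip=0.3)      # synthetic data with noisy labels

# Hypothetical verifier: score each synthetic sample by how well its
# label agrees with the true class direction; prune low-scoring samples.
scores = y_s * (X_s @ mu)
keep = scores > 0

# Train a ridge-regularized least-squares binary classifier on the
# mix of real and pruned synthetic data.
X = np.vstack([X_r, X_s[keep]])
y = np.concatenate([y_r, y_s[keep]])
w = np.linalg.solve(X.T @ X + 1e-2 * np.eye(d), X.T @ y)

# Evaluate on fresh real data.
X_te, y_te = sample(2000)
acc = np.mean(np.sign(X_te @ w) == y_te)
```

Varying `flip` and the pruning threshold lets one probe, numerically, the regimes the paper characterizes analytically: whether adding (pruned) synthetic data helps or hurts the test accuracy of the mixed-data classifier.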