Poster
Combatting Dimensional Collapse in LLM Pre-Training Data via Submodular File Selection
Ziqing Fan · Siyuan Du · Shengchao Hu · Pingjie Wang · Li Shen · Ya Zhang · Dacheng Tao · Yanfeng Wang
Hall 3 + Hall 2B #580
Oral presentation: Oral Session 5B
Fri 25 Apr 7:30 p.m. PDT — 9 p.m. PDT
Sat 26 Apr midnight PDT — 2:30 a.m. PDT
Abstract:
Selecting high-quality pre-training data for large language models (LLMs) is crucial for enhancing their overall performance under a limited computation budget, improving both training and sample efficiency. Recent advances in file selection primarily rely on an existing or trained proxy model to assess the similarity of samples to a target domain, such as the high-quality sources BookCorpus and Wikipedia. However, upon revisiting these methods, we find that the domain-similarity selection criterion exhibits a diversity dilemma, i.e., dimensional collapse in the feature space: it improves performance on domain-related tasks but causes severe degradation in generic performance. To prevent collapse and enhance diversity, we propose a DiverSified File selection algorithm (DiSF), which selects the most decorrelated text files in the feature space. We approach this with a classical greedy algorithm that promotes more uniform eigenvalues in the feature covariance matrix of the selected texts, and we analyze its approximation to the optimal solution by formulating selection as a γ-weakly submodular optimization problem. Empirically, we establish a benchmark and conduct extensive experiments on the TinyLlama architecture with models from 120M to 1.1B parameters. Evaluated across nine tasks from the Harness framework, DiSF demonstrates a significant improvement in overall performance. Specifically, DiSF discards 98.5% of the 590M training files in SlimPajama, outperforming full-data pre-training within a 50B training budget and achieving about 1.5x training efficiency and 5x data efficiency. Source code is available at: https://github.com/MediaBrain-SJTU/DiSF.git.
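Since the abstract describes the selection mechanism only at a high level, the following is a minimal sketch of one way the greedy, decorrelation-based criterion could look. It is not the authors' implementation: the function name disf_greedy, the unit-normalization step, and the Frobenius-norm surrogate (with the trace fixed, minimizing ||Σ||_F² = Σᵢ λᵢ² flattens the eigenvalue spectrum) are all assumptions for illustration; see the linked repository for the actual method and its γ-weakly submodular analysis.

```python
import numpy as np


def disf_greedy(features: np.ndarray, budget: int) -> list[int]:
    """Greedily select `budget` file indices whose pooled features have the
    most uniform covariance eigenvalue spectrum.

    With unit-norm features, the trace of the (uncentered) feature covariance
    is fixed, so minimizing ||Sigma||_F^2 = sum_i lambda_i^2 pushes the
    eigenvalues toward uniformity, i.e., away from dimensional collapse.
    """
    assert 0 < budget <= len(features)
    # Unit-normalize each file's feature vector so trace(Sigma) is constant.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    selected: list[int] = []
    remaining = set(range(len(feats)))
    for _ in range(budget):
        best_j, best_score = -1, float("inf")
        for j in remaining:
            idx = selected + [j]
            # Uncentered feature covariance of the candidate selection.
            cov = feats[idx].T @ feats[idx] / len(idx)
            # Frobenius norm = sqrt of the sum of squared eigenvalues.
            score = np.linalg.norm(cov, "fro")
            if score < best_score:
                best_j, best_score = j, score
        selected.append(best_j)
        remaining.remove(best_j)
    return selected


# Toy usage: 1000 "files" with 64-dim proxy-model features, keeping 1.5%.
rng = np.random.default_rng(0)
chosen = disf_greedy(rng.standard_normal((1000, 64)), budget=15)
print(chosen)
```

Note that this naive variant re-evaluates the covariance for every candidate, costing O(budget · n · d²) overall; selecting from the 590M files of SlimPajama would require the scalable greedy machinery whose approximation quality the paper analyzes via γ-weak submodularity.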