

Poster

Harnessing Diversity for Important Data Selection in Pretraining Large Language Models

Chi Zhang · Huaping Zhong · Kuan Zhang · Chengliang Chai · Rui Wang · Xinlin Zhuang · Tianyi Bai · Qiu Jiantao · Lei Cao · Ju Fan · Ye Yuan · Guoren Wang · Conghui He

Hall 3 + Hall 2B #595
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract: Data selection is of great significance in pretraining large language models, given the variation in quality within the large-scale available training corpora. To achieve this, researchers are currently investigating the use of data influence to measure the importance of data instances, i.e., a high influence score indicates that incorporating this instance into the training set is likely to enhance model performance. Consequently, they select the top-k instances with the highest scores. However, this approach has several limitations. (1) Computing the accurate influence of all available data is time-consuming. (2) The selected data instances are not diverse enough, which may hinder the pretrained model's ability to generalize effectively to various downstream tasks. In this paper, we introduce Quad, a data selection approach that considers both quality and diversity by using data influence to achieve state-of-the-art pretraining results. To compute the influence (i.e., the quality) more accurately and efficiently, we incorporate the attention layers to capture more semantic details, which can be accelerated through the Kronecker product. For diversity, Quad clusters the dataset so that instances are similar within each cluster and diverse across clusters. For each cluster, if we opt to select data from it, we take a few samples to estimate its influence rather than processing all of its instances. Overall, we favor clusters with highly influential instances (ensuring high quality) or clusters that have been selected less frequently (ensuring diversity), thereby striking a balance between quality and diversity. Experiments on SlimPajama and FineWeb with 7B large language models demonstrate that Quad significantly outperforms other data selection methods with low FLOPs consumption. Further analysis also validates the effectiveness of our influence calculation.
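The cluster-level trade-off described in the abstract (favoring clusters with highly influential samples or clusters selected less often) resembles an upper-confidence-bound style selection loop. The sketch below is a minimal, hypothetical illustration of that idea only; the function names, the sampling scheme, and the UCB-style scoring rule are assumptions for exposition and are not the authors' exact formulation.

```python
# Illustrative sketch (not the paper's implementation): cluster-level data
# selection that balances estimated influence (quality) against how often a
# cluster has already been drawn from (diversity).
import math
import random


def select_pretraining_data(clusters, influence_fn, budget, sample_size=32, c=1.0):
    """clusters: list of lists of data instances, pre-clustered by similarity.
    influence_fn: callable returning an estimated influence score for one instance.
    Returns up to `budget` selected instances."""
    n = len(clusters)
    mean_influence = [0.0] * n   # running mean of sampled influence per cluster
    times_selected = [0] * n     # how often each cluster has been drawn from
    selected, total_draws = [], 0

    while len(selected) < budget and any(clusters):
        total_draws += 1
        # Score each non-empty cluster: exploitation (mean influence) plus an
        # exploration bonus that is larger for rarely selected clusters.
        best_k, best_score = None, -math.inf
        for k, cluster in enumerate(clusters):
            if not cluster:
                continue
            bonus = c * math.sqrt(math.log(total_draws) / (times_selected[k] + 1))
            score = mean_influence[k] + bonus
            if score > best_score:
                best_k, best_score = k, score

        # Estimate the chosen cluster's influence from a small sample instead
        # of scoring every instance it contains.
        cluster = clusters[best_k]
        idxs = random.sample(range(len(cluster)), min(sample_size, len(cluster)))
        scores = [influence_fn(cluster[i]) for i in idxs]

        # Update the cluster's running statistics.
        times_selected[best_k] += 1
        batch_mean = sum(scores) / len(scores)
        mean_influence[best_k] += (batch_mean - mean_influence[best_k]) / times_selected[best_k]

        # Keep the sampled instances with the highest estimated influence and
        # remove them from the cluster so they cannot be drawn again.
        take = min(budget - len(selected), len(idxs))
        ranked = sorted(zip(scores, idxs), reverse=True)[:take]
        taken_idx = set()
        for _, i in ranked:
            selected.append(cluster[i])
            taken_idx.add(i)
        clusters[best_k] = [x for j, x in enumerate(cluster) if j not in taken_idx]

    return selected
```

In this reading, the exploration bonus is what keeps rarely visited clusters in play, so the final selection is not dominated by a few high-influence clusters; the constant c controls how strongly diversity is weighted against estimated quality.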
