Poster

Proving Test Set Contamination in Black-Box Language Models

Yonatan Oren · Nicole Meister · Niladri Chatterji · Faisal Ladhak · Tatsunori Hashimoto

Halle B #274
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT
 
Oral presentation: Oral 6B
Thu 9 May 6:45 a.m. PDT — 7:30 a.m. PDT

Abstract:

Large language models are trained on vast amounts of internet data, prompting concerns that they have memorized public benchmarks. Detecting this type of contamination is challenging because the pretraining data used by proprietary models are often not publicly accessible. We propose a procedure for detecting test set contamination of language models with exact false positive guarantees and without access to pretraining data or model weights. Our approach leverages the fact that, in the absence of data contamination, all orderings of an exchangeable benchmark should be equally likely. In contrast, the tendency of language models to memorize example order means that a contaminated language model will find certain canonical orderings much more likely than others. Our test flags potential contamination whenever the likelihood of a canonically ordered benchmark dataset is significantly higher than the likelihood after shuffling the examples. We demonstrate that our procedure is sensitive enough to reliably detect contamination in challenging settings, including models as small as 1.4 billion parameters, test sets with only 1,000 examples, and datasets that appear only a few times in the pretraining corpus. Finally, we apply our test to LLaMA-2 in a realistic setting and find our results to be consistent with existing contamination evaluations.
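As a rough illustration of the test described in the abstract, the sketch below runs a basic permutation test: score the benchmark in its canonical order, score it under random shuffles, and report the rank of the canonical ordering as a p-value. This is a minimal sketch of the idea, not the authors' released implementation; the `model.log_prob` scoring interface is a placeholder assumption for any API that returns the log-likelihood a language model assigns to a string.

```python
import random

def sequence_log_likelihood(model, examples):
    """Log-likelihood the model assigns to the examples concatenated
    in the given order. `model.log_prob` is a hypothetical scoring API."""
    text = "\n\n".join(examples)
    return model.log_prob(text)

def contamination_p_value(model, examples, num_permutations=100, seed=0):
    """Permutation test: under exchangeability (no contamination), the
    canonical ordering is just another draw from the null distribution,
    so its rank among shuffled orderings yields an exact p-value."""
    rng = random.Random(seed)
    canonical = sequence_log_likelihood(model, examples)
    shuffled_scores = []
    for _ in range(num_permutations):
        perm = examples[:]
        rng.shuffle(perm)
        shuffled_scores.append(sequence_log_likelihood(model, perm))
    # Fraction of shuffles at least as likely as the canonical order;
    # the +1 in numerator and denominator keeps the test exact.
    num_as_extreme = sum(score >= canonical for score in shuffled_scores)
    return (1 + num_as_extreme) / (1 + num_permutations)
```

A small p-value indicates the model assigns the canonical benchmark ordering an unusually high likelihood relative to shuffled orderings, which under exchangeability is evidence of memorization during pretraining.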
