Be like a Goldfish, Don't Memorize! Mitigating Memorization in Generative LLMs
Abhimanyu Hans ⋅ Yuxin Wen ⋅ Neel Jain ⋅ John Kirchenbauer ⋅ Hamid Kazemi ⋅ Prajwal Singhania ⋅ Siddharth Singh ⋅ Gowthami Somepalli ⋅ Jonas Geiping ⋅ Abhinav Bhatele ⋅ Tom Goldstein
Abstract
Large language models can memorize and repeat their training data, causing privacy and copyright risks. To mitigate memorization, we introduce a subtle modification to the next-token training objective that we call the goldfish loss. During training, randomly sampled subsets of tokens are excluded from the loss computation. These dropped tokens are not memorized by the model, which prevents verbatim reproduction of a complete chain of tokens from the training set. We run extensive experiments training billion-scale LLaMA-2 models, both pre-trained and trained from scratch, and demonstrate significant reductions in extractable memorization with little to no impact on downstream benchmarks.
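The core mechanism is a masked next-token objective. Below is a minimal PyTorch sketch of that idea, assuming a Bernoulli mask that drops roughly 1/k of target tokens from the loss; the name `goldfish_loss`, the parameter `k`, and this particular masking scheme are illustrative assumptions, not the paper's exact implementation, which the abstract does not fully specify.

```python
import torch
import torch.nn.functional as F

def goldfish_loss(logits, labels, k=4, generator=None):
    """Next-token cross-entropy with a random subset of tokens dropped.

    Sketch of the idea in the abstract: roughly 1/k of the target tokens
    receive no loss, so the model gets no learning signal on them and
    cannot reproduce the full training sequence verbatim. The Bernoulli
    mask below is an assumption for illustration.

    logits: (batch, seq_len, vocab_size) model outputs
    labels: (batch, seq_len) target token ids
    """
    # Standard causal-LM shift: position t predicts token t+1.
    shift_logits = logits[:, :-1, :]
    shift_labels = labels[:, 1:]

    # Per-token cross-entropy, left unreduced so we can mask it.
    loss = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
        reduction="none",
    )

    # Drop ~1/k of the tokens from the loss (mask == 0 means dropped).
    keep = torch.rand(loss.shape, generator=generator, device=loss.device) >= 1.0 / k
    mask = keep.float()
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```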