

Poster

Unlearning or Obfuscating? Jogging the Memory of Unlearned LLMs via Benign Relearning

Shengyuan Hu · Yiwei Fu · Steven Wu · Virginia Smith

Hall 3 + Hall 2B #289
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Machine unlearning is a promising approach to mitigate undesirable memorization of training data in ML models. However, in this work we show that existing approaches for unlearning in LLMs are surprisingly susceptible to a simple set of benign relearning attacks. With access to only a small and potentially loosely related set of data, we find that we can “jog” the memory of unlearned models to reverse the effects of unlearning. For example, we show that relearning on public medical articles can lead an unlearned LLM to output harmful knowledge about bioweapons, and relearning general wiki information about the book series Harry Potter can force the model to output verbatim memorized text. We formalize this unlearning-relearning pipeline, explore the attack across three popular unlearning benchmarks, and discuss future directions and guidelines that result from our study. Our work indicates that current approximate unlearning methods simply suppress model outputs and fail to robustly forget target knowledge in LLMs.
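The attack described in the abstract is procedurally simple: take a model that has already been unlearned, briefly fine-tune it on benign, loosely related public text, and then probe it for the supposedly forgotten content. The sketch below illustrates this unlearning-relearning pipeline at the API level using Hugging Face transformers; it is not the authors' code, and the checkpoint path, benign texts, and hyperparameters are illustrative placeholders.

```python
# Minimal sketch of a benign relearning attack on an unlearned LLM.
# Assumptions: a causal-LM checkpoint produced by some unlearning method
# ("path/to/unlearned-model" is hypothetical), and a small set of benign,
# loosely related public texts (not the forgotten data itself).
import torch
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# 1. Load the unlearned model (placeholder checkpoint path).
model_name = "path/to/unlearned-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# 2. Benign, loosely related public text (e.g. general wiki articles).
benign_texts = ["...public article text...", "...another related article..."]
dataset = Dataset.from_dict({"text": benign_texts})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

# 3. A few steps of ordinary fine-tuning ("relearning") on the benign set.
args = TrainingArguments(
    output_dir="relearned",
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=2e-5,
    logging_steps=10,
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()

# 4. Probe whether the "forgotten" knowledge resurfaces after relearning.
prompt = "Question about the supposedly unlearned topic:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

If unlearning merely suppresses outputs rather than removing the underlying knowledge, even this kind of ordinary fine-tuning on benign data can be enough to recover responses the unlearning procedure was meant to eliminate.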
