

Poster

On Evaluating the Durability of Safeguards for Open-Weight LLMs

Xiangyu Qi · Boyi Wei · Nicholas Carlini · Yangsibo Huang · Tinghao Xie · Luxi He · Matthew Jagielski · Milad Nasr · Prateek Mittal · Peter Henderson

Hall 3 + Hall 2B #502
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Many stakeholders, from model developers to policymakers, seek to minimize the risks of large language models (LLMs). Key to this goal is whether technical safeguards can impede the misuse of LLMs, even when models are customizable via fine-tuning or when model weights are openly available. Several recent studies have proposed methods to produce durable LLM safeguards for open-weight LLMs that can withstand adversarial modifications of the model's weights via fine-tuning. This holds the promise of raising adversaries' costs even under strong threat models where adversaries can directly fine-tune parameters. However, we caution against over-reliance on such methods in their current state. Through several case studies, we demonstrate that even the evaluation of these defenses is exceedingly difficult and can easily mislead audiences into thinking that safeguards are more durable than they really are. We draw lessons from the failure modes that we identify and suggest that future research carefully cabin claims to more constrained, well-defined, and rigorously examined threat models, which can provide useful and candid assessments to stakeholders.
