Oral in Workshop: Secure and Trustworthy Large Language Models

Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models

Xianjun Yang · Xiao Wang · Qi Zhang · Linda Petzold · William Wang · Xun Zhao · Dahua Lin


Abstract:

The increasing open release of powerful large language models (LLMs) has facilitated the development of downstream applications by reducing the essential cost of data annotation and computation. To ensure AI safety, extensive safety-alignment measures have been conducted to armor these models against malicious use such as hard prompt attacks. However, beneath the seemingly resilient facade of the armor, there might lurk a shadow. We found that these safely-aligned LLMs can be easily subverted to generate harmful content simply by fine-tuning on 100 malicious examples with 1 GPU hour. Formally, we term this new attack Shadow Alignment: a tiny amount of data can elicit safely-aligned models to adapt to harmful tasks without sacrificing model helpfulness. Remarkably, the subverted models retain their capability to respond to regular inquiries. Experiments across 9 models released by 6 different organizations (LLaMa-2, Falcon, InternLM, BaiChuan2, Vicuna, ChatGPT-3.5) show the effectiveness of our attack. Moreover, the single-turn, English-only attack successfully transfers to multi-turn dialogue and other languages. This study serves as a clarion call for a collective effort to overhaul and fortify the safety of aligned LLMs against malicious attackers.
