

Poster

Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturbation

Tiansheng Huang · Sihao Hu · Fatih Ilhan · Selim Tekin · Ling Liu

Hall 3 + Hall 2B #508
Thu 24 Apr midnight PDT — 2:30 a.m. PDT
 
Oral presentation: Oral Session 1D
Wed 23 Apr 7:30 p.m. PDT — 9 p.m. PDT

Abstract:

Harmful fine-tuning attacks pose serious safety concerns for large language models' fine-tuning-as-a-service. While existing defenses have been proposed to mitigate the issue, their performance is still far from satisfactory, and the root cause of the problem has not been fully uncovered. To this end, we show in this paper that harmful perturbation over the model weights is a probable cause of the broken alignment. To attenuate the negative impact of harmful perturbation, we propose an alignment-stage solution, dubbed Booster. Technically, along with the original alignment loss, we append a loss regularizer to the alignment-stage optimization. The regularizer ensures that the model's harmful loss reduction after the simulated harmful perturbation is attenuated, thereby mitigating the subsequent fine-tuning risk. Empirical results show that Booster effectively reduces the harmful score of the fine-tuned models while maintaining downstream task performance. Our code is available at https://github.com/git-disl/Booster
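The abstract describes the alignment-stage objective only in words: the usual alignment loss plus a regularizer that attenuates how much the harmful loss would drop after a simulated harmful perturbation (a gradient step on harmful data). Below is a minimal, illustrative PyTorch sketch of such an objective, not the authors' released implementation; names such as lambda_reg and step_size are assumptions, and the official code is at https://github.com/git-disl/Booster.

```python
# Sketch (assumed, not the official Booster code): alignment loss plus a penalty
# on the harmful-loss reduction caused by one simulated gradient step on harmful data.
import torch
import torch.nn as nn


def booster_loss(model, alignment_batch, harmful_batch, loss_fn,
                 lambda_reg=0.1, step_size=0.01):
    """Alignment loss + attenuation of the simulated harmful-loss reduction."""
    # Standard alignment loss on benign (alignment) data.
    align_loss = loss_fn(model(alignment_batch["x"]), alignment_batch["y"])

    # Harmful loss at the current weights.
    harmful_loss = loss_fn(model(harmful_batch["x"]), harmful_batch["y"])

    # Simulate the harmful perturbation: one normalized gradient step on the harmful loss.
    grads = torch.autograd.grad(harmful_loss, list(model.parameters()),
                                create_graph=True)
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # Harmful loss after the simulated step, evaluated functionally so the
    # perturbed weights stay differentiable w.r.t. the original weights.
    perturbed = {name: p - step_size * g / grad_norm
                 for (name, p), g in zip(model.named_parameters(), grads)}
    perturbed_out = torch.func.functional_call(model, perturbed, (harmful_batch["x"],))
    perturbed_harmful_loss = loss_fn(perturbed_out, harmful_batch["y"])

    # Penalize the drop in harmful loss caused by the simulated perturbation.
    reduction = harmful_loss - perturbed_harmful_loss
    return align_loss + lambda_reg * reduction


# Toy usage with a linear model and synthetic data (illustrative only).
model = nn.Linear(8, 2)
loss_fn = nn.CrossEntropyLoss()
align = {"x": torch.randn(4, 8), "y": torch.randint(0, 2, (4,))}
harm = {"x": torch.randn(4, 8), "y": torch.randint(0, 2, (4,))}
booster_loss(model, align, harm, loss_fn).backward()
```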
