Poster

Self-Jailbreaking: Language Models Can Reason Themselves Out of Safety Alignment After Benign Reasoning Training

Zheng Xin Yong · Stephen Bach

Abstract
