

Poster

Logicbreaks: A Framework for Understanding Subversion of Rule-based Inference

Anton Xue · Avishree Khare · Rajeev Alur · Surbhi Goel · Eric Wong

Hall 3 + Hall 2B #268
[ Project Page ]
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract: We study how to subvert large language models (LLMs) from following prompt-specified rules. We first formalize rule-following as inference in propositional Horn logic, a mathematical system in which rules have the form "if P and Q, then R" for some propositions P, Q, and R. Next, we prove that although small transformers can faithfully follow such rules, maliciously crafted prompts can still mislead both theoretical constructions and models learned from data. Furthermore, we demonstrate that popular attack algorithms on LLMs find adversarial prompts and induce attention patterns that align with our theory. Our novel logic-based framework provides a foundation for studying LLMs in rule-based settings, enabling a formal analysis of tasks like logical reasoning and jailbreak attacks.
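To make the Horn-logic setting concrete, the sketch below shows textbook forward chaining over rules of the form "if P and Q, then R". It is an illustrative toy for the inference task the abstract formalizes, not the paper's transformer construction; the rule representation and function name are assumptions for this example.

```python
def forward_chain(rules, facts):
    """Derive all propositions entailed by propositional Horn rules.

    rules: list of (antecedents, consequent) pairs, e.g. ({"P", "Q"}, "R"),
           encoding the rule "if P and Q, then R".
    facts: set of initially known propositions.
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in rules:
            # Fire a rule when all its antecedents are already known.
            if antecedents <= known and consequent not in known:
                known.add(consequent)
                changed = True
    return known

# Example: from facts P and Q, the rule ({"P","Q"}, "R") derives R,
# and ({"R"}, "S") then derives S.
rules = [({"P", "Q"}, "R"), ({"R"}, "S")]
print(forward_chain(rules, {"P", "Q"}))  # {'P', 'Q', 'R', 'S'}
```

In this framing, a "subverted" model is one that a crafted prompt causes to deviate from the fixed point this procedure computes, e.g. by failing to fire an applicable rule or asserting a proposition that is not entailed.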
