
Poster in Workshop: Building Trust in LLMs and LLM Applications: From Guardrails to Explainability to Regulation

ASIDE: Architectural Separation of Instructions and Data in Language Models

Egor Zverev · Evgenii Kortukov · Alexander Panfilov · Soroush Tabesh · Sebastian Lapuschkin · Wojciech Samek · Christoph Lampert


Abstract:

Despite their remarkable performance, large language models lack elementary safety features, which makes them susceptible to numerous malicious attacks. In particular, previous work has identified the absence of an intrinsic separation between instructions and data as a root cause of the success of prompt injection attacks. In this work, we propose an architectural change, ASIDE, that allows the model to clearly separate instructions from data by using separate embeddings for them. Specifically, the data embedding is initialized with a rotation of the pretrained model's embedding, prompting the model to learn to treat instructions and data differently. We demonstrate the effectiveness of our method by showing (1) greatly increased instruction-data separation scores without a loss in model capabilities and (2) competitive results on prompt injection benchmarks, even without dedicated safety training. Additionally, we study the working mechanism behind our method through an analysis of model representations.
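To make the architectural change concrete, the sketch below shows one way a dual-embedding layer of this kind could be wired up in PyTorch. It is a minimal illustration based only on the abstract: the specific pairwise 90-degree rotation, the per-token role flag, and the names AsideEmbedding and rotate_embedding are assumptions for exposition, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


def rotate_embedding(weight: torch.Tensor) -> torch.Tensor:
    """Apply an orthogonal rotation to every embedding vector.

    Hypothetical choice: a 90-degree rotation in each consecutive pair
    of dimensions, (x, y) -> (-y, x). The abstract only says the data
    embedding is "a rotation" of the pretrained one; the actual
    rotation used by ASIDE may differ.
    """
    d = weight.shape[1]
    assert d % 2 == 0, "embedding dimension must be even for pairwise rotation"
    rotated = weight.clone()
    rotated[:, 0::2] = -weight[:, 1::2]
    rotated[:, 1::2] = weight[:, 0::2]
    return rotated


class AsideEmbedding(nn.Module):
    """Two embedding tables for one vocabulary, selected per token by a
    role flag (0 = instruction, 1 = data)."""

    def __init__(self, pretrained: nn.Embedding):
        super().__init__()
        # Instruction tokens keep the pretrained embedding.
        self.instr = nn.Embedding.from_pretrained(
            pretrained.weight.clone(), freeze=False)
        # Data tokens start from a rotated copy, so the model can learn
        # to treat the two roles differently.
        self.data = nn.Embedding.from_pretrained(
            rotate_embedding(pretrained.weight), freeze=False)

    def forward(self, token_ids: torch.Tensor, is_data: torch.Tensor):
        # token_ids: (batch, seq); is_data: (batch, seq) with 0/1 flags.
        instr_emb = self.instr(token_ids)
        data_emb = self.data(token_ids)
        return torch.where(is_data.unsqueeze(-1).bool(), data_emb, instr_emb)
```

Because the rotation is orthogonal, the initial data embeddings preserve the geometry of the pretrained embedding space while occupying a distinct subspace orientation, which is one plausible reading of how the model is "prompted" to separate the two token roles during fine-tuning.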
