

Poster

Shh, don't say that! Domain Certification in LLMs

Cornelius Emde · Alasdair Paren · Preetham Arvind · Maxime Kayser · Tom Rainforth · Thomas Lukasiewicz · Philip Torr · Adel Bibi

Hall 3 + Hall 2B #239
[ Project Page ]
Wed 23 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

Large language models (LLMs) are often deployed to perform constrained tasks within narrow domains. For example, customer support bots can be built on top of LLMs, relying on their broad language understanding and capabilities to enhance performance. However, these LLMs are adversarially susceptible, potentially generating outputs outside the intended domain. To formalize, assess, and mitigate this risk, we introduce domain certification: a guarantee that accurately characterizes the out-of-domain behavior of language models. We then propose a simple yet effective approach, dubbed VALID, that provides adversarial bounds as a certificate. Finally, we evaluate our method across a diverse set of datasets, demonstrating that it yields meaningful certificates.
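To make the notion of a certificate on out-of-domain behavior concrete, here is a minimal, purely illustrative sketch. The abstract does not specify how VALID constructs its bound; this toy example assumes one plausible mechanism, namely accepting a response only if its probability under the deployed LLM is within a factor k of its probability under a small in-domain guide model, so that any accepted out-of-domain response has probability at most k times its (tiny) in-domain-model probability. The names `deployed_logprob`, `guide_logprob`, and `ratio_bound_k` are hypothetical and not from the paper.

```python
"""Illustrative sketch only: a likelihood-ratio acceptance rule that yields a
verifiable bound on out-of-domain outputs. This is an assumption for
exposition, not the paper's VALID procedure."""

import math

# Toy sequence-level log-probabilities. In practice these would come from a
# large deployed LLM and a small model trained only on in-domain text.
deployed_logprob = {
    "Your parcel ships within 2 business days.": -12.0,   # in-domain reply
    "Here is how to hot-wire a car: ...": -15.0,          # out-of-domain reply
}
guide_logprob = {
    "Your parcel ships within 2 business days.": -13.5,
    "Here is how to hot-wire a car: ...": -40.0,          # negligible under the in-domain model
}


def certified_accept(response: str, ratio_bound_k: float = 1e3) -> bool:
    """Accept only if p_deployed(response) <= k * p_guide(response).

    Under this (assumed) rule, every emitted response y satisfies
    p(y) <= k * p_guide(y), so the probability of producing any
    out-of-domain y is bounded by k times its probability under the
    in-domain guide model, a quantity that can be made verifiably small.
    """
    log_ratio = deployed_logprob[response] - guide_logprob[response]
    return log_ratio <= math.log(ratio_bound_k)


for r in deployed_logprob:
    print(f"accept={certified_accept(r)!s:5}  {r}")
```

Running the sketch accepts the in-domain reply (log-ratio 1.5 <= log k) and rejects the out-of-domain one (log-ratio 25), illustrating how a rejection rule can be turned into a quantitative certificate rather than a heuristic filter.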
