Poster

Black-Box Detection of Language Model Watermarks

Thibaud Gloaguen · Nikola Jovanović · Robin Staab · Martin Vechev

Hall 3 + Hall 2B #480
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Watermarking has emerged as a promising way to detect LLM-generated text, by augmenting LLM generations with signals that can later be detected. Recent work has proposed multiple families of watermarking schemes, several of which focus on preserving the LLM distribution. This distribution-preservation property is motivated both by its role as a tractable proxy for retaining LLM capabilities and by the undetectability of the watermark that it implies for downstream users. Yet, despite much discourse around undetectability, no prior work has investigated the practical detectability of any current watermarking scheme in a realistic black-box setting. In this work we tackle this for the first time, developing rigorous statistical tests to detect the presence, and estimate the parameters, of all three popular watermarking scheme families, using only a limited number of black-box queries. We experimentally confirm the effectiveness of our methods on a range of schemes and a diverse set of open-source models. Further, we validate the feasibility of our tests on real-world APIs. Our findings indicate that current watermarking schemes are more detectable than previously believed.
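The core idea behind such statistical tests can be illustrated with a toy sketch (this is not the paper's actual method, and all names below are hypothetical): a Kirchenbauer-style "red-green" watermark boosts the logits of a fixed green list of tokens, which shifts the green-token fraction in sampled output. Repeated black-box sampling then exposes that shift via a simple one-sided z-test.

```python
import math
import random

# Hypothetical toy setup: a 100-token vocabulary; the watermark
# boosts the logits of a fixed "green list" of half the tokens.
VOCAB = list(range(100))
GREEN = set(range(50))

def sample_token(watermarked, rng, delta=2.0):
    # Uniform base logits; a watermarked model adds delta to green tokens.
    logits = [delta if (watermarked and t in GREEN) else 0.0 for t in VOCAB]
    weights = [math.exp(l) for l in logits]
    return rng.choices(VOCAB, weights=weights, k=1)[0]

def green_z_score(samples, p0=0.5):
    # One-sided z-test: is the observed green-token fraction
    # significantly above the unwatermarked baseline rate p0?
    n = len(samples)
    k = sum(1 for t in samples if t in GREEN)
    return (k - n * p0) / math.sqrt(n * p0 * (1 - p0))

rng = random.Random(0)
z_wm = green_z_score([sample_token(True, rng) for _ in range(500)])
z_plain = green_z_score([sample_token(False, rng) for _ in range(500)])
print(f"watermarked z = {z_wm:.1f}, unwatermarked z = {z_plain:.1f}")
```

With 500 samples, the watermarked model yields a z-score far above any reasonable significance threshold, while the unwatermarked model stays near zero. The paper's tests operate in a harder regime, without knowledge of the green list, but the same principle applies: a watermark that perturbs the output distribution leaves a fingerprint that a limited number of queries can reveal.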