Skip to yearly menu bar Skip to main content


Poster

Teach LLMs to Phish: Stealing Private Information from Language Models

Ashwinee Panda · Christopher Choquette-Choo · Zhengming Zhang · Yaoqing Yang · Prateek Mittal

Halle B #220

Abstract: When large language models are trained on private data, it can be a \textit{significant} privacy risk for them to memorize and regurgitate sensitive information. In this work, we propose a new \emph{practical} data extraction attack that we call neural phishing''. This attack enables an adversary to target and extract sensitive or personally identifiable information (PII), e.g., credit card numbers, from a model trained on user data with upwards of 10%10% attack success rates, at times, as high as 50%50%. Our attack assumes only that an adversary can insert as few as 1010s of benign-appearing sentences into the training dataset using only vague priors on the structure of the user data.

Chat is not available.