

Demystifying Poisoning Backdoor Attacks from a Statistical Perspective

Ganghua Wang · Xun Xian · Ashish Kundu · Jayanth Srinivasa · Xuan Bi · Mingyi Hong · Jie Ding

Halle B #220
Wed 8 May 7:30 a.m. PDT — 9:30 a.m. PDT


Backdoor attacks pose a significant security risk to machine learning applications due to their stealthy nature and potentially serious consequences. Such attacks embed a trigger in a learning model so that the model behaves maliciously when the trigger is present while retaining its regular functionality otherwise. This paper develops a fundamental understanding of backdoor attacks that applies to both discriminative and generative models, including diffusion models and large language models. We evaluate the effectiveness of any backdoor attack that incorporates a constant trigger by establishing tight lower and upper bounds on the performance of the compromised model on both clean and backdoor test data. The developed theory answers a series of fundamental but previously underexplored questions, including (1) what factors determine a backdoor attack's success, (2) what direction makes a backdoor attack most effective, and (3) when a human-imperceptible trigger can succeed. We demonstrate the theory by conducting experiments using benchmark datasets and state-of-the-art backdoor attack scenarios. Our code is available \href{}{here}.
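To make the constant-trigger setting concrete, the sketch below shows the standard data-poisoning construction: a fixed-value patch is stamped onto a fraction of the training images, which are then relabeled to an attacker-chosen target class. This is a minimal illustration of the attack model the abstract describes; the function name, patch placement, and parameters are assumptions for exposition, not the paper's code.

```python
import numpy as np

def poison_dataset(X, y, target_label, trigger_value=1.0, patch=3, rate=0.1, seed=0):
    """Poison a fraction `rate` of the images in X with a constant trigger
    (a fixed-value patch in the bottom-right corner) and relabel those
    examples to `target_label`. Returns the poisoned data and the indices
    of the poisoned examples. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    n_poison = int(rate * len(X))
    idx = rng.choice(len(X), size=n_poison, replace=False)
    # "Constant trigger": the same patch value at the same location for
    # every poisoned example, independent of the image content.
    Xp[idx, -patch:, -patch:] = trigger_value
    yp[idx] = target_label
    return Xp, yp, idx
```

A model trained on `(Xp, yp)` learns to predict `target_label` whenever the patch is present, while behaving normally on clean inputs; the paper's bounds characterize how well both behaviors can be achieved simultaneously.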
