

Poster

Mind Control through Causal Inference: Predicting Clean Images from Poisoned Data

Mengxuan Hu · Zihan Guan · Yi Zeng · Junfeng Guo · Zhongliang Zhou · Jielu Zhang · Ruoxi Jia · Anil Vullikanti · Sheng Li

Hall 3 + Hall 2B #542
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Anti-backdoor learning, which aims to train clean models directly from poisoned datasets, serves as an important defense against backdoor attacks. However, existing methods usually fail to recover backdoored samples to their original, correct labels and generalize poorly to large pre-trained models because their training is not end-to-end, making them unsuitable for protecting the increasingly prevalent large pre-trained models. To bridge this gap, we first revisit the anti-backdoor learning problem from a causal perspective. Our theoretical causal analysis reveals that incorporating both images and the associated attack indicators preserves the model's integrity. Building on this analysis, we introduce an end-to-end method, Mind Control through Causal Inference (MCCI), to train clean models directly from poisoned datasets. This approach leverages both the image and the attack indicator to train the model. Under this training paradigm, the model's perception of whether an input is clean or backdoored can be controlled: by introducing fake non-attack indicators, the model perceives all inputs as clean and makes correct predictions, even for poisoned samples. Extensive experiments demonstrate that our method achieves state-of-the-art performance, efficiently recovering the original correct predictions for poisoned samples and improving accuracy on clean samples.
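To make the training paradigm concrete, below is a minimal, hedged PyTorch sketch of the idea described in the abstract: condition the classifier on an attack indicator during training, then fix the indicator to "clean" at inference so poisoned inputs are predicted correctly. This is not the authors' implementation; all names (IndicatorConditionedNet, indicator_dim, the indicator encoding) are illustrative assumptions.

```python
# Illustrative sketch only (assumptions, not the MCCI code): a classifier that
# takes both an image and a binary attack indicator, as the abstract describes.
import torch
import torch.nn as nn


class IndicatorConditionedNet(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int,
                 indicator_dim: int = 8):
        super().__init__()
        self.backbone = backbone                                # any image feature extractor
        self.indicator_embed = nn.Embedding(2, indicator_dim)   # 0 = clean, 1 = poisoned (assumed encoding)
        self.head = nn.Linear(feat_dim + indicator_dim, num_classes)

    def forward(self, images: torch.Tensor, indicators: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(images)                  # (B, feat_dim)
        ind = self.indicator_embed(indicators)         # (B, indicator_dim)
        return self.head(torch.cat([feats, ind], dim=1))


# Training (sketch): each sample carries its attack indicator.
#   logits = model(images, indicators)                # indicators = 1 for suspected-poisoned samples
#   loss = nn.functional.cross_entropy(logits, labels)
#
# Inference (sketch): feed a fake non-attack indicator so the model treats
# every input as clean and predicts the original label, even for poisoned images.
#   clean_ind = torch.zeros(images.size(0), dtype=torch.long)
#   preds = model(images, clean_ind).argmax(dim=1)
```

In this sketch the indicator simply enters as an extra conditioning input; how the indicators are obtained for the poisoned training set, and how the causal analysis shapes the architecture, are specified in the paper rather than here.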
