How to Inject Backdoors with Better Consistency: Logit Anchoring on Clean Data

Zhiyuan Zhang · Lingjuan Lyu · Weiqiang Wang · Lichao Sun · Xu Sun

Keywords: [ consistency ] [ weight perturbation ]

[ Abstract ]
Wed 27 Apr 6:30 p.m. PDT — 8:30 p.m. PDT


Since training a large-scale backdoored model from scratch requires a large training dataset, several recent attacks instead inject backdoors into a trained clean model without altering its behavior on clean data. Prior work has found that backdoors can be injected into a trained clean model with Adversarial Weight Perturbation (AWP), meaning that the parameter variations in backdoor learning are small. In this work, we observe an interesting phenomenon: the parameter variations are always AWPs when tuning a trained clean model to inject backdoors. We further provide a theoretical analysis to explain this phenomenon. We are the first to formulate the behavior of maintaining accuracy on clean data as the consistency of backdoored models, which includes both global consistency and instance-wise consistency. We extensively analyze the effects of AWPs on the consistency of backdoored models. To achieve better consistency, we propose a novel anchoring loss, with a theoretical guarantee, that anchors or freezes the model's behavior on clean data.
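The core idea of logit anchoring can be sketched as penalizing the deviation of the tuned model's logits from the frozen clean model's logits on clean inputs. The snippet below is a minimal illustration only: it assumes a toy linear model `W` and a plain squared-L2 penalty, and it does not reflect the paper's actual architecture, loss weighting, or training procedure.

```python
import numpy as np

def logits(W, X):
    # Raw (pre-softmax) outputs of a toy linear model.
    return X @ W

def anchoring_loss(W_tuned, W_clean, X_clean):
    # Penalize deviation of the tuned model's logits from the frozen
    # clean model's reference logits on clean data (mean squared L2).
    z_new = logits(W_tuned, X_clean)
    z_ref = logits(W_clean, X_clean)  # reference logits stay fixed
    return np.mean((z_new - z_ref) ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))                 # clean inputs
W0 = rng.normal(size=(4, 3))                # trained clean model
W1 = W0 + 0.01 * rng.normal(size=(4, 3))    # small (AWP-like) weight perturbation

clean_loss = anchoring_loss(W0, W0, X)      # 0.0: behaviors are identical
perturbed_loss = anchoring_loss(W1, W0, X)  # small but nonzero after perturbation
```

In an attack, this anchoring term would be minimized jointly with a backdoor objective, so the perturbed weights implant the trigger while the logits on clean data stay pinned to the clean model's.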
