Optimal transport based adversarial patch to leverage large scale attack transferability

Pol Labarbarie · Adrien CHAN-HON-TONG · Stéphane Herbin · Milad Leyli-abadi

Halle B #222
Thu 9 May 7:30 a.m. PDT — 9:30 a.m. PDT


Adversarial patch attacks, in which a small patch placed in the scene fools neural networks, have been studied for numerous applications. Focusing on image classification, we consider the black-box transfer setting, where the attacker has no knowledge of the target model. Instead of forcing corrupted image representations to cross the nearest decision boundaries or to converge to a particular point, we propose a distribution-oriented approach: we rely on optimal transport to push the feature distribution of attacked images towards an already modeled distribution. We show that this distribution-oriented approach yields more transferable patches. Through digital experiments on ImageNet-1K, we provide evidence that our patches are the only ones that can simultaneously influence multiple Transformer models and Convolutional Neural Networks. Physical-world experiments demonstrate that our patch can affect deployed systems without explicit knowledge of them.
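The distribution-matching idea can be illustrated with an entropic optimal-transport (Sinkhorn) cost between two feature clouds: features of patched images and samples from the target distribution. The sketch below is illustrative only; the function name, the uniform weights, and the squared-Euclidean ground cost are assumptions, not the authors' implementation. In an attack, such a cost would be minimized with respect to the patch pixels.

```python
import numpy as np

def _lse(Z, axis):
    # Numerically stable log-sum-exp along an axis.
    m = Z.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(Z - m).sum(axis=axis, keepdims=True))).squeeze(axis)

def sinkhorn_cost(X, Y, eps=0.1, n_iters=200):
    """Entropic-regularized OT cost between point clouds X (n, d) and Y (m, d).

    Illustrative sketch: uniform weights, squared-Euclidean ground cost,
    log-domain Sinkhorn iterations for stability.
    """
    # Pairwise squared-Euclidean cost matrix.
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    n, m = C.shape
    log_a = np.full(n, -np.log(n))  # uniform source weights (log)
    log_b = np.full(m, -np.log(m))  # uniform target weights (log)
    f = np.zeros(n)                 # dual potentials
    g = np.zeros(m)
    for _ in range(n_iters):
        f = eps * (log_a - _lse((g[None, :] - C) / eps, axis=1))
        g = eps * (log_b - _lse((f[:, None] - C) / eps, axis=0))
    P = np.exp((f[:, None] + g[None, :] - C) / eps)  # transport plan
    return (P * C).sum()                             # cost <P, C>
```

Matching clouds give a near-zero cost, while a shifted cloud gives a large one, which is the signal the attack would push the feature distribution along.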