Poster

Robust Representation Consistency Model via Contrastive Denoising

Jiachen Lei · Julius Berner · Jiongxiao Wang · Zhongzhu Chen · Chaowei Xiao · Zhongjie Ba · Kui Ren · Jun Zhu · Anima Anandkumar

Hall 3 + Hall 2B #338
[ Project Page ]
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Robustness is essential for deep neural networks, especially in security-sensitive applications. To this end, randomized smoothing provides theoretical guarantees for certifying robustness against adversarial perturbations. Recently, diffusion models have been successfully employed for randomized smoothing to purify noise-perturbed samples before making predictions with a standard classifier. While these methods excel at small perturbation radii, they struggle with larger perturbations and incur a significant computational overhead during inference compared to classical methods. To address this, we reformulate the generative modeling task along the diffusion trajectories in pixel space as a discriminative task in the latent space. Specifically, we use instance discrimination to achieve consistent representations along the trajectories by aligning temporally adjacent points. After fine-tuning based on the learned representations, our model enables implicit denoising-then-classification via a single prediction, substantially reducing inference costs. We conduct extensive experiments on various datasets and achieve state-of-the-art performance with a minimal computational budget during inference. For example, our method outperforms the certified accuracy of diffusion-based methods on ImageNet across all perturbation radii by 5.3% on average, with gains of up to 11.6% at larger radii, while reducing inference costs by 85x on average.
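
To make the reformulation concrete, the sketch below illustrates the two ingredients the abstract describes: a contrastive (InfoNCE-style) objective that aligns representations of temporally adjacent points on a diffusion trajectory, and a randomized-smoothing prediction in which each noisy copy of the input is classified in a single forward pass, with no separate diffusion denoiser. This is a minimal sketch under assumed names (encoder, perturb, smoothed_predict, the sigma arguments), not the authors' implementation.

    # Illustrative sketch only; function names, noise schedule, and loss details
    # are assumptions, not the paper's released code.
    import torch
    import torch.nn.functional as F

    def perturb(x, sigma):
        """Sample a point at noise level sigma on the diffusion trajectory of x."""
        return x + sigma * torch.randn_like(x)

    def consistency_infonce_loss(encoder, x, sigma_t, sigma_s, temperature=0.1):
        """InfoNCE loss treating temporally adjacent noisy views of the same
        image as positives and other images in the batch as negatives."""
        z_t = F.normalize(encoder(perturb(x, sigma_t)), dim=1)  # (B, D)
        z_s = F.normalize(encoder(perturb(x, sigma_s)), dim=1)  # (B, D)
        logits = z_t @ z_s.T / temperature                      # pairwise similarities
        labels = torch.arange(x.size(0), device=x.device)       # positives on the diagonal
        # Symmetrized cross-entropy: each view must identify its own trajectory.
        return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

    @torch.no_grad()
    def smoothed_predict(classifier, x, sigma, num_classes, n_samples=100):
        """Randomized-smoothing prediction by majority vote over noisy copies;
        the fine-tuned classifier absorbs the denoising step, so each copy
        costs one forward pass."""
        votes = torch.zeros(num_classes, dtype=torch.long)
        for _ in range(n_samples):
            pred = classifier(perturb(x.unsqueeze(0), sigma)).argmax(dim=1).item()
            votes[pred] += 1
        return votes.argmax().item()

Because the classifier itself handles noisy inputs, the per-sample cost drops from a multi-step diffusion purification followed by classification to a single prediction, which is where the reported inference savings come from.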
