
In-Person Poster presentation / poster accept

Can CNNs Be More Robust Than Transformers?

Zeyu Wang · Yutong Bai · Yuyin Zhou · Cihang Xie

MH1-2-3-4 #79

Keywords: [ transformers ] [ CNNs ] [ Out-of-distribution robustness ] [ Deep Learning and representational learning ]


The recent success of Vision Transformers is challenging the decade-long dominance of Convolutional Neural Networks (CNNs) in image recognition. Specifically, in terms of robustness on out-of-distribution samples, recent research finds that Transformers are inherently more robust than CNNs, regardless of training setup. Moreover, it is believed that such superiority of Transformers should largely be credited to their self-attention-like architectures per se. In this paper, we question that belief by closely examining the design of Transformers. Our findings lead to three highly effective architecture designs for boosting robustness, each simple enough to be implemented in several lines of code, namely a) patchifying input images, b) enlarging the kernel size, and c) reducing the number of activation and normalization layers. Bringing these components together, we are able to build pure CNN architectures, without any attention-like operations, that are as robust as, or even more robust than, Transformers. We hope this work can help the community better understand the design of robust neural architectures. The code is publicly available at
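As a rough illustration of design a), patchifying input images means splitting the input into non-overlapping patches, equivalent to a convolutional stem whose kernel size equals its stride. The NumPy sketch below is illustrative only (the function name and shapes are assumptions, not the paper's implementation, which would use a strided convolution inside the network):

```python
import numpy as np

def patchify(image, patch_size=8):
    """Split an image of shape (C, H, W) into non-overlapping patches.

    Returns an array of shape (num_patches, C * patch_size * patch_size),
    mimicking a conv stem with kernel_size == stride == patch_size.
    """
    c, h, w = image.shape
    assert h % patch_size == 0 and w % patch_size == 0, "image must tile evenly"
    gh, gw = h // patch_size, w // patch_size
    # Carve H and W into (grid, patch) axes, then group each patch's pixels.
    x = image.reshape(c, gh, patch_size, gw, patch_size)
    x = x.transpose(1, 3, 0, 2, 4)          # (gh, gw, C, ps, ps)
    return x.reshape(gh * gw, c * patch_size * patch_size)

# A 3x32x32 input with an 8x8 patch size yields 16 patches of 192 values each.
img = np.arange(3 * 32 * 32, dtype=np.float32).reshape(3, 32, 32)
tokens = patchify(img, patch_size=8)
print(tokens.shape)  # (16, 192)
```

Designs b) and c) are similarly small edits to an existing CNN: widening each depthwise/spatial convolution's kernel, and deleting intermediate activation and normalization layers from each block.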
