Virtual presentation / top 25% paper

Designing BERT for Convolutional Networks: Sparse and Hierarchical Masked Modeling

Keyu Tian · Yi Jiang · Qishuai Diao · Chen Lin · Liwei Wang · Zehuan Yuan

Keywords: [ Convolutional Neural Networks ] [ Masked Modeling ] [ Masked Autoencoding ] [ Masked Pre-training ] [ Self-supervised Learning ] [ Unsupervised and Self-supervised Learning ]


Abstract:

We identify and overcome two key obstacles in extending the success of BERT-style pre-training, or masked image modeling, to convolutional networks (convnets): (i) convolution cannot handle irregular, randomly masked input images; (ii) the single-scale nature of BERT pre-training is inconsistent with convnets' hierarchical structure. For (i), we treat unmasked pixels as sparse voxels of 3D point clouds and use sparse convolution to encode them. This is the first use of sparse convolution for 2D masked modeling. For (ii), we develop a hierarchical decoder to reconstruct images from multi-scale encoded features. Our method, called Sparse masKed modeling (SparK), is general: it can be used directly on any convolutional model without backbone modifications. We validate it on both classical (ResNet) and modern (ConvNeXt) models: on three downstream tasks, it surpasses both state-of-the-art contrastive learning and transformer-based masked modeling by similarly large margins (around +1.0%). The improvements on object detection and instance segmentation are more significant (up to +3.5%), validating the strong transferability of the learned features. We also observe favorable scaling behavior: larger networks gain more from SparK. All of these findings support the promising future of generative pre-training on convnets. Code and pre-trained models have been released at https://github.com/keyu-tian/SparK.
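To make the abstract's two ideas concrete, the sketch below is a minimal PyTorch illustration, not the authors' released implementation: sparse convolution over visible pixels is emulated by re-masking the output of each dense convolution (so masked regions never leak into visible features), and the hierarchical decoder is a simple UNet-style fusion that fills masked positions with learnable tokens at every scale. All names (random_patch_mask, MaskedStage, HierarchicalDecoder) and hyper-parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def random_patch_mask(b, h, w, patch=32, ratio=0.6, device="cpu"):
    """Return a (B, 1, H, W) binary mask: 1 = visible pixel, 0 = masked patch."""
    keep = (torch.rand(b, 1, h // patch, w // patch, device=device) > ratio).float()
    return F.interpolate(keep, scale_factor=patch, mode="nearest")


class MaskedStage(nn.Module):
    """One encoder stage: a dense stride-2 conv whose output is re-masked,
    emulating sparse convolution (features exist only at visible positions)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, stride=2, padding=1)
        self.bn = nn.BatchNorm2d(c_out)

    def forward(self, x, mask):
        mask = F.max_pool2d(mask, 2)               # keep mask in step with the stride-2 conv
        x = F.relu(self.bn(self.conv(x))) * mask   # zero features at masked positions
        return x, mask


class HierarchicalDecoder(nn.Module):
    """UNet-style decoder: at each scale, holes in the encoder feature map are
    filled with a learnable mask token, then fused with the upsampled coarser
    map, reconstructing the image coarse-to-fine from multi-scale features."""
    def __init__(self, chans):                     # chans ordered fine -> coarse
        super().__init__()
        self.tokens = nn.ParameterList(nn.Parameter(torch.zeros(1, c, 1, 1)) for c in chans)
        self.ups = nn.ModuleList(
            nn.ConvTranspose2d(chans[i + 1], chans[i], 2, stride=2)
            for i in range(len(chans) - 1))
        self.head = nn.Conv2d(chans[0], 3, 1)      # project back to RGB

    def forward(self, feats, masks):               # feats/masks ordered fine -> coarse
        x = None
        for i in reversed(range(len(feats))):
            # feats[i] is already zero at masked positions; add tokens there
            filled = feats[i] + self.tokens[i] * (1 - masks[i])
            x = filled if x is None else filled + x
            if i > 0:
                x = self.ups[i - 1](x)
        return self.head(x)                        # output at the finest feature scale


# Minimal end-to-end usage on a toy batch.
img = torch.randn(2, 3, 224, 224)
mask = random_patch_mask(2, 224, 224)
stages = nn.ModuleList([MaskedStage(3, 64), MaskedStage(64, 128), MaskedStage(128, 256)])
feats, masks, x, m = [], [], img * mask, mask
for stage in stages:
    x, m = stage(x, m)
    feats.append(x)
    masks.append(m)
recon = HierarchicalDecoder([64, 128, 256])(feats, masks)   # shape (2, 3, 112, 112)
```

In this toy version the reconstruction comes out at the finest feature scale, so a real implementation would add further upsampling to full resolution and, as in masked autoencoding generally, compute an L2 loss only on the masked patches.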
