In-Person Poster presentation / poster accept

Understanding the Covariance Structure of Convolutional Filters

Asher Trockman · Devin Willmott · Zico Kolter

MH1-2-3-4 #70

Keywords: [ Deep Learning and representational learning ] [ computer vision ] [ initialization ] [ transfer learning ] [ convnext ] [ convolutional neural network ] [ convmixer ] [ gaussian ] [ spatial mixing ] [ covariance ] [ init ] [ convolution ]


Abstract:

Neural network weights are typically initialized at random from univariate distributions, controlling just the variance of individual weights even in highly-structured operations like convolutions. Recent ViT-inspired convolutional networks such as ConvMixer and ConvNeXt use large-kernel depthwise convolutions whose learned filters have notable structure; this presents an opportunity to study their empirical covariances. In this work, we first observe that such learned filters have highly-structured covariance matrices, and moreover, we find that covariances calculated from small networks may be used to effectively initialize a variety of larger networks of different depths, widths, patch sizes, and kernel sizes, indicating a degree of model-independence to the covariance structure. Motivated by these findings, we then propose a learning-free multivariate initialization scheme for convolutional filters using a simple, closed-form construction of their covariance. Models using our initialization outperform those using traditional univariate initializations, and typically meet or exceed the performance of those initialized from the covariances of learned filters; in some cases, this improvement can be achieved without training the depthwise convolutional filters at all. Our code is available at https://github.com/locuslab/convcov.
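The core idea in the abstract, estimating the covariance of learned depthwise filters and reusing it to initialize other networks, can be illustrated with a short sketch. The code below is a hypothetical, minimal illustration and not the paper's closed-form construction or the official implementation at https://github.com/locuslab/convcov; the function names (`estimate_filter_covariance`, `sample_filters`) and the random stand-in for learned weights are assumptions for demonstration only.

```python
# Hypothetical sketch: initialize depthwise convolutional filters by sampling
# from a multivariate Gaussian whose mean and covariance are estimated from
# the learned filters of a (smaller) trained model.
import torch

def estimate_filter_covariance(filters: torch.Tensor):
    """Estimate mean and covariance of flattened k x k depthwise filters.

    filters: tensor of shape (num_filters, k, k), e.g. the depthwise
    convolution weights taken from a trained ConvMixer/ConvNeXt-style block.
    """
    flat = filters.reshape(filters.shape[0], -1)           # (N, k*k)
    mean = flat.mean(dim=0)                                # (k*k,)
    centered = flat - mean
    cov = centered.T @ centered / (flat.shape[0] - 1)      # (k*k, k*k)
    return mean, cov

def sample_filters(mean: torch.Tensor, cov: torch.Tensor,
                   num_filters: int, kernel_size: int) -> torch.Tensor:
    """Draw new filters from N(mean, cov), reshaped to (num_filters, k, k)."""
    dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
    samples = dist.sample((num_filters,))                  # (num_filters, k*k)
    return samples.reshape(num_filters, kernel_size, kernel_size)

if __name__ == "__main__":
    # Stand-in for the 7x7 depthwise filters of a small trained model.
    small_model_filters = torch.randn(256, 7, 7)
    mean, cov = estimate_filter_covariance(small_model_filters)
    # Small diagonal jitter keeps the empirical covariance positive definite.
    cov = cov + 1e-6 * torch.eye(cov.shape[0])
    # Initialize a wider model's depthwise filters from the same covariance.
    new_filters = sample_filters(mean, cov, num_filters=1024, kernel_size=7)
    print(new_filters.shape)  # torch.Size([1024, 7, 7])
```

The abstract's learning-free scheme goes a step further by replacing the empirically estimated covariance with a simple closed-form construction, so no trained reference model is needed at all.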
