In-Person Poster presentation / poster accept
Understanding the Covariance Structure of Convolutional Filters
Asher Trockman · Devin Willmott · Zico Kolter
MH1-2-3-4 #70
Keywords: [ convolution ] [ init ] [ covariance ] [ spatial mixing ] [ gaussian ] [ convmixer ] [ convolutional neural network ] [ convnext ] [ transfer learning ] [ initialization ] [ computer vision ] [ Deep Learning and representational learning ]
Neural network weights are typically initialized at random from univariate distributions, controlling just the variance of individual weights even in highly structured operations like convolutions. Recent ViT-inspired convolutional networks such as ConvMixer and ConvNeXt use large-kernel depthwise convolutions whose learned filters have notable structure; this presents an opportunity to study their empirical covariances. In this work, we first observe that such learned filters have highly structured covariance matrices, and moreover, we find that covariances calculated from small networks may be used to effectively initialize a variety of larger networks of different depths, widths, patch sizes, and kernel sizes, indicating a degree of model-independence in the covariance structure. Motivated by these findings, we then propose a learning-free multivariate initialization scheme for convolutional filters using a simple, closed-form construction of their covariance. Models using our initialization outperform those using traditional univariate initializations, and typically meet or exceed the performance of those initialized from the covariances of learned filters; in some cases, this improvement can be achieved without training the depthwise convolutional filters at all. Our code is available at https://github.com/locuslab/convcov.
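As a concrete illustration of what a multivariate filter initialization can look like, the PyTorch sketch below samples depthwise filters from a zero-mean Gaussian whose covariance between two filter taps decays with their spatial distance. This is a minimal sketch under an assumed Gaussian-decay covariance, not the paper's exact closed-form construction (see the linked repository for that); the helper names and the `length_scale` parameter are hypothetical.

```python
# Sketch: sample depthwise conv filters from N(0, Sigma) with a structured
# covariance over filter taps, instead of the usual i.i.d. univariate init.
# NOTE: the Gaussian-decay covariance here is an illustrative assumption,
# not the paper's closed-form construction.
import torch

def gaussian_covariance(kernel_size: int, length_scale: float = 1.0) -> torch.Tensor:
    """Covariance over the k*k taps: Sigma[a, b] = exp(-||p_a - p_b||^2 / (2 l^2))."""
    coords = torch.stack(torch.meshgrid(
        torch.arange(kernel_size, dtype=torch.float32),
        torch.arange(kernel_size, dtype=torch.float32),
        indexing="ij",
    ), dim=-1).reshape(-1, 2)                      # (k*k, 2) tap positions
    sq_dists = torch.cdist(coords, coords).pow(2)  # pairwise squared distances
    return torch.exp(-sq_dists / (2 * length_scale ** 2))

def init_depthwise_filters(conv: torch.nn.Conv2d, length_scale: float = 1.0) -> None:
    """Draw each depthwise filter i.i.d. from N(0, Sigma) and load it into conv."""
    k = conv.kernel_size[0]
    sigma = gaussian_covariance(k, length_scale)
    # A Cholesky factor maps i.i.d. standard normals to correlated filter taps.
    chol = torch.linalg.cholesky(sigma + 1e-6 * torch.eye(k * k))
    z = torch.randn(conv.out_channels, k * k)
    filters = (z @ chol.T).reshape(conv.out_channels, 1, k, k)
    with torch.no_grad():
        conv.weight.copy_(filters)

# Usage: a large-kernel depthwise convolution, as in ConvMixer-style blocks.
dw = torch.nn.Conv2d(256, 256, kernel_size=9, padding=4, groups=256)
init_depthwise_filters(dw, length_scale=1.5)
```

Because the covariance depends only on the kernel size (not on depth, width, or patch size), the same construction applies unchanged across the model variants the abstract describes.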