Bayesian Neural Network Priors Revisited

Vincent Fortuin · Adrià Garriga-Alonso · Sebastian Ober · Florian Wenzel · Gunnar Rätsch · Richard E. Turner · Mark van der Wilk · Laurence Aitchison

Keywords: [ bayesian deep learning ] [ bayesian neural networks ]

Mon 25 Apr 2:30 a.m. PDT — 4:30 a.m. PDT


Isotropic Gaussian priors are the de facto standard for modern Bayesian neural network inference. However, it is unclear whether these priors accurately reflect our true beliefs about the weight distributions or give optimal performance. To find better priors, we study summary statistics of neural network weights in networks trained using stochastic gradient descent (SGD). We find that convolutional neural network (CNN) and ResNet weights display strong spatial correlations, while fully connected networks (FCNNs) display heavy-tailed weight distributions. We show that building these observations into priors can lead to improved performance on a variety of image classification datasets. Surprisingly, these priors mitigate the cold posterior effect in FCNNs, but slightly increase the cold posterior effect in ResNets.
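The two empirical observations above can be illustrated with a short sketch. Below, a spatially correlated Gaussian prior over a 3×3 convolutional filter is built from an RBF-style kernel over filter positions (one simple choice for inducing spatial correlation, not necessarily the paper's exact construction), and a Student-t distribution stands in for a heavy-tailed fully connected weight prior. Its excess kurtosis is compared against a Gaussian's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Spatially correlated Gaussian prior over a 3x3 conv filter:
# correlation between two filter taps decays with their squared
# Euclidean distance (an RBF-style kernel; an illustrative choice).
coords = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float)
d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
cov = np.exp(-d2 / 2.0)  # 9x9 covariance over filter positions
filters = rng.multivariate_normal(np.zeros(9), cov, size=1000)

# Neighbouring taps are positively correlated, unlike under an
# isotropic prior, where this correlation would be ~0.
corr = np.corrcoef(filters[:, 0], filters[:, 1])[0, 1]

# Heavy-tailed prior for fully connected weights: a Student-t with few
# degrees of freedom has positive excess kurtosis; a Gaussian has ~0.
w_gauss = rng.standard_normal(100_000)
w_heavy = stats.t.rvs(df=5, size=100_000, random_state=0)
k_gauss = stats.kurtosis(w_gauss)  # near 0
k_heavy = stats.kurtosis(w_heavy)  # clearly positive
```

Matching a prior's correlation structure and tail behaviour to the statistics of SGD-trained weights, as measured here by neighbour correlation and excess kurtosis, is the core idea behind the improved priors studied in the paper.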
