In-Person Poster presentation / poster accept

Sampling-based inference for large linear models, with application to linearised Laplace

Javier Antorán · Shreyas Padhy · Riccardo Barbano · Eric Nalisnick · David Janz · José Miguel Hernández-Lobato

MH1-2-3-4 #126

Keywords: [ Probabilistic Methods ] [ large-scale regression ] [ linearised Laplace ] [ evidence framework ] [ Bayesian linear regression ] [ Laplace ] [ sample-then-optimise ] [ Bayesian neural network ] [ Bayesian deep learning ] [ uncertainty estimation ] [ EM ]


Abstract:

Large-scale linear models are ubiquitous throughout machine learning, with contemporary application as surrogate models for neural network uncertainty quantification, most notably in the linearised Laplace method. Alas, the computational cost associated with Bayesian linear models constrains this method's application to small networks, small output spaces and small datasets. We address this limitation by introducing a scalable sample-based Bayesian inference method for conjugate Gaussian multi-output linear models, together with a matching method for hyperparameter (regularisation) selection. Furthermore, we use a classic feature normalisation method (the g-prior) to resolve a previously highlighted pathology of the linearised Laplace method. Together, these contributions allow us to perform linearised neural network inference with ResNet-18 on CIFAR-100 (11M parameters, 100 output dimensions × 50k datapoints) and with a U-Net on a high-resolution tomographic reconstruction task (2M parameters, 251k output dimensions).
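The core primitive the abstract refers to, drawing exact posterior samples from a conjugate Gaussian linear model by minimising a randomly perturbed least-squares objective ("sample then optimise"), can be illustrated compactly. Below is a minimal NumPy/SciPy sketch of that idea, not the paper's implementation: it assumes an isotropic prior N(0, (1/alpha) I), the function and variable names are ours, and the paper's large-scale stochastic solver over neural network features is replaced here by conjugate gradients on a toy problem.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sample_posterior(Phi, y, alpha, sigma2, rng):
    # One exact posterior sample for y = Phi @ theta + eps, with
    # eps ~ N(0, sigma2 * I) and prior theta ~ N(0, (1/alpha) * I).
    # Sample-then-optimise: perturb the targets with fresh noise, draw a
    # prior sample, and solve the resulting regularised least-squares
    # problem; the minimiser is distributed exactly as the posterior.
    n, d = Phi.shape
    eps = rng.normal(0.0, np.sqrt(sigma2), size=n)          # target perturbation
    theta0 = rng.normal(0.0, 1.0 / np.sqrt(alpha), size=d)  # prior draw

    # Posterior-precision matvec v -> (Phi^T Phi / sigma2 + alpha I) v,
    # applied matrix-free so Phi^T Phi is never formed explicitly.
    A = LinearOperator((d, d),
                       matvec=lambda v: Phi.T @ (Phi @ v) / sigma2 + alpha * v,
                       dtype=np.float64)
    b = Phi.T @ (y + eps) / sigma2 + alpha * theta0
    theta, info = cg(A, b)
    assert info == 0, "CG did not converge"
    return theta

rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 50))                    # toy feature matrix
y = Phi @ rng.normal(size=50) + 0.1 * rng.normal(size=200)
samples = np.stack([sample_posterior(Phi, y, 1.0, 0.01, rng) for _ in range(32)])
print(samples.mean(axis=0)[:5])   # Monte Carlo estimate of the posterior mean

Averaging many such samples recovers the posterior mean, and their spread estimates the posterior covariance. The appeal at scale is that each sample costs only one optimisation run driven by matrix-vector products with Phi, so the d × d posterior precision matrix is never instantiated.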
