

Implicit Neural Representation Inference for Low-Dimensional Bayesian Deep Learning

Panagiotis Dimitrakopoulos · Giorgos Sfikas · Christophoros Nikou

Halle B #199
Tue 7 May 1:45 a.m. PDT — 3:45 a.m. PDT


Bayesian inference is the standard for providing full predictive distributions with well-calibrated uncertainty estimates. However, scaling to a modern, overparameterized deep learning setting typically comes at the cost of severe and restrictive approximations, sacrificing model predictive strength. With our approach, we factor model parameters as a function of deterministic and probabilistic components; the model is solved by combining maximum a posteriori estimation of the former with inference over a low-dimensional, Implicit Neural Representation of the latter. This results in a solution that combines both predictive accuracy and good calibration, as it entails inducing stochasticity over the full set of model weights while being comparatively cheap to compute. Experimentally, our approach compares favorably to the state of the art, including much more expensive methods as well as less expressive posterior approximations over full network parameters.
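The factorization described above can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the multiplicative form `W = W_det * (1 + delta)`, the tiny one-hidden-layer INR, the shapes, and the Gaussian variational posterior over the low-dimensional code `z` are hypothetical stand-ins, not the paper's actual architecture. The point is only the structure: a deterministic (MAP) weight component, plus stochasticity injected through a low-dimensional code decoded by an implicit neural representation over weight coordinates, so that every weight becomes stochastic at low inferential cost.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes (assumptions, not from the paper):
OUT, IN = 4, 3       # weight matrix of one layer
Z_DIM = 2            # low-dimensional stochastic code

# Deterministic component: a point (MAP) estimate of the weights.
W_det = rng.normal(size=(OUT, IN))

# Gaussian variational posterior over the low-dimensional code z.
mu, log_sigma = np.zeros(Z_DIM), np.zeros(Z_DIM)

# Fixed (here random) INR parameters; in the method they would be learned.
A = rng.normal(size=(2, 8)) * 0.5     # coordinate embedding
B = rng.normal(size=(Z_DIM, 8)) * 0.5 # code embedding
b = np.zeros(8)
c = rng.normal(size=(8,)) * 0.1       # readout to a scalar perturbation

def inr(coords, z):
    """Tiny implicit neural representation: maps each weight's (row, col)
    coordinates plus the stochastic code z to a multiplicative perturbation.
    Hypothetical architecture: one tanh hidden layer."""
    h = np.tanh(coords @ A + z @ B + b)
    return h @ c

# Normalized (row, col) coordinates for every entry of W.
coords = np.array([[i / OUT, j / IN] for i in range(OUT) for j in range(IN)])

def sample_weights():
    """One stochastic sample of the full weight matrix: draw z from the
    variational posterior, decode it through the INR, perturb W_det."""
    z = mu + np.exp(log_sigma) * rng.normal(size=Z_DIM)  # reparameterization
    delta = inr(coords, np.tile(z, (coords.shape[0], 1)))
    return W_det * (1.0 + delta.reshape(OUT, IN))

# Averaging predictions over several such weight samples would yield the
# predictive distribution from which uncertainty estimates are read off.
samples = [sample_weights() for _ in range(5)]
print(np.stack(samples).shape)
```

Note that all stochasticity lives in the `Z_DIM`-dimensional code, yet every entry of the weight matrix varies across samples; this is the sense in which the approach induces stochasticity over the full set of weights while keeping inference low-dimensional.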
