
Poster in Workshop: Frontiers in Probabilistic Inference: learning meets Sampling

Approximate Posteriors in Neural Networks: A Sampling Perspective

Julius Kobialka · Emanuel Sommer · Juntae Kwon · Daniel Dold · David Rügamer


Abstract:

The landscape of neural network loss functions is known to be highly complex, and the ability of gradient-based approaches to find well-generalizing solutions to such high-dimensional problems is often considered a miracle. Similarly, Bayesian neural networks (BNNs) inherit this complexity through the model's likelihood. In applications where BNNs are used to account for weight uncertainty, recent advances in sampling-based inference (SAI) have shown promising results, outperforming other approximate Bayesian inference (ABI) methods. In this work, we analyze the approximate posterior implicitly defined by SAI and uncover key insights into its success. Among other things, we demonstrate how SAI handles symmetries differently than ABI, and examine the role of overparameterization. Further, we investigate the characteristics of approximate posteriors with sampling budgets scaled far beyond previously studied limits and explain why the localized behavior of samplers does not inherently constitute a disadvantage.
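To make the notion of a posterior "implicitly defined by SAI" concrete, the following is a minimal, illustrative sketch of sampling-based inference for a tiny BNN using stochastic gradient Langevin dynamics (SGLD). It is not the authors' method or code; the network size, priors, step size, and thinning schedule are all assumptions chosen for illustration. The approximate posterior is simply the collection of retained weight samples.

```python
# Illustrative sketch only: SGLD sampling of a tiny Bayesian neural network.
# All hyperparameters and model choices here are assumptions, not the paper's setup.
import jax
import jax.numpy as jnp

def mlp(params, x, hidden=16):
    # Unpack a flat parameter vector into a 1-hidden-layer tanh network.
    w1 = params[:hidden].reshape(1, hidden)
    b1 = params[hidden:2 * hidden]
    w2 = params[2 * hidden:3 * hidden].reshape(hidden, 1)
    b2 = params[3 * hidden]
    h = jnp.tanh(x @ w1 + b1)
    return (h @ w2).squeeze(-1) + b2

def log_posterior(params, x, y, noise_std=0.1, prior_std=1.0):
    # Gaussian likelihood plus an isotropic Gaussian prior on the weights.
    pred = mlp(params, x)
    log_lik = -0.5 * jnp.sum((y - pred) ** 2) / noise_std ** 2
    log_prior = -0.5 * jnp.sum(params ** 2) / prior_std ** 2
    return log_lik + log_prior

@jax.jit
def sgld_step(params, key, x, y, step_size=1e-4):
    # One SGLD update: a gradient step on the log posterior plus injected Gaussian noise.
    grad = jax.grad(log_posterior)(params, x, y)
    noise = jax.random.normal(key, params.shape)
    return params + 0.5 * step_size * grad + jnp.sqrt(step_size) * noise

# Toy 1D regression data (assumed for illustration).
key = jax.random.PRNGKey(0)
x = jnp.linspace(-2.0, 2.0, 32).reshape(-1, 1)
y = jnp.sin(3.0 * x).squeeze(-1) + 0.1 * jax.random.normal(key, (32,))

dim = 3 * 16 + 1  # flat parameter count for hidden=16
params = 0.1 * jax.random.normal(jax.random.PRNGKey(1), (dim,))

samples = []
for i in range(5000):
    key, sub = jax.random.split(key)
    params = sgld_step(params, sub, x, y)
    if i % 50 == 0:  # thin the chain; keep every 50th draw
        samples.append(params)
samples = jnp.stack(samples)  # the sample-based approximate posterior over weights
```

In contrast to ABI methods such as mean-field variational inference, which fit an explicit parametric distribution over the weights, the approximate posterior here has no closed form: it is defined entirely by the retained samples, which is the object the abstract refers to as the posterior implicitly defined by SAI.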
