ICLR 2018


Workshop

Stochastic Gradient Langevin Dynamics that Exploit Neural Network Structure

Zachary Nado · Jasper Snoek · Roger Grosse · David Duvenaud · James Martens · Bowen Xu

East Meeting Level 8 + 15 #3

Tractable approximate Bayesian inference for deep neural networks remains challenging. Stochastic Gradient Langevin Dynamics (SGLD) offers a tractable approximation to the gold standard of Hamiltonian Monte Carlo. We improve on existing SGLD methods by incorporating K-FAC, a recently developed tractable approximation of the Fisher information, as a preconditioner.
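As a rough illustration of the idea (not the authors' implementation): preconditioned SGLD scales both the gradient drift and the injected Gaussian noise by a preconditioner M, here an approximation to the inverse Fisher such as K-FAC provides. Below is a minimal NumPy sketch assuming a fixed preconditioner supplied as its matrix square root; a position-dependent preconditioner adds a correction term, omitted here for simplicity, and all names are illustrative.

```python
import numpy as np

def preconditioned_sgld_step(theta, grad_log_post, precond_sqrt, step_size, rng):
    """One preconditioned SGLD update (illustrative sketch, not the paper's code).

    theta          -- current parameter vector, shape (d,)
    grad_log_post  -- stochastic estimate of grad log p(theta | data), shape (d,)
    precond_sqrt   -- matrix square root of the preconditioner M, shape (d, d);
                      an inverse-Fisher approximation such as K-FAC would supply this
    step_size      -- SGLD step size epsilon
    rng            -- np.random.Generator for the injected noise
    """
    M = precond_sqrt @ precond_sqrt.T                  # preconditioner M = sqrt(M) sqrt(M)^T
    drift = 0.5 * step_size * (M @ grad_log_post)      # preconditioned gradient drift
    noise = np.sqrt(step_size) * (precond_sqrt @ rng.standard_normal(theta.shape))
    return theta + drift + noise
```

With M = I this reduces to vanilla SGLD; the appeal of K-FAC here is that its block-diagonal, Kronecker-factored structure makes applying the inverse Fisher and its square root cheap for neural network layers.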
