

Workshop

Learning to Infer

Joe Marino · Yisong Yue · Stephan Mandt

East Meeting Level 8 + 15 #23

Thu 3 May, 4:30 p.m. PDT

Inference models, which replace an optimization-based inference procedure with a learned model, have been fundamental in advancing Bayesian deep learning, the most notable example being variational auto-encoders (VAEs). In this paper, we propose iterative inference models, which learn how to optimize a variational lower bound through repeatedly encoding gradients. Our approach generalizes VAEs under certain conditions, and by viewing VAEs in the context of iterative inference, we provide further insight into several recent empirical findings. We demonstrate the inference optimization capabilities of iterative inference models, explore unique aspects of these models, and show that they outperform standard inference models on typical benchmark data sets.
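To make the idea concrete, below is a minimal sketch (not the authors' code) of an iterative inference model for a Gaussian-latent VAE: an update network repeatedly encodes the current variational parameters together with the gradients of the ELBO with respect to them, and outputs an additive refinement. All module names, layer sizes, and the choice of a Bernoulli likelihood are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IterativeInferenceModel(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256):
        super().__init__()
        # Generative model p(x|z): a simple decoder with an assumed Bernoulli likelihood.
        self.decoder = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))
        # Iterative inference model: maps (mu, logvar) and their ELBO gradients
        # to an additive update of the variational parameters.
        self.update_net = nn.Sequential(nn.Linear(4 * z_dim, h_dim), nn.ReLU(),
                                        nn.Linear(h_dim, 2 * z_dim))
        self.z_dim = z_dim

    def elbo(self, x, mu, logvar):
        # Single-sample reparameterized estimate of the variational lower bound.
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        logits = self.decoder(z)
        log_px_z = -nn.functional.binary_cross_entropy_with_logits(
            logits, x, reduction="none").sum(-1)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return (log_px_z - kl).mean()

    def infer(self, x, num_steps=5):
        # Start from a fixed initial posterior estimate and refine it iteratively.
        mu = torch.zeros(x.size(0), self.z_dim, requires_grad=True)
        logvar = torch.zeros(x.size(0), self.z_dim, requires_grad=True)
        for _ in range(num_steps):
            elbo = self.elbo(x, mu, logvar)
            grad_mu, grad_logvar = torch.autograd.grad(
                elbo, (mu, logvar), create_graph=True)
            # Encode current estimates and their gradients; apply the learned update.
            update = self.update_net(
                torch.cat([mu, logvar, grad_mu, grad_logvar], dim=-1))
            mu = mu + update[:, :self.z_dim]
            logvar = logvar + update[:, self.z_dim:]
        return mu, logvar
```

In this sketch, an outer training loop would call infer, evaluate the ELBO at the final (mu, logvar), and backpropagate through the refinement steps so that both the decoder and the update network are trained end to end; a standard VAE encoder corresponds to a single direct mapping from x instead of this gradient-encoding loop.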
