Poster in Workshop: Deep Generative Model in Machine Learning: Theory, Principle and Efficacy

Solving Bayesian inverse problems with diffusion priors and off-policy RL

Luca Scimeca · Siddarth Venkatraman · Moksh Jain · Minsu Kim · Marcin Sendera · Mohsin Hasan · Alexandre Adam · Yashar Hezaveh · Laurence Perreault-Levasseur · Yoshua Bengio · Glen Berseth · Nikolay Malkin

Keywords: [ Bayesian ] [ diffusion ] [ inverse problems ] [ MCMC ]


Abstract:

This paper presents a practical application of Relative Trajectory Balance (RTB), a recently introduced off-policy reinforcement learning (RL) objective that can asymptotically solve Bayesian inverse problems optimally. We extend the original work by using RTB to train conditional diffusion model posteriors from pretrained unconditional priors for challenging linear and non-linear inverse problems in vision and science. We use the objective alongside techniques such as off-policy backtracking exploration to improve training. Importantly, our results show that existing training-free diffusion posterior methods struggle to perform effective posterior inference in latent space due to inherent biases.
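As a rough illustration of the idea (not the authors' code), RTB trains a posterior diffusion sampler so that, along any denoising trajectory, the posterior's trajectory probability times a learned normalizing constant matches the prior's trajectory probability times the likelihood of the observation. The sketch below, assuming per-step log-probabilities are already available from the posterior and frozen prior models (all function and argument names here are hypothetical), shows the squared log-ratio form of the objective:

```python
import torch

def rtb_loss(log_q_steps: torch.Tensor,
             log_p_steps: torch.Tensor,
             log_r_x0: torch.Tensor,
             log_Z: torch.Tensor) -> torch.Tensor:
    """Relative Trajectory Balance loss for one denoising trajectory x_T -> ... -> x_0.

    log_q_steps: per-step log-probs log q_theta(x_{t-1} | x_t) under the
                 trainable posterior diffusion model.
    log_p_steps: per-step log-probs log p(x_{t-1} | x_t) under the frozen
                 unconditional prior.
    log_r_x0:    log-likelihood log r(y | x_0) of the measurement given the
                 final sample.
    log_Z:       learnable scalar estimating the log normalizing constant.
    """
    # Enforce: log_Z + sum_t log q = log r(y | x_0) + sum_t log p
    log_ratio = log_Z + log_q_steps.sum() - log_p_steps.sum() - log_r_x0
    return log_ratio ** 2
```

Because the loss depends only on the log-probabilities of a complete trajectory, it can be evaluated on off-policy trajectories, for example ones obtained by partially re-noising an earlier sample and denoising it again, which is the role of the backtracking exploration mentioned above.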
