Poster

Delta-AI: Local objectives for amortized inference in sparse graphical models

Jean-Pierre Falet · Hae Beom Lee · Nikolay Malkin · Chen Sun · Dragos Secrieru · Dinghuai Zhang · Guillaume Lajoie · Yoshua Bengio

Halle B #169

Abstract: We present a new algorithm for amortized inference in sparse probabilistic graphical models (PGMs), which we call $\Delta$-amortized inference ($\Delta$-AI). Our approach is based on the observation that when the sampling of variables in a PGM is seen as a sequence of actions taken by an agent, the sparsity of the PGM enables local credit assignment in the agent's policy learning objective. This yields a local constraint that can be turned into a local loss in the style of generative flow networks (GFlowNets), which enables off-policy training while avoiding the need to instantiate all the random variables for each parameter update, thus speeding up training considerably. The $\Delta$-AI objective matches the conditional distribution of a variable given its Markov blanket under a tractable learned sampler, which has the structure of a Bayesian network, to the corresponding conditional distribution under the target PGM. As such, the trained sampler recovers marginals and conditional distributions of interest and enables inference over partial subsets of variables. We illustrate $\Delta$-AI's effectiveness for sampling from synthetic PGMs and for training latent variable models with sparse factor structure. Code: https://github.com/GFNOrg/Delta-AI.
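To make the local-constraint idea concrete, here is a minimal sketch on a toy chain-structured Ising model: for each variable, a learned per-variable conditional is matched against the target's conditional given that variable's Markov blanket, so no full-joint instantiation is needed. All names (`local_delta_loss`, the linear logit parameterization, the 3-variable chain) are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch only: a chain Ising model over x in {-1,+1}^3
# with pairwise couplings J[i][j]; variable names are hypothetical.
J = {(0, 1): 0.8, (1, 2): -0.5}

def neighbors(i):
    """Markov blanket of variable i in this pairwise chain PGM."""
    nbrs = []
    for (a, b) in J:
        if a == i:
            nbrs.append(b)
        if b == i:
            nbrs.append(a)
    return nbrs

def coupling(i, j):
    return J.get((i, j), J.get((j, i), 0.0))

def target_conditional_logit(i, x):
    """Logit of p(x_i = +1 | Markov blanket) under the Ising target:
    p(x_i = +1 | nbrs) = sigmoid(2 * sum_j J_ij x_j)."""
    return 2.0 * sum(coupling(i, j) * x[j] for j in neighbors(i))

def sampler_conditional_logit(theta, i, x):
    """Learned sampler's logit for x_i = +1: a linear function of the
    Markov-blanket values (one weight per neighbor plus a bias)."""
    w, b = theta[i]
    return b + sum(w[j] * x[j] for j in neighbors(i))

def local_delta_loss(theta, x):
    """Delta-AI-style local objective: per-variable squared mismatch
    between the sampler's conditional logit and the target's conditional
    logit given the Markov blanket, summed over variables."""
    return sum(
        (sampler_conditional_logit(theta, i, x)
         - target_conditional_logit(i, x)) ** 2
        for i in range(3)
    )

# At the exact solution (weights = 2*J, zero bias), every local
# constraint is satisfied and the loss vanishes on any configuration.
theta_star = {i: ({j: 2.0 * coupling(i, j) for j in neighbors(i)}, 0.0)
              for i in range(3)}
x = [1, -1, 1]
print(local_delta_loss(theta_star, x))  # → 0.0
```

Because each term touches only one variable and its Markov blanket, a parameter update can be computed from a small neighborhood of the graph, which is the source of the training speedup the abstract describes.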