

Poster
in
Workshop: Deep Generative Models for Highly Structured Data

SIReN-VAE: Leveraging Flows and Amortized Inference for Bayesian Networks

Jacobie Mouton · Rodney Kroon


Abstract:

Initial work on variational autoencoders assumed independent latent variables with simple distributions. Subsequent work has explored incorporating more complex distributions and dependency structures: including normalizing flows in the encoder network allows latent variables to entangle non-linearly, creating a richer class of distributions for the approximate posterior, and stacking layers of latent variables allows more complex priors to be specified for the generative model. This work explores incorporating arbitrary dependency structures, as specified by Bayesian networks, into VAEs. This is achieved by extending both the prior and inference network with graphical residual flows—residual flows that encode conditional independence by masking the weight matrices of the flow's residual blocks. We compare our model's performance on several synthetic datasets and show its potential in data-sparse settings.
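The core mechanism described above — encoding conditional independence by masking the residual blocks' weight matrices — can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation: the function names (`make_mask`, `graphical_residual_block`), the chain-graph example, and the fixed residual scale are all assumptions for demonstration.

```python
import numpy as np

def make_mask(parents, d):
    """Build mask M with M[i, j] = 1 iff j == i or j is a parent of i
    in the Bayesian network (assumed structure; names are illustrative)."""
    M = np.eye(d)
    for child, ps in parents.items():
        for p in ps:
            M[child, p] = 1.0
    return M

def graphical_residual_block(x, W, mask, scale=0.5):
    """One residual-flow step y = x + scale * tanh((W * mask) @ x).
    Masking zeroes connections the graph forbids, so dy_i/dx_j = 0
    whenever j is neither i nor a parent of i."""
    return x + scale * np.tanh((W * mask) @ x)

# Hypothetical example: chain graph z0 -> z1 -> z2.
parents = {1: [0], 2: [1]}
d = 3
mask = make_mask(parents, d)

rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))
x = rng.normal(size=d)
y = graphical_residual_block(x, W, mask)
```

Because the mask is lower-triangular under a topological ordering of the graph, perturbing a variable with no path to `z0` (here `z2`) leaves the transformed `z0` unchanged — the flow's Jacobian inherits the sparsity pattern of the Bayesian network.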
