

Poster

Variational Autoencoders with Jointly Optimized Latent Dependency Structure

Jiawei He · Yu Gong · Joe Marino · Greg Mori · Andreas Lehrmann

Great Hall BC #22

Keywords: structure learning, deep generative models


Abstract:

We propose a method for learning the dependency structure between latent variables in deep latent variable models. Our general modeling and inference framework combines the complementary strengths of deep generative models and probabilistic graphical models. In particular, we express the latent variable space of a variational autoencoder (VAE) as a Bayesian network with a learned, flexible dependency structure. The network parameters, variational parameters, and latent topology are optimized simultaneously under a single objective. Inference is formulated via a sampling procedure that produces expectations over latent variable structures and incorporates top-down and bottom-up reasoning over latent variable values. We validate our framework in extensive experiments on MNIST, Omniglot, and CIFAR-10. Comparisons to state-of-the-art structured VAE baselines show improvements in the expressiveness of the learned model.
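To illustrate the core idea of optimizing the latent topology jointly with the ELBO, the sketch below builds a small VAE whose ordered latent nodes are connected by learnable, Gumbel-Softmax-relaxed edge gates; the edge logits, prior and posterior networks, and decoder all receive gradients from the same loss. This is a minimal sketch and not the authors' implementation: the module names, model sizes, and the simplified bottom-up-only posterior are assumptions made for this example, whereas the paper merges top-down and bottom-up information during inference.

```python
# Minimal sketch (not the authors' code): a VAE over ordered latent nodes
# z_1..z_N whose pairwise dependencies z_i -> z_j (i < j) are gated by
# learnable edge probabilities, relaxed with a Gumbel-Softmax so that the
# latent topology can be optimized jointly with the ELBO.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StructuredLatentVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8, n_nodes=4, h_dim=256):
        super().__init__()
        self.n_nodes, self.z_dim = n_nodes, z_dim
        # Edge logits c_ij; sigmoid(c_ij) is the probability that z_j depends on z_i.
        self.edge_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.encoder = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        # Bottom-up: one (mu, logvar) head per latent node, computed from the data.
        self.enc_heads = nn.ModuleList(
            nn.Linear(h_dim, 2 * z_dim) for _ in range(n_nodes))
        # Top-down: conditional prior p(z_j | gated parents of z_j).
        self.prior_nets = nn.ModuleList(
            nn.Linear(n_nodes * z_dim, 2 * z_dim) for _ in range(n_nodes))
        self.decoder = nn.Sequential(
            nn.Linear(n_nodes * z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def sample_structure(self, temperature=0.5):
        # Differentiable sample of the dependency structure (soft adjacency matrix).
        gates = torch.distributions.RelaxedBernoulli(
            torch.tensor(temperature), logits=self.edge_logits).rsample()
        # Keep only edges i -> j with i < j (fixed node ordering), which
        # guarantees the sampled structure is acyclic.
        return gates * torch.triu(torch.ones_like(gates), diagonal=1)

    def forward(self, x):
        h = self.encoder(x)
        gates = self.sample_structure()          # (n_nodes, n_nodes)
        zs, kl = [], 0.0
        for j in range(self.n_nodes):
            # Gated parent samples feed the conditional prior of node j.
            parents = [gates[i, j] * zs[i] if i < j
                       else torch.zeros(x.size(0), self.z_dim, device=x.device)
                       for i in range(self.n_nodes)]
            p_mu, p_logvar = self.prior_nets[j](torch.cat(parents, -1)).chunk(2, -1)
            # Simplified posterior: bottom-up term only.
            q_mu, q_logvar = self.enc_heads[j](h).chunk(2, -1)
            q = torch.distributions.Normal(q_mu, (0.5 * q_logvar).exp())
            p = torch.distributions.Normal(p_mu, (0.5 * p_logvar).exp())
            z_j = q.rsample()
            zs.append(z_j)
            kl = kl + torch.distributions.kl_divergence(q, p).sum(-1)
        x_logits = self.decoder(torch.cat(zs, -1))
        # Assumes binarized inputs in [0, 1], e.g. flattened MNIST images.
        recon = F.binary_cross_entropy_with_logits(
            x_logits, x, reduction='none').sum(-1)
        # Single objective: negative ELBO; edge logits, encoder, priors, and
        # decoder are all updated from this one loss.
        return (recon + kl).mean()


# Usage example with random data standing in for a binarized MNIST batch.
model = StructuredLatentVAE()
loss = model(torch.rand(32, 784))
loss.backward()
```

In this sketch the acyclicity of the latent Bayesian network comes from a fixed node ordering (only upper-triangular edges are allowed), so structure learning reduces to selecting which of those ordered edges are active.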
