In-Person Poster Presentation / Poster Accept

Trading Information between Latents in Hierarchical Variational Autoencoders

Tim Xiao · Robert Bamler

MH1-2-3-4 #106

Keywords: [ Probabilistic Methods ] [ Information Theory ] [ VAE ] [ Rate-Distortion Theory ] [ Hierarchical VAE ]


Abstract: Variational Autoencoders (VAEs) were originally motivated as probabilistic generative models in which one performs approximate Bayesian inference. The proposal of $\beta$-VAEs breaks this interpretation and generalizes VAEs to application domains beyond generative modeling (e.g., representation learning, clustering, or lossy data compression) by introducing an objective function that allows practitioners to trade off between the information content ("bit rate") of the latent representation and the distortion of reconstructed data. In this paper, we reconsider this rate/distortion trade-off in the context of hierarchical VAEs, i.e., VAEs with more than one layer of latent variables. We propose a method to control each layer's contribution to the rate independently. We identify the most general class of inference models to which our proposed method is applicable, and we derive theoretical bounds on the performance of downstream tasks as functions of the individual layers' rates. Our experiments demonstrate that the proposed method allows us to better tune hierarchical VAEs for a diverse set of practical use cases.
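The per-layer rate control described above can be illustrated with a minimal numerical sketch: a β-VAE-style objective where each latent layer's KL term ("rate") receives its own weight. The function names, the diagonal-Gaussian KL against a standard-normal prior, and the simple additive form of the loss are illustrative assumptions for this sketch, not the paper's actual objective or inference model.

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dimensions.

    A common closed form for VAE rate terms; used here purely for illustration.
    """
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def per_layer_weighted_loss(distortion, layer_stats, betas):
    """Hypothetical objective: distortion + sum_l beta_l * KL_l.

    Giving each layer its own beta_l allows tuning that layer's contribution
    to the total rate independently, in the spirit of the method described
    in the abstract (the paper's precise formulation may differ).
    """
    rates = [gaussian_kl(mu, lv) for mu, lv in layer_stats]
    loss = distortion + sum(b * r for b, r in zip(betas, rates))
    return loss, rates

# Two-layer example: layer 1 carries information (nonzero mean), layer 2 is
# collapsed onto the prior (zero rate). Raising beta_1 penalizes layer 1's rate.
layer_stats = [
    (np.array([1.0, 0.0]), np.zeros(2)),  # layer 1: KL = 0.5
    (np.zeros(2), np.zeros(2)),           # layer 2: KL = 0.0
]
loss, rates = per_layer_weighted_loss(2.0, layer_stats, betas=[2.0, 1.0])
# loss = 2.0 + 2.0 * 0.5 + 1.0 * 0.0 = 3.0
```

Setting all `betas` equal recovers the single-β trade-off of a standard β-VAE; distinct values let a practitioner shift information between layers while holding the total rate budget roughly fixed.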
