The Effects of Invertibility on the Representational Complexity of Encoders in Variational Autoencoders

Divyansh Pareek · Andrej Risteski

Keywords: [ deep learning theory ] [ variational autoencoders ]

Poster: Spot E2 in Virtual World · OpenReview
Wed 27 Apr 10:30 a.m.–12:30 p.m. PDT


Training and using modern neural-network-based latent-variable generative models (like Variational Autoencoders) often requires simultaneously training a generative direction along with an inferential (encoding) direction that approximates the posterior distribution over the latent variables. This raises the question: how complex does the inferential model need to be in order to accurately model the posterior distribution of a given generative model? In this paper, we identify an important property of the generative map that impacts the required size of the encoder. We show that if the generative map is "strongly invertible" (in a sense we suitably formalize), the inferential model need not be much more complex. Conversely, we prove that there exist non-invertible generative maps for which the encoding direction needs to be exponentially larger (under standard assumptions in computational complexity). Importantly, we do not require the generative model to be layerwise invertible, an assumption made by much of the related literature that is not satisfied by many architectures used in practice (e.g., convolution- and pooling-based networks). Thus, we provide theoretical support for the empirical wisdom that learning deep generative models is harder when the data lies on a low-dimensional manifold.
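The intuition behind the invertibility condition can be illustrated with a toy one-dimensional example (this sketch is not from the paper; the generative maps, noise model, and grid discretization here are illustrative assumptions). When the generative map g is invertible, the posterior p(z | x) tends to stay simple (a single mode near g^{-1}(x)); when g is non-invertible, several latent values explain the same observation, and the posterior the encoder must represent becomes multimodal:

```python
# Illustrative toy (assumed setup, not the paper's construction):
# model x = g(z) + noise, z ~ N(0, 1), noise ~ N(0, noise_std^2).
# We compare an invertible map g(z) = z with a non-invertible g(z) = z**2.
import numpy as np

def posterior_on_grid(g, x_obs, noise_std=0.1, grid=None):
    """Normalized discretization of p(z | x_obs) on a grid of z values."""
    if grid is None:
        grid = np.linspace(-3.0, 3.0, 2001)
    prior = np.exp(-grid**2 / 2.0)                                  # N(0, 1) prior, up to a constant
    likelihood = np.exp(-(x_obs - g(grid))**2 / (2.0 * noise_std**2))
    post = prior * likelihood
    return grid, post / post.sum()

def count_modes(p):
    """Count interior local maxima of a discretized density."""
    return int(np.sum((p[1:-1] > p[:-2]) & (p[1:-1] > p[2:])))

_, p_inv = posterior_on_grid(lambda z: z, x_obs=1.0)     # invertible map
_, p_non = posterior_on_grid(lambda z: z**2, x_obs=1.0)  # non-invertible map

print(count_modes(p_inv))  # invertible: unimodal posterior, 1 mode
print(count_modes(p_non))  # non-invertible: two modes, near z = +1 and z = -1
```

This only hints at the phenomenon in one dimension; the paper's results concern how this complexity scales with the size of the encoder network, where the separation becomes exponential for suitably chosen non-invertible maps.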
