

In-Person Poster presentation / poster accept

Structure by Architecture: Structured Representations without Regularization

Felix Leeb · Giulia Lanzillotta · Yashas Annadani · Michel Besserve · Stefan Bauer · Bernhard Schoelkopf

MH1-2-3-4 #78

Keywords: [ Deep Learning and representational learning ] [ Hybridization ] [ structure ] [ generative ] [ architecture ] [ regularization ] [ autoencoder ] [ disentanglement ]


Abstract:

We study the problem of self-supervised structured representation learning with autoencoders for downstream tasks such as generative modeling. Unlike most methods, which rely on matching an arbitrary, relatively unstructured prior distribution for sampling, we propose a sampling technique that relies solely on the independence of latent variables, thereby avoiding the trade-off between reconstruction quality and generative performance typically observed in VAEs. We design a novel autoencoder architecture capable of learning a structured representation without the need for aggressive regularization. Our structural decoders learn a hierarchy of latent variables, thereby ordering the information without any additional regularization or supervision. We demonstrate that these models learn representations that improve results on a variety of downstream tasks, including generation, disentanglement, and extrapolation, across several challenging natural image datasets.
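The abstract names two ideas: a decoder that imposes a hierarchy over latent dimensions by construction, and a sampling scheme that assumes only that the latent variables are independent. Below is a minimal PyTorch sketch of both ideas as described; `HierarchicalDecoder`, `sample_independent`, and all layer sizes are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HierarchicalDecoder(nn.Module):
    """Hypothetical decoder that consumes latent dimensions one at a time,
    so earlier dimensions necessarily carry coarser information. This is a
    sketch of the 'hierarchy of latent variables' idea, not the paper's
    exact architecture."""

    def __init__(self, latent_dim: int, hidden_dim: int = 256, out_dim: int = 784):
        super().__init__()
        # One small block per latent dimension; block i refines the running
        # hidden state using latent coordinate z[:, i] only.
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(hidden_dim + 1, hidden_dim), nn.ReLU())
            for _ in range(latent_dim)
        )
        self.head = nn.Linear(hidden_dim, out_dim)
        self.h0 = nn.Parameter(torch.zeros(hidden_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        h = self.h0.expand(z.size(0), -1)
        for i, block in enumerate(self.blocks):
            h = block(torch.cat([h, z[:, i : i + 1]], dim=1))
        return self.head(h)

def sample_independent(latents: torch.Tensor, n: int) -> torch.Tensor:
    """Draw n new codes by resampling each latent dimension independently
    from its empirical marginal over the training encodings (shape (N, d)),
    instead of matching a fixed prior. Hypothetical helper illustrating the
    independence-based sampling idea."""
    idx = torch.randint(0, latents.size(0), (n, latents.size(1)))
    return latents[idx, torch.arange(latents.size(1))]
```

Resampling each coordinate from its empirical marginal preserves per-dimension statistics while discarding cross-dimension dependence, so it is a valid generative strategy only if the latents are (approximately) independent, which is the property the abstract says the architecture is designed to encourage in place of prior-matching regularization.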
