

In-Person Poster presentation / poster accept

DAVA: Disentangling Adversarial Variational Autoencoder

Benjamin Estermann · Roger Wattenhofer

MH1-2-3-4 #143

Keywords: [ Unsupervised and Self-supervised learning ] [ variational auto-encoder ] [ Disentanglement learning ] [ generative adversarial networks ] [ curriculum learning ]


Abstract:

The use of well-disentangled representations offers many advantages for downstream tasks, e.g., increased sample efficiency or better interpretability. However, the quality of disentangled representations is often highly dependent on the choice of dataset-specific hyperparameters, in particular the regularization strength. To address this issue, we introduce DAVA, a novel training procedure for variational auto-encoders. DAVA completely alleviates the problem of hyperparameter selection. We compare DAVA to models with optimal hyperparameters. Without any hyperparameter tuning, DAVA is competitive on a diverse range of commonly used datasets. Underlying DAVA, we discover a necessary condition for unsupervised disentanglement, which we call PIPE. We demonstrate the ability of PIPE to positively predict the performance of downstream models in abstract reasoning. We also thoroughly investigate correlations with existing supervised and unsupervised metrics. The code is available at https://github.com/besterma/dava.
