

Poster

Learning Sparse Latent Representations with the Deep Copula Information Bottleneck

Aleksander Wieczorek · Mario Wieser · Damian Murezzan · Volker Roth

East Meeting level; 1,2,3 #35

Abstract:

Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings, and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on this, we show how the transformation translates into sparsity of the latent space in the new model. We evaluate our method on artificial and real data.
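The following is a minimal sketch, not the authors' code, of how a copula transformation of the marginals can be realized in practice. It assumes an empirical Gaussian-copula (normal scores) transform applied independently to each feature: each marginal is passed through its empirical CDF and then through the inverse standard-normal CDF. The function name `copula_transform` and the exact preprocessing details (e.g., whether inputs, outputs, or both are transformed, and whether empirical or parametric marginals are used) are illustrative assumptions, not the paper's specification.

```python
# Sketch of a rank-based Gaussian-copula ("normal scores") marginal transform.
# Assumption: the copula transformation referred to in the abstract maps each
# feature through its empirical CDF and the inverse standard-normal CDF, making
# downstream information-bottleneck quantities invariant to strictly monotone
# transformations of the individual features.
import numpy as np
from scipy.stats import norm, rankdata


def copula_transform(X: np.ndarray) -> np.ndarray:
    """Map each column of X to approximately standard-normal marginals.

    X: array of shape (n_samples, n_features).
    Returns an array of the same shape with rank-based Gaussian marginals.
    """
    n = X.shape[0]
    # Empirical CDF via ranks, scaled into (0, 1) to avoid infinities at the ends.
    U = rankdata(X, axis=0) / (n + 1)
    # Inverse standard-normal CDF gives the normal-scores transform.
    return norm.ppf(U)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.exponential(size=(1000, 3))   # heavily skewed marginals
    X_monotone = np.log1p(X)              # a strictly monotone reparametrization
    # The rank-based transform is identical for both versions (no ties here),
    # illustrating the invariance property the abstract refers to.
    print(np.allclose(copula_transform(X), copula_transform(X_monotone)))
```

In a deep information bottleneck pipeline, such a transform would typically be applied as a preprocessing step before the encoder, so that the learned latent representation does not depend on arbitrary monotone rescalings of the observed features.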
