Poster

Synergy Between Sufficient Changes and Sparse Mixing Procedure for Disentangled Representation Learning

Zijian Li · Shunxing Fan · Yujia Zheng · Ignavier Ng · Shaoan Xie · Guangyi Chen · Xinshuai Dong · Ruichu Cai · Kun Zhang

Hall 3 + Hall 2B #557
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Disentangled representation learning aims to uncover the latent variables underlying observed data, yet identifying these variables under mild assumptions remains challenging. Some methods rely on sufficient changes in the distribution of latent variables, indicated by auxiliary variables such as domain indices, but acquiring enough domains is often impractical. Alternative approaches exploit a structural sparsity assumption on the mixing process, but this constraint may not hold in practice. Interestingly, we find that these two seemingly unrelated assumptions can complement each other. Specifically, when conditioned on auxiliary variables, the sparse mixing process induces independence between latent and observed variables, which simplifies the mapping from estimated to true latent variables and hence compensates for the deficiency of auxiliary variables. Building on this insight, we propose an identifiability theory with less restrictive constraints on the auxiliary variables and the sparse mixing process, enhancing applicability to real-world scenarios. Additionally, we develop a generative model framework incorporating a domain encoding network and a sparse mixing constraint, and provide two implementations based on variational autoencoders and generative adversarial networks. Experimental results on synthetic and real-world datasets support our theoretical claims.
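The abstract leaves the implementation to the paper itself; the sketch below is only a minimal illustration of how the VAE-based variant might look, pairing a domain encoding network (here a hypothetical embedding of domain indices) with an L1 penalty on the decoder Jacobian as a stand-in for the sparse mixing constraint. All names, layer sizes, and the exact penalty form are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMixVAE(nn.Module):
    """Illustrative sketch (not the authors' implementation): a VAE whose
    encoder is conditioned on a learned domain embedding and whose decoder
    (the mixing function) is regularized toward a sparse Jacobian."""
    def __init__(self, x_dim, z_dim, n_domains, u_dim=8, hidden=64):
        super().__init__()
        # Domain encoding network: maps a domain index to an embedding u
        self.domain_enc = nn.Embedding(n_domains, u_dim)
        # Encoder q(z | x, u): outputs mean and log-variance
        self.enc = nn.Sequential(
            nn.Linear(x_dim + u_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * z_dim),
        )
        # Decoder p(x | z): the mixing function on which sparsity is imposed
        self.dec = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim),
        )

    def forward(self, x, domain_idx):
        u = self.domain_enc(domain_idx)
        mu, logvar = self.enc(torch.cat([x, u], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar, z

def loss_fn(model, x, domain_idx, beta=1.0, lam=0.1):
    x_hat, mu, logvar, z = model(x, domain_idx)
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Sparse-mixing penalty (an assumed form): L1 norm of the per-sample
    # decoder Jacobian dx/dz, computed with torch.func transforms
    jac = torch.func.vmap(torch.func.jacrev(model.dec))(z)  # (B, x_dim, z_dim)
    sparse = jac.abs().mean()
    return recon + beta * kl + lam * sparse

# Example usage with toy dimensions
model = SparseMixVAE(x_dim=10, z_dim=4, n_domains=5)
x = torch.randn(32, 10)
d = torch.randint(0, 5, (32,))
loss_fn(model, x, d).backward()
```

The GAN-based implementation mentioned in the abstract would replace the reconstruction and KL terms with an adversarial objective while keeping the domain encoding and the sparsity penalty on the generator's Jacobian.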
