

In-Person Poster presentation / poster accept

Disentanglement of Correlated Factors via Hausdorff Factorized Support

Karsten Roth · Mark Ibrahim · Zeynep Akata · Pascal Vincent · Diane Bouchacourt

MH1-2-3-4 #91

Keywords: [ Deep Learning and representational learning ] [ generalization ] [ representation learning ] [ disentanglement ]


Abstract:

A grand goal in deep learning research is to learn representations capable of generalizing across distribution shifts. Disentanglement is one promising direction, aimed at aligning a model's representation with the underlying factors generating the data (e.g., color or background). Existing disentanglement methods, however, rely on an often unrealistic assumption: that factors are statistically independent. In reality, factors (like object color and shape) are correlated. To address this limitation, we consider the use of a relaxed disentanglement criterion -- the Hausdorff Factorized Support (HFS) criterion -- that encourages only pairwise factorized support, rather than a factorial distribution, by minimizing a Hausdorff distance. This allows arbitrary distributions of the factors over their support, including correlations between them. We show that using HFS consistently facilitates disentanglement and recovery of ground-truth factors across a variety of correlation settings and benchmarks, even under severe training correlations and correlation shifts, in some cases yielding over +60% relative improvement over existing disentanglement methods. In addition, we find that leveraging HFS for representation learning can even facilitate transfer to downstream tasks such as classification under distribution shifts. We hope our original approach and positive empirical results inspire further progress on the open problem of robust generalization. Code is available at https://github.com/facebookresearch/disentangling-correlated-factors.
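To make the core idea concrete, below is a minimal sketch of how a pairwise factorized-support penalty of this kind could be estimated on a mini-batch of latent codes. This is an illustrative reconstruction based only on the abstract, not the authors' reference implementation (see the linked repository for that): for each pair of latent dimensions it forms the Cartesian product of the two marginal supports from batch samples and penalizes the directed Hausdorff distance from that product set to the observed joint support. The function name `hfs_pairwise` and the use of a simple Monte-Carlo batch estimate are assumptions for illustration.

```python
import torch

def hfs_pairwise(z: torch.Tensor) -> torch.Tensor:
    """Illustrative Monte-Carlo sketch of a pairwise factorized-support penalty.

    z: (batch, d) tensor of latent codes.
    For each pair of dimensions (i, j), estimate the directed Hausdorff
    distance from the Cartesian product of the marginal supports to the
    observed joint support. Since the joint support is always contained in
    the product of marginals, only this direction of the Hausdorff distance
    can be nonzero; it vanishes exactly when the support factorizes.
    """
    b, d = z.shape
    loss = z.new_zeros(())
    for i in range(d):
        for j in range(i + 1, d):
            # Observed samples of the joint support of dimensions (i, j): (b, 2)
            joint = torch.stack([z[:, i], z[:, j]], dim=1)
            # All combinations (z_i[a], z_j[c]) approximating the product
            # of marginal supports: (b * b, 2)
            prod = torch.stack(
                torch.meshgrid(z[:, i], z[:, j], indexing="ij"), dim=-1
            ).reshape(-1, 2)
            # Directed Hausdorff distance: the product point farthest from
            # its nearest joint sample.
            dists = torch.cdist(prod, joint)  # (b * b, b)
            loss = loss + dists.min(dim=1).values.max()
    return loss
```

In practice, a term like this would presumably be added with a weighting coefficient to a base representation-learning objective (e.g., a VAE loss), so that the encoder is pushed toward latents whose pairwise supports factorize while still fitting the data.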
