

Poster
in
Workshop: Deep Generative Model in Machine Learning: Theory, Principle and Efficacy

Causal Representation Learning and Inference via Mixture-Based Priors

Avinash Kori · Carles Balsells Rodas · Ben Glocker · Yingzhen Li · Francesco Locatello

Keywords: [ Diffeomorphic flows ] [ causal representations ] [ identifiability ]


Abstract:

Causal Representation Learning (CRL) aims to uncover causal symmetries in the data-generating process with minimal assumptions and data requirements. The challenge lies in identifying the causal factors and learning their relationships, which is an inherently ill-posed problem. Ensuring unique solutions, known as identifiability, is crucial but often requires strong assumptions or access to interventional or counterfactual data. In this work, we propose a novel approach that partitions the latent space: one component captures causal factors using diffeomorphic flows to model causal mechanisms, while the other accounts for exogenous noise. This structured decomposition enables our model to scale effectively to high-dimensional data and deep architectures. We establish theoretical guarantees for CRL by proving the identifiability of both causal factors and exogenous noise. Empirical results across multiple datasets validate our theoretical findings.
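To make the latent-space partition concrete, the sketch below illustrates the general idea of splitting a latent vector into a causal block and an exogenous-noise block, with an invertible (diffeomorphic) map acting on the causal block. This is a minimal toy illustration, not the authors' implementation: the dimension split, the elementwise affine flow, and all function names (`split_latent`, `flow_forward`, `flow_inverse`) are hypothetical choices made for exposition.

```python
import numpy as np

def split_latent(z, d_causal):
    """Partition latent z into a causal block z_c and an exogenous-noise
    block z_e (the split size d_causal is a modeling choice)."""
    return z[:d_causal], z[d_causal:]

def flow_forward(z_c, log_scale, shift):
    """A trivially invertible elementwise map on the causal block.
    Parameterizing the scale as exp(log_scale) keeps it strictly positive,
    so the map is a diffeomorphism for any real log_scale."""
    return np.exp(log_scale) * z_c + shift

def flow_inverse(y, log_scale, shift):
    """Exact inverse of flow_forward."""
    return (y - shift) * np.exp(-log_scale)

# Toy usage: sample a latent, split it, and check invertibility of the flow.
rng = np.random.default_rng(0)
z = rng.normal(size=8)
z_c, z_e = split_latent(z, d_causal=3)
y = flow_forward(z_c, log_scale=0.5, shift=1.0)
recovered = flow_inverse(y, log_scale=0.5, shift=1.0)
```

In practice a diffeomorphic flow would be a learned invertible network rather than a fixed affine map, but the invertibility property exercised here is what lets the causal mechanisms be modeled without losing information about the latent factors.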
