
Workshop on the Elements of Reasoning: Objects, Structure and Causality

Invariant Causal Representation Learning for Generalization in Imitation and Reinforcement Learning

Chaochao Lu · José Miguel Hernández Lobato · Bernhard Schoelkopf


A fundamental challenge in imitation and reinforcement learning is to learn policies, representations, or dynamics that do not rely on spurious correlations and that generalize beyond the specific environments on which they were trained. We investigate these generalization problems from a unified view. To this end, we propose a general framework for tackling them, with theoretical guarantees on both identifiability and generalizability under mild assumptions on environmental changes. By leveraging a diverse set of training environments, we construct a data representation that ignores spurious features and consistently predicts target variables well across environments. Following this approach, we build invariant predictors in terms of policy, representations, and dynamics. We theoretically show that the resulting policies, representations, and dynamics generalize to unseen environments. Extensive experiments on both synthetic and real-world datasets show that our methods attain improved generalization over a variety of baselines.
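The idea of a representation that "consistently predicts target variables well across environments" is closely related to invariant risk minimization (IRM): penalize any representation for which the optimal predictor differs across training environments. The paper's actual objective is not given here, so the following is only a minimal illustrative sketch of an IRM-style penalty (per-environment risk plus the squared gradient of the risk with respect to a scalar dummy classifier fixed at 1); all function names are hypothetical.

```python
def env_risk_grad(phi_x, y):
    """Gradient of the squared-error risk w.r.t. a scalar dummy
    classifier w, evaluated at w = 1, for a 1-D representation phi_x."""
    n = len(phi_x)
    return 2.0 / n * sum((p - t) * p for p, t in zip(phi_x, y))

def irm_objective(envs, lam=1.0):
    """Sum over environments of (empirical risk + lam * squared
    gradient penalty). envs is a list of (phi_x, y) pairs, one per
    training environment."""
    total = 0.0
    for phi_x, y in envs:
        n = len(phi_x)
        risk = sum((p - t) ** 2 for p, t in zip(phi_x, y)) / n
        total += risk + lam * env_risk_grad(phi_x, y) ** 2
    return total

# A feature that equals the target in every environment (an invariant
# predictor) incurs zero risk and zero penalty:
print(irm_objective([([1.0, 2.0], [1.0, 2.0])]))

# A spurious feature whose relationship to the target flips sign in a
# second environment incurs both risk and a large gradient penalty:
print(irm_objective([([1.0, 2.0], [1.0, 2.0]),
                     ([-1.0, -2.0], [1.0, 2.0])]))
```

The penalty term is what distinguishes this from ordinary empirical risk minimization: a predictor that exploits an environment-specific correlation can achieve low risk in that environment, but its per-environment optimal classifier drifts away from the shared one, which the gradient term detects.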
