In-Person Poster presentation / top 25% paper

Multifactor Sequential Disentanglement via Structured Koopman Autoencoders

Nimrod Berman · Ilan Naiman · Omri Azencot

MH1-2-3-4 #41

Keywords: [ Deep Learning and representational learning ] [ Sequential Disentanglement ] [ Koopman methods ]

Poster session: Tue 2 May 7:30 a.m. PDT — 9:30 a.m. PDT
Oral presentation: Oral 4 Track 6: Deep Learning and representational learning / Reinforcement Learning, Tue 2 May 6 a.m. PDT — 7:30 a.m. PDT


Disentangling complex data into its latent factors of variation is a fundamental task in representation learning. Existing work on sequential disentanglement mostly provides two-factor representations, i.e., it separates the data into time-varying and time-invariant factors. In contrast, we consider multifactor disentanglement, in which multiple (more than two) semantically disentangled components are generated. Key to our approach is a strong inductive bias: we assume that the underlying dynamics can be represented linearly in the latent space. Under this assumption, it becomes natural to exploit the recently introduced Koopman autoencoder models. However, disentangled representations are not guaranteed in Koopman approaches, and thus we propose a novel spectral loss term which leads to structured Koopman matrices and disentanglement. Overall, we propose a simple and easy-to-code new deep model that is fully unsupervised and supports multifactor disentanglement. We showcase new disentangling abilities such as swapping individual static factors between characters, and an incremental swap of disentangled factors from the source to the target. Moreover, we evaluate our method extensively on standard two-factor benchmark tasks, where we significantly improve over competing unsupervised approaches and perform competitively against weakly- and self-supervised state-of-the-art approaches. The code is available at
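The core idea of the abstract — fit a linear (Koopman) operator to latent trajectories and penalize its spectrum so that eigenvalues split into a time-invariant group (near 1) and a time-varying group — can be illustrated with a minimal numpy sketch. This is an assumption-laden toy, not the authors' model: the encoder is omitted, the Koopman matrix is fit by least squares, and `spectral_penalty` is a hypothetical stand-in for the paper's structured spectral loss.

```python
import numpy as np

def koopman_operator(Z):
    """Least-squares Koopman matrix K such that Z[t+1] ~= Z[t] @ K.
    Z: (T, d) array of latent states (from an encoder, not shown here)."""
    X, Y = Z[:-1], Z[1:]
    K, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return K

def spectral_penalty(K, n_static):
    """Hypothetical structured-spectrum loss: pull the `n_static`
    eigenvalues closest to 1 exactly onto 1 (time-invariant factors)
    and keep the remaining eigenvalues inside the unit disk
    (time-varying factors)."""
    eig = np.linalg.eigvals(K)
    order = np.argsort(np.abs(eig - 1.0))          # closest to 1 first
    static, dynamic = eig[order[:n_static]], eig[order[n_static:]]
    loss_static = np.sum(np.abs(static - 1.0) ** 2)
    loss_dynamic = np.sum(np.maximum(np.abs(dynamic) - 1.0, 0.0) ** 2)
    return loss_static + loss_dynamic

# Toy latent sequence generated by exactly linear dynamics:
# one static mode (eigenvalue 1.0) and one decaying dynamic mode (0.8).
K_true = np.diag([1.0, 0.8])
Z = [np.array([1.0, 1.0])]
for _ in range(9):
    Z.append(Z[-1] @ K_true)
Z = np.stack(Z)

K = koopman_operator(Z)          # recovers K_true up to numerics
loss = spectral_penalty(K, n_static=1)   # ~0: spectrum already structured
```

In the paper's setting the penalty would be added to the autoencoder's reconstruction loss and backpropagated through the encoder; here it only scores a fixed operator.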
