

In-Person Poster presentation / poster accept

Bridging the Gap to Real-World Object-Centric Learning

Maximilian Seitzer · Max Horn · Andrii Zadaianchuk · Dominik Zietlow · Tianjun Xiao · Carl-Johann Simon-Gabriel · Tong He · Zheng Zhang · Bernhard Schoelkopf · Thomas Brox · Francesco Locatello

MH1-2-3-4 #72

Keywords: [ Deep Learning and representational learning ] [ object-centric learning ] [ self-supervised learning ] [ object discovery ] [ unsupervised learning ] [ vision transformer ]


Abstract:

Humans naturally decompose their environment into entities at the appropriate level of abstraction to act in the world. Allowing machine learning algorithms to derive this decomposition in an unsupervised way has become an important line of research. However, current methods are restricted to simulated data or require additional information in the form of motion or depth in order to successfully discover objects. In this work, we overcome this limitation by showing that reconstructing features from models trained in a self-supervised manner is a sufficient training signal for object-centric representations to arise in a fully unsupervised way. Our approach, DINOSAUR, significantly outperforms existing object-centric learning models on simulated data and is the first unsupervised object-centric model that scales to real-world datasets such as COCO and PASCAL VOC. DINOSAUR is conceptually simple and shows competitive performance compared to more involved pipelines from the computer vision literature.
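The abstract's central idea is that a slot-based grouping module can be trained purely by reconstructing the features of a frozen self-supervised encoder (such as DINO) instead of raw pixels. The sketch below illustrates that training signal; it is not the authors' implementation. The module names, dimensions (768-dim ViT-B/16 patch features, 196 patches, 7 slots), and the simplified Slot-Attention-style grouping and MLP decoder are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code) of feature reconstruction
# as a training signal for object-centric learning.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlotGrouping(nn.Module):
    """Simplified Slot-Attention-style grouping over patch features."""

    def __init__(self, num_slots=7, dim=768, iters=3):
        super().__init__()
        self.num_slots, self.iters, self.scale = num_slots, iters, dim ** -0.5
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.slots_logsigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim)
        self.norm_in = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)

    def forward(self, feats):                       # feats: (B, N, D)
        b, _, d = feats.shape
        feats = self.norm_in(feats)
        k, v = self.to_k(feats), self.to_v(feats)
        # Randomly initialized slots break symmetry between slots.
        noise = torch.randn(b, self.num_slots, d, device=feats.device)
        slots = self.slots_mu + self.slots_logsigma.exp() * noise
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            # Slots compete for each input patch (softmax over slots).
            attn = F.softmax(torch.einsum('bnd,bkd->bnk', k, q) * self.scale, dim=-1)
            attn = attn / (attn.sum(dim=1, keepdim=True) + 1e-8)
            updates = torch.einsum('bnk,bnd->bkd', attn, v)
            slots = self.gru(updates.reshape(-1, d),
                             slots.reshape(-1, d)).view_as(slots)
        return slots                                 # (B, K, D)


class FeatureDecoder(nn.Module):
    """Spatial-broadcast MLP decoder: each slot predicts all patch features
    plus an alpha logit; alphas are softmaxed across slots to mix predictions."""

    def __init__(self, num_patches=196, dim=768, hidden=2048):
        super().__init__()
        self.pos = nn.Parameter(torch.randn(1, 1, num_patches, dim) * 0.02)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim + 1))

    def forward(self, slots):                        # slots: (B, K, D)
        x = slots.unsqueeze(2) + self.pos            # (B, K, N, D)
        out = self.mlp(x)
        recon, alpha = out[..., :-1], out[..., -1:]
        masks = F.softmax(alpha, dim=1)              # compete over slots
        return (recon * masks).sum(dim=1), masks     # (B, N, D), (B, K, N, 1)


def feature_reconstruction_loss(frozen_vit, grouping, decoder, images):
    """Training signal: MSE between decoded features and the frozen
    self-supervised ViT's patch features (e.g. DINO). Only the grouping
    module and decoder receive gradients; the encoder stays frozen."""
    with torch.no_grad():
        targets = frozen_vit(images)                 # assumed to return (B, N, D)
    slots = grouping(targets)
    recon, _ = decoder(slots)
    return F.mse_loss(recon, targets)
```

The design choice this sketch highlights is that the reconstruction target is the frozen encoder's features rather than pixels; per the abstract, this feature-reconstruction signal is what allows object-centric grouping to emerge on real-world images without motion or depth supervision.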
