

Virtual presentation / poster accept

Learning What and Where: Disentangling Location and Identity Tracking Without Supervision

Manuel Traub · Sebastian Otte · Tobias Menge · Matthias Karlbauer · Jannik Thuemmel · Martin V. Butz

Keywords: [ Unsupervised Learning ] [ Object Permanence ] [ Binding Problem ] [ CATER ] [ Unsupervised and Self-Supervised Learning ]


Abstract:

Our brain can almost effortlessly decompose visual data streams into background and salient objects. Moreover, it can anticipate object motion and interactions, which are crucial abilities for conceptual planning and reasoning. Recent object reasoning datasets, such as CATER, have revealed fundamental shortcomings of current vision-based AI systems, particularly when targeting explicit object representations, object permanence, and object reasoning. Here we introduce a self-supervised LOCation and Identity tracking system (Loci), which excels on the CATER tracking challenge. Inspired by the dorsal and ventral pathways in the brain, Loci tackles the binding problem by processing separate, slot-wise encodings of 'what' and 'where'. Loci's predictive coding-like processing encourages active error minimization, such that individual slots tend to encode individual objects. Interactions between objects and object dynamics are processed in the disentangled latent space. Truncated backpropagation through time combined with forward eligibility accumulation significantly speeds up learning and improves memory efficiency. Besides exhibiting superior performance on current benchmarks, Loci effectively extracts objects from video streams and separates them into location and Gestalt components. We believe that this separation offers a representation that will facilitate effective planning and reasoning on conceptual levels.
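To make the 'what'/'where' disentanglement concrete, the sketch below shows a minimal slot-wise encoder that maps a frame to separate per-slot identity (Gestalt) and location codes, in the spirit of the abstract. This is not the authors' Loci implementation; the module names, dimensions, and the use of PyTorch are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch): per-slot 'what' (Gestalt) and 'where' (location) codes.
# All names, sizes, and architecture choices are hypothetical, not the official Loci code.
import torch
import torch.nn as nn


class SlotWhatWhereEncoder(nn.Module):
    def __init__(self, in_channels=3, num_slots=8, gestalt_dim=64, location_dim=3):
        super().__init__()
        self.num_slots = num_slots
        self.gestalt_dim = gestalt_dim
        self.location_dim = location_dim
        # Shared convolutional backbone applied to the input frame.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Separate heads produce the disentangled codes for every slot.
        self.what_head = nn.Linear(64, num_slots * gestalt_dim)    # identity / Gestalt codes
        self.where_head = nn.Linear(64, num_slots * location_dim)  # e.g. (x, y, scale)

    def forward(self, frame):
        feats = self.backbone(frame)  # (B, 64)
        what = self.what_head(feats).view(-1, self.num_slots, self.gestalt_dim)
        where = self.where_head(feats).view(-1, self.num_slots, self.location_dim)
        return what, where  # per-slot 'what' and 'where' encodings


# Usage: encode a batch of two 128x128 RGB frames into per-slot codes.
encoder = SlotWhatWhereEncoder()
what, where = encoder(torch.randn(2, 3, 128, 128))
print(what.shape, where.shape)  # torch.Size([2, 8, 64]) torch.Size([2, 8, 3])
```

In Loci, such slot-wise codes would additionally feed a predictive, error-minimizing loop over time; the sketch only illustrates the separation of identity and location per slot.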
