

Poster

PooDLe🐩: Pooled and dense self-supervised learning from naturalistic videos

Alex N. Wang · Christopher Hoang · Yuwen Xiong · Yann LeCun · Mengye Ren

Hall 3 + Hall 2B #336
[ Project Page ]
Thu 24 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

Self-supervised learning has driven significant progress in learning from single-subject, iconic images. However, there are still unanswered questions about the use of minimally-curated, naturalistic video data, which contain dense scenes with many independent objects, imbalanced class distributions, and varying object sizes. In this paper, we propose PooDLe, a self-supervised learning method that combines an invariance-based objective on pooled representations with a dense SSL objective that enforces equivariance to optical flow warping. Our results show that a unified objective applied at multiple feature scales is essential for learning effective image representations from naturalistic videos. We validate our method with experiments on the BDD100K driving video dataset and the Walking Tours first-person video dataset, demonstrating its ability to capture spatial understanding from a dense objective and semantic understanding via a pooled representation objective.
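The following is a minimal sketch, not the authors' implementation, of how a pooled invariance loss can be combined with a dense, flow-warped equivariance loss on two video frames. The encoder, the simple cosine-similarity losses, and helper names such as `warp_features`, `pooled_loss`, and `ssl_step` are illustrative assumptions; PooDLe's actual objectives, projection heads, and multi-scale design are described in the paper.

```python
# Hypothetical sketch of a pooled + dense SSL objective on a frame pair.
# Assumes a generic PyTorch encoder returning a (B, C, H, W) feature map
# and optical flow given in pixels at feature resolution.
import torch
import torch.nn.functional as F


def warp_features(feat, flow):
    """Warp a feature map (B, C, H, W) with optical flow (B, 2, H, W) in pixels."""
    B, _, H, W = feat.shape
    # Base sampling grid of pixel coordinates, in (x, y) order.
    ys, xs = torch.meshgrid(
        torch.arange(H, device=feat.device, dtype=feat.dtype),
        torch.arange(W, device=feat.device, dtype=feat.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0) + flow.permute(0, 2, 3, 1)
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    gx = 2.0 * grid[..., 0] / max(W - 1, 1) - 1.0
    gy = 2.0 * grid[..., 1] / max(H - 1, 1) - 1.0
    return F.grid_sample(feat, torch.stack((gx, gy), dim=-1), align_corners=True)


def pooled_loss(f1, f2):
    """Invariance objective on globally pooled representations (cosine similarity)."""
    z1 = F.normalize(f1.mean(dim=(2, 3)), dim=1)
    z2 = F.normalize(f2.mean(dim=(2, 3)), dim=1)
    return -(z1 * z2).sum(dim=1).mean()


def dense_loss(f1, f2_warped):
    """Equivariance objective: flow-warped features of frame 2 should match frame 1."""
    p1 = F.normalize(f1, dim=1)
    p2 = F.normalize(f2_warped, dim=1)
    return -(p1 * p2).sum(dim=1).mean()


def ssl_step(encoder, frame1, frame2, flow_2to1, dense_weight=1.0):
    """One combined loss evaluation for a pair of frames from the same clip."""
    f1 = encoder(frame1)                      # dense features for frame 1
    f2 = encoder(frame2)                      # dense features for frame 2
    f2_warped = warp_features(f2, flow_2to1)  # align frame 2 features to frame 1
    return pooled_loss(f1, f2) + dense_weight * dense_loss(f1, f2_warped)
```

In this sketch the pooled term encourages the two frames' global representations to agree regardless of motion, while the dense term only compares spatially corresponding locations after flow warping; the paper applies its unified objective at multiple feature scales rather than the single scale shown here.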
