

Poster

DINOv2: Learning Robust Visual Features without Supervision

Pierre Fernandez · Piotr Bojanowski · Gabriel Synnaeve · Marc Szafraniec · Maxime Oquab · Armand Joulin · Hu Xu · Wojciech Galuba · Vasu Sharma · Timothée Darcet · Michael Rabbat · Russell Howes · Ishan Misra · Shang-Wen Li · Mahmoud Assran · Alaaeldin Ali · Hervé Jégou · Po-Yao Huang · Nicolas Ballas · Théo Moutakanni · Huy Vo · Vasil Khalidov · Daniel Haziza · Francisco Massa · Patrick Labatut · Julien Mairal

Hall 3 + Hall 2B #325
[ Project Page ]
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of the uncurated data typically used in the self-supervised literature. In terms of models, we train a ViT model with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP, on most benchmarks at both the image and pixel levels.
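Since the distilled backbones are meant to be used as frozen, all-purpose feature extractors, a minimal usage sketch follows. It assumes the torch.hub entry points published in the facebookresearch/dinov2 repository (e.g. `dinov2_vits14`); the image path and preprocessing parameters are illustrative placeholders.

```python
import torch
from torchvision import transforms
from PIL import Image

# Load a distilled DINOv2 backbone (ViT-S/14) via torch.hub.
# Other published variants: dinov2_vitb14, dinov2_vitl14, dinov2_vitg14.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

# Standard ImageNet normalization; crop size 224 is a multiple of the
# model's 14-pixel patch size.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406),
                         std=(0.229, 0.224, 0.225)),
])

# "example.jpg" is a placeholder path.
image = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    features = model(image)  # global image feature, shape (1, 384) for ViT-S/14

print(features.shape)
```

The frozen features returned here can be fed directly to a linear probe or k-NN classifier, which is how all-purpose features of this kind are typically evaluated without finetuning.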
