

Poster

EffoVPR: Effective Foundation Model Utilization for Visual Place Recognition

Issar Tzachor · Boaz Lerner · Matan Levy · Michael Green · Tal Berkovitz Shalev · Gavriel Habib · Dvir Samuel · Noam Zailer · Or Shimshi · Nir Darshan · Rami Ben-Ari

Hall 3 + Hall 2B #96
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

The task of Visual Place Recognition (VPR) is to predict the location of a query image from a database of geo-tagged images. Recent studies in VPR have highlighted the significant advantage of employing pre-trained foundation models like DINOv2 for the VPR task. However, these models are often deemed inadequate for VPR without further fine-tuning on VPR-specific data. In this paper, we present an effective approach to harness the potential of a foundation model for VPR. We show that features extracted from self-attention layers can act as a powerful re-ranker for VPR, even in a zero-shot setting. Our method not only outperforms previous zero-shot approaches but also achieves results competitive with several supervised methods. We then show that a single-stage approach utilizing internal ViT layers for pooling can produce global features that achieve state-of-the-art performance, with impressive feature compactness down to 128D. Moreover, integrating our local foundation features for re-ranking further widens this performance gap. Our method also demonstrates exceptional robustness and generalization, setting new state-of-the-art performance while handling challenging conditions such as occlusion, day-night transitions, and seasonal variations.
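The abstract describes a retrieve-then-re-rank pipeline built on a frozen foundation model: compact global descriptors pooled from an internal ViT layer for the initial retrieval, followed by re-ranking with local features taken from the self-attention layers. The snippet below is a minimal sketch of that kind of pipeline using the publicly released DINOv2 torch.hub model; the choice of layer, the use of the class token as the global descriptor, and the mutual-nearest-neighbour patch-matching score are illustrative assumptions, not the authors' EffoVPR implementation.

```python
# Sketch of a two-stage VPR pipeline with frozen DINOv2 features.
# Assumptions (not from the paper): last-layer class token as global descriptor,
# last-layer patch tokens as local features, mutual-NN match count for re-ranking.
import torch
import torch.nn.functional as F

# Frozen DINOv2 ViT-B/14 backbone from Meta AI's torch.hub entry point.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
model.eval()


@torch.no_grad()
def global_descriptor(img):
    """L2-normalized class token from an internal layer; img is (B, 3, H, W)
    with H and W multiples of 14."""
    patch_tokens, cls_tok = model.get_intermediate_layers(
        img, n=1, return_class_token=True, norm=True
    )[0]
    return F.normalize(cls_tok, dim=-1)          # (B, D)


@torch.no_grad()
def local_features(img):
    """L2-normalized patch tokens used for re-ranking."""
    patch_tokens, _ = model.get_intermediate_layers(
        img, n=1, return_class_token=True, norm=True
    )[0]
    return F.normalize(patch_tokens, dim=-1)      # (B, N_patches, D)


def rerank_score(q_local, db_local):
    """Count mutual nearest-neighbour patch matches between a query image
    (q_local: (Nq, D)) and a database image (db_local: (Ndb, D))."""
    sim = q_local @ db_local.T                    # cosine similarities (Nq, Ndb)
    fwd = sim.argmax(dim=1)                       # best db patch per query patch
    bwd = sim.argmax(dim=0)                       # best query patch per db patch
    mutual = bwd[fwd] == torch.arange(sim.shape[0])
    return int(mutual.sum())


# Usage: rank the database by global cosine similarity, then re-rank the
# top-k shortlist with the patch-level score.
# q_g  = global_descriptor(query_img)             # (1, D)
# db_g = global_descriptor(db_imgs)               # (M, D)
# top_k = (q_g @ db_g.T).squeeze(0).topk(10).indices
# scores = [rerank_score(local_features(query_img)[0],
#                        local_features(db_imgs[i:i + 1])[0]) for i in top_k]
```

In this sketch the global descriptor is the backbone's native width (768-D for ViT-B/14); reaching the 128-D compactness mentioned in the abstract would require an additional projection (e.g. PCA or a learned linear layer), which is omitted here.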
