

Poster

MamBEV: Enabling State Space Models to Learn Birds-Eye-View Representations

Hongyu Ke · Jack Morris · Kentaro Oguchi · Xiaofei Cao · Yongkang Liu · Haoxin Wang · Yi Ding

Hall 3 + Hall 2B #86
Fri 25 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

3D visual perception tasks, such as 3D detection from multi-camera images, are essential components of autonomous driving and assistance systems. However, designing computationally efficient methods remains a significant challenge. In this paper, we propose a Mamba-based framework called MamBEV, which learns unified Bird's Eye View (BEV) representations using linear spatio-temporal SSM-based attention. This approach supports multiple 3D perception tasks with significantly improved computational and memory efficiency. Furthermore, we introduce SSM-based cross-attention, analogous to standard cross-attention, in which BEV query representations can interact with relevant image features. Extensive experiments demonstrate MamBEV's promising performance across diverse visual perception metrics, highlighting its advantages in input scaling efficiency compared to existing benchmark models.
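To make the SSM-based cross-attention idea concrete, the sketch below shows one hypothetical way BEV queries could read from image features through a linear-time, state-space-style recurrence. This is not the paper's implementation; the module, parameter names (d_model, d_state, num_bev_queries), and the diagonal-gated scan are illustrative assumptions chosen only to mirror the abstract's description of linear spatio-temporal attention between BEV queries and image tokens.

```python
# Hypothetical sketch: BEV queries interacting with image features via a
# linear-time, SSM-style cross-attention. Not the MamBEV implementation.
import torch
import torch.nn as nn


class LinearSSMCrossAttention(nn.Module):
    """BEV queries read from a recurrent state built by scanning image tokens.

    Image tokens are processed sequentially with a simple diagonal recurrence
    h_t = a_t * h_{t-1} + b_t * x_t, so cost is linear in the number of image
    tokens. Each BEV query then projects the accumulated state, loosely
    mirroring query-key-value cross-attention.
    """

    def __init__(self, d_model: int, d_state: int = 64):
        super().__init__()
        self.to_b = nn.Linear(d_model, d_state)        # input gate (B_t)
        self.to_a = nn.Linear(d_model, d_state)        # forget gate (A_t), squashed by sigmoid
        self.to_v = nn.Linear(d_model, d_model)        # value projection of image tokens
        self.query_proj = nn.Linear(d_model, d_state)  # BEV query -> state readout weights
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, bev_queries: torch.Tensor, img_tokens: torch.Tensor) -> torch.Tensor:
        # bev_queries: (B, Q, d_model), img_tokens: (B, T, d_model)
        B, T, _ = img_tokens.shape
        a = torch.sigmoid(self.to_a(img_tokens))       # (B, T, d_state), decay in (0, 1)
        b = self.to_b(img_tokens)                      # (B, T, d_state)
        v = self.to_v(img_tokens)                      # (B, T, d_model)

        # Sequential scan over image tokens: the state is an outer-product
        # memory of shape (B, d_state, d_model), updated in O(T).
        state = img_tokens.new_zeros(B, a.shape[-1], v.shape[-1])
        for t in range(T):
            state = (a[:, t].unsqueeze(-1) * state
                     + b[:, t].unsqueeze(-1) * v[:, t].unsqueeze(1))

        # Each BEV query reads the accumulated state with its own projection,
        # the cross-attention analogue of query x (key, value) interaction.
        q = self.query_proj(bev_queries)               # (B, Q, d_state)
        out = torch.einsum("bqs,bsd->bqd", q, state)   # (B, Q, d_model)
        return self.out_proj(out)


if __name__ == "__main__":
    layer = LinearSSMCrossAttention(d_model=256)
    bev = torch.randn(2, 200, 256)     # e.g. 200 BEV queries
    feats = torch.randn(2, 1000, 256)  # flattened multi-camera image tokens
    print(layer(bev, feats).shape)     # torch.Size([2, 200, 256])
```

In practice an efficient parallel scan would replace the Python loop, but the loop keeps the linear-in-tokens structure of the recurrence explicit.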
