

Poster

3D Vision-Language Gaussian Splatting

Qucheng Peng · Benjamin Planche · Zhongpai Gao · Meng Zheng · Anwesa Choudhuri · Terrence Chen · Chen Chen · Ziyan Wu

Hall 3 + Hall 2B #104
Sat 26 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

Recent advancements in 3D reconstruction methods and vision-language models have propelled the development of multi-modal 3D scene understanding, which has vital applications in robotics, autonomous driving, and virtual/augmented reality. However, current multi-modal scene understanding approaches naively embed semantic representations into 3D reconstruction methods without striking a balance between the visual and language modalities, which leads to unsatisfactory semantic rasterization of translucent or reflective objects, as well as over-fitting to the color modality. To alleviate these limitations, we propose a solution that adequately handles the distinct visual and semantic modalities: a 3D vision-language Gaussian splatting model for scene understanding that emphasizes representation learning of the language modality. We propose a novel cross-modal rasterizer that combines modality fusion with a smoothed semantic indicator to enhance semantic rasterization. We also employ a camera-view blending technique to improve semantic consistency between existing and synthesized views, thereby effectively mitigating over-fitting. Extensive experiments demonstrate that our method achieves state-of-the-art performance in open-vocabulary semantic segmentation, surpassing existing methods by a significant margin.
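
The abstract does not detail the mechanics, but a minimal sketch of one named ingredient, a per-Gaussian smoothed semantic indicator that gates semantic features before rasterization, might look as follows. All names, shapes, and the sigmoid gating form are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class SmoothedSemanticGate(nn.Module):
    """Illustrative gate: each Gaussian carries a raw indicator logit that is
    squashed to (0, 1) and used to weight its semantic feature before it is
    alpha-composited by the rasterizer (hypothetical names and shapes)."""

    def __init__(self, num_gaussians: int, sem_dim: int):
        super().__init__()
        # One learnable indicator logit per Gaussian.
        self.indicator_logit = nn.Parameter(torch.zeros(num_gaussians))
        # Per-Gaussian semantic embedding (e.g., distilled from a VLM).
        self.sem_feat = nn.Parameter(torch.randn(num_gaussians, sem_dim) * 0.01)

    def forward(self) -> torch.Tensor:
        # The sigmoid provides the "smoothing": the weight varies continuously
        # in (0, 1), so gradients remain informative even for Gaussians that
        # contribute little color (e.g., translucent or reflective surfaces).
        gate = torch.sigmoid(self.indicator_logit).unsqueeze(-1)  # (N, 1)
        return gate * self.sem_feat                               # (N, sem_dim)

# Toy usage: the gated semantic features would be rasterized alongside colors.
gate_module = SmoothedSemanticGate(num_gaussians=4, sem_dim=8)
gated = gate_module()
print(gated.shape)  # torch.Size([4, 8])
```

In such a formulation, the gated semantic features would be composited per pixel in the same way as colors, only with more channels; the actual fusion and rasterization in the paper may differ.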
