

Poster

CityAnchor: City-scale 3D Visual Grounding with Multi-modality LLMs

Jinpeng Li · Haiping Wang · Jiabin Chen · Yuan Liu · Zhiyang Dou · Yuexin Ma · Sibei Yang · Yuan Li · Wenping Wang · Zhen Dong · Bisheng Yang

Hall 3 + Hall 2B #83
[ Project Page ]
Wed 23 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

In this paper, we present CityAnchor, a 3D visual grounding method for localizing an urban object in a city-scale point cloud. Recent advances in multi-view reconstruction make it possible to reconstruct city-scale point clouds, but how to perform visual grounding on such large-scale urban point clouds remains an open problem. Previous 3D visual grounding systems mainly concentrate on localizing an object in an image or a small-scale point cloud, and are neither accurate nor efficient enough to scale up to a city-scale point cloud. We address this problem with a multi-modality LLM that operates in two stages: coarse localization and fine-grained matching. Given a text description, the coarse localization stage identifies candidate regions on a 2D map projected from the point cloud, and the fine-grained matching stage then determines the best-matching object within these regions. We conduct experiments on the CityRefer dataset and a new synthetic dataset annotated by us, both of which demonstrate that our method produces accurate 3D visual grounding on city-scale 3D point clouds.
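For illustration only, the two-stage pipeline described in the abstract could be organized as in the sketch below. All names here (project_to_bev, CoarseLocalizer, FineMatcher) are hypothetical placeholders standing in for the paper's components, not CityAnchor's released code, and the region scoring and object matching are stubbed where the multi-modality LLM would operate.

# Minimal sketch of a two-stage city-scale grounding pipeline (assumed structure).
import numpy as np

def project_to_bev(points, cell_size=1.0):
    """Project a city-scale point cloud (N, 3) onto a 2D bird's-eye-view
    occupancy map, as used by the coarse localization stage."""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    idx = ((xy - mins) / cell_size).astype(int)
    h, w = idx.max(axis=0) + 1
    bev = np.zeros((h, w), dtype=np.float32)
    bev[idx[:, 0], idx[:, 1]] = 1.0
    return bev

class CoarseLocalizer:
    """Stage 1: score map regions against the text query and keep the top-k."""
    def __call__(self, bev_map, query, k=5):
        # Placeholder: a multi-modality LLM would rank candidate regions here.
        ys, xs = np.nonzero(bev_map)
        picks = np.random.permutation(len(ys))[:k]
        return list(zip(ys[picks], xs[picks]))

class FineMatcher:
    """Stage 2: compare candidate objects inside each region with the query."""
    def __call__(self, candidates, query):
        # Placeholder: the LLM would select the best-matching object here.
        return candidates[0]

def ground(points, query):
    bev = project_to_bev(points)
    candidates = CoarseLocalizer()(bev, query)
    return FineMatcher()(candidates, query)

if __name__ == "__main__":
    pts = np.random.rand(10000, 3) * 500.0  # synthetic stand-in for a city point cloud
    print(ground(pts, "the red kiosk next to the fountain"))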
