

Virtual presentation / poster accept

CO3: Cooperative Unsupervised 3D Representation Learning for Autonomous Driving

Runjian Chen · Yao Mu · Runsen Xu · Wenqi Shao · Chenhan Jiang · Hang Xu · Yu Qiao · Zhenguo Li · Ping Luo

Keywords: [ Contextual Shape Prediction ] [ autonomous driving ] [ unsupervised representation learning ] [ Cooperative Contrastive Learning ] [ Unsupervised and Self-supervised learning ]


Abstract:

Unsupervised contrastive learning for indoor-scene point clouds has achieved great success. However, unsupervised representation learning on outdoor-scene point clouds remains challenging because previous methods need to reconstruct the whole scene and capture partial views for the contrastive objective, which is infeasible in outdoor scenes with moving objects, obstacles, and sensors. In this paper, we propose CO3, namely Cooperative Contrastive Learning and Contextual Shape Prediction, to learn 3D representations for outdoor-scene point clouds in an unsupervised manner. CO3 has several merits compared to existing methods. (1) It utilizes LiDAR point clouds from the vehicle side and the infrastructure side to build views that differ sufficiently while retaining common semantic information, which makes them more appropriate for contrastive learning than the views built by previous methods. (2) Alongside the contrastive objective, we propose contextual shape prediction to bring more task-relevant information into unsupervised 3D point cloud representation learning, and we provide a theoretical analysis of this pre-training goal. (3) Compared to previous methods, the representation learned by CO3 can be transferred to different outdoor-scene datasets collected with different types of LiDAR sensors. (4) CO3 improves current state-of-the-art methods on the ONCE, KITTI, and nuScenes datasets by up to 2.58 mAP in 3D object detection and 3.54 mIoU in LiDAR semantic segmentation. Codes and models will be released.
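To make the cooperative contrastive objective concrete, the following is a minimal sketch of an InfoNCE-style point-level contrastive loss between features from the two views. It assumes the vehicle-side and infrastructure-side point features have already been spatially aligned so that the i-th entries form a positive pair; the function name, the temperature value, and the pairing scheme are illustrative assumptions, not the paper's exact formulation (CO3's alignment, sampling, and shape-prediction terms are described in the paper itself).

```python
import math

def cooperative_contrastive_loss(vehicle_feats, infra_feats, temperature=0.1):
    """InfoNCE-style sketch: the i-th vehicle-side feature should be most
    similar to the i-th infrastructure-side feature (assumed pre-aligned);
    all other pairs in the batch act as negatives."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def norm(a):
        return math.sqrt(dot(a, a)) or 1.0

    # Temperature-scaled cosine-similarity matrix between the two views.
    sims = [[dot(v, u) / (norm(v) * norm(u)) / temperature
             for u in infra_feats] for v in vehicle_feats]

    # -log softmax of the positive (diagonal) pair, averaged over points.
    loss = 0.0
    for k, row in enumerate(sims):
        log_denom = math.log(sum(math.exp(s) for s in row))
        loss += log_denom - row[k]
    return loss / len(sims)
```

With aligned features the loss is near zero, while mismatched correspondences drive it up, which is what pushes the encoder to preserve semantics shared by the two viewpoints.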
