

Poster

Real2Code: Reconstruct Articulated Objects via Code Generation

Mandi Zhao · Yijia Weng · Dominik Bauer · Shuran Song

Hall 3 + Hall 2B #550
Thu 24 Apr midnight PDT — 2:30 a.m. PDT

Abstract:

We present Real2Code, a novel approach to reconstructing articulated objects via code generation. Given visual observations of an object, we first reconstruct its part geometry using image segmentation and shape completion. We represent these object parts with oriented bounding boxes, from which a fine-tuned large language model (LLM) predicts joint articulation as code. By leveraging pre-trained vision and language models, our approach scales elegantly with the number of articulated parts and generalizes from synthetic training data to real-world objects in unstructured environments. Experimental results demonstrate that Real2Code significantly outperforms the previous state of the art in reconstruction accuracy, and is the first approach to extrapolate beyond the structural complexity of objects in the training set, as we show for objects with up to 10 articulated parts. When combined with a stereo reconstruction model, Real2Code moreover generalizes to real-world objects given only a handful of multi-view RGB images, without the need for depth or camera information.
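To make the "articulation as code" idea concrete, below is a minimal Python sketch of the kind of program such an LLM might emit, assuming each part is summarized by an oriented bounding box. All names here (OrientedBBox, Joint, make_revolute, predict_articulation) are hypothetical illustrations, not the authors' released API.

    from dataclasses import dataclass
    import numpy as np

    # Hypothetical data structures; the released Real2Code code may differ.

    @dataclass
    class OrientedBBox:
        center: np.ndarray        # (3,) box center in the world frame
        rotation: np.ndarray      # (3, 3) box orientation, columns = box axes
        half_extents: np.ndarray  # (3,) half side lengths along each box axis

    @dataclass
    class Joint:
        child: int        # index of the moving part
        joint_type: str   # "revolute" or "prismatic"
        axis: np.ndarray  # (3,) joint axis in the world frame
        origin: np.ndarray  # (3,) a point the axis passes through

    def make_revolute(child: int, bbox: OrientedBBox,
                      axis_idx: int, edge_sign: int) -> Joint:
        """Place a hinge along one edge of the child part's bounding box.

        axis_idx selects which box axis the hinge is parallel to;
        edge_sign (+1 or -1) selects which side of the box the hinge sits on.
        """
        axis = bbox.rotation[:, axis_idx]
        # Offset the pivot to the selected box face along another box axis.
        offset_idx = (axis_idx + 1) % 3
        origin = (bbox.center
                  + edge_sign * bbox.half_extents[offset_idx]
                  * bbox.rotation[:, offset_idx])
        return Joint(child=child, joint_type="revolute",
                     axis=axis, origin=origin)

    # Example output for a two-door cabinet: hinges on opposite edges.
    def predict_articulation(bboxes: list[OrientedBBox]) -> list[Joint]:
        return [
            make_revolute(child=1, bbox=bboxes[1], axis_idx=2, edge_sign=-1),
            make_revolute(child=2, bbox=bboxes[2], axis_idx=2, edge_sign=+1),
        ]

    if __name__ == "__main__":
        # Dummy cabinet: a base (part 0) and two thin door boxes (parts 1, 2).
        I = np.eye(3)
        boxes = [
            OrientedBBox(np.zeros(3), I, np.array([0.5, 0.3, 0.8])),
            OrientedBBox(np.array([-0.25, 0.32, 0.0]), I, np.array([0.25, 0.02, 0.8])),
            OrientedBBox(np.array([0.25, 0.32, 0.0]), I, np.array([0.25, 0.02, 0.8])),
        ]
        for j in predict_articulation(boxes):
            print(j.joint_type, "axis:", j.axis, "origin:", j.origin)

Pinning the joint axis to a bounding-box edge is one way a language model could specify articulation compactly: it only has to choose a part, a joint type, a box axis, and a side, rather than regress free-form 3D parameters, which mirrors the abstract's motivation for the oriented-bounding-box representation.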
