

Poster

SINGAPO: Single Image Controlled Generation of Articulated Parts in Objects

Jiayi Liu · Denys Iliash · Angel Chang · Manolis Savva · Ali Mahdavi Amiri

Hall 3 + Hall 2B #97
[ Project Page ]
Fri 25 Apr 7 p.m. PDT — 9:30 p.m. PDT

Abstract:

We address the challenge of creating 3D assets for household articulated objects from a single image. Prior work on articulated object creation either requires multi-view, multi-state input or allows only coarse control over the generation process. These limitations hinder the scalability and practicality of articulated object modeling. In this work, we propose a method to generate articulated objects from a single image. Observing the object in a resting state from an arbitrary view, our method generates an articulated object that is visually consistent with the input image. To capture the ambiguity in part shape and motion posed by a single view of the object, we design a diffusion model that learns plausible variations of objects in terms of geometry and kinematics. To tackle the complexity of generating structured data with attributes in multiple domains, we design a pipeline that produces articulated objects from high-level structure to geometric details in a coarse-to-fine manner, using a part connectivity graph and part abstraction as proxies. Our experiments show that our method outperforms the state of the art in articulated object creation by a large margin in terms of generated object realism, resemblance to the input image, and reconstruction quality.
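To make the intermediate representation concrete, here is a minimal Python sketch of a part connectivity graph with per-part abstractions (a coarse box plus joint parameters), the kind of proxy the abstract describes between high-level structure and detailed geometry. All class names, fields, and values below are illustrative assumptions for exposition, not the paper's actual data format.

```python
from dataclasses import dataclass

@dataclass
class PartNode:
    """One node of a part connectivity graph: a part abstracted as a
    coarse box, with the joint that connects it to its parent part."""
    label: str                                        # semantic part label
    bbox_center: tuple[float, float, float]           # box abstraction
    bbox_size: tuple[float, float, float]
    joint_type: str = "fixed"                         # "revolute" | "prismatic" | "fixed"
    joint_axis: tuple[float, float, float] = (0.0, 0.0, 1.0)
    parent: int = -1                                  # index of parent node; -1 = root

# Toy graph for a single-door cabinet: a static base plus one revolute door.
cabinet = [
    PartNode("base", (0.0, 0.0, 0.5), (1.0, 0.6, 1.0)),
    PartNode("door", (0.45, 0.3, 0.5), (0.9, 0.05, 0.95),
             joint_type="revolute", joint_axis=(0.0, 0.0, 1.0), parent=0),
]

for i, node in enumerate(cabinet):
    print(i, node.label, node.joint_type, "parent:", node.parent)
```

In the coarse-to-fine pipeline the abstract outlines, a structure like this would be generated first (graph topology and joint/box attributes), and detailed part geometry would then be synthesized conditioned on each abstraction.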
