

Poster

Disentangling 3D Prototypical Networks for Few-Shot Concept Learning

Mihir Prabhudesai · Shamit Lal · Darshan Patil · Hsiao-Yu Tung · Adam Harley · Katerina Fragkiadaki

Keywords: [ Disentanglement ] [ Few-Shot Learning ] [ 3D Vision ] [ VQA ]


Abstract:

We present neural architectures that disentangle RGB-D images into objects' shapes and styles and a map of the background scene, and explore their applications to few-shot 3D object detection and few-shot concept classification. Our networks incorporate architectural biases that reflect the image formation process, the 3D geometry of the scene, and the interplay between shape and style. They are trained end-to-end, self-supervised by view prediction in static scenes, with additional supervision from a small number of 3D object boxes. Objects and scenes are represented as 3D feature grids in the bottleneck of the network. We show that the proposed 3D neural representations are compositional: they can generate novel 3D scene feature maps by mixing object shape and style codes, then resizing the resulting object 3D feature maps and adding them onto background scene feature maps. We show that object detectors trained on such hallucinated 3D neural scenes generalize better to novel environments. We show that classifiers for object categories, colors, materials, and spatial relationships trained over the disentangled 3D feature sub-spaces generalize better than the current state-of-the-art with dramatically fewer exemplars, and enable a visual question answering system that uses them as its modules to generalize one-shot to novel objects in the scene.
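To make the compositional step concrete, here is a minimal sketch of how mixing a shape code with a style code, decoding to an object 3D feature grid, resizing it to a target 3D box, and adding it onto a background scene grid could look. This is an illustration only, not the authors' code: `ObjectDecoder`, `place_object`, the grid sizes, and the concatenation of shape and style codes are all hypothetical placeholders for whatever the paper's actual architecture uses.

```python
# Hypothetical sketch of compositional 3D scene assembly (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

C, G = 32, 64  # feature channels and scene grid resolution (assumed values)

class ObjectDecoder(nn.Module):
    """Decode a (shape, style) code pair into a small 3D object feature grid."""
    def __init__(self, code_dim=128, channels=C, grid=16):
        super().__init__()
        self.grid = grid
        self.channels = channels
        self.fc = nn.Linear(2 * code_dim, channels * grid ** 3)

    def forward(self, shape_code, style_code):
        # Concatenating the two codes is one simple way to model the
        # shape-style interplay; the paper's actual mechanism may differ.
        z = torch.cat([shape_code, style_code], dim=-1)
        return self.fc(z).view(self.channels, self.grid, self.grid, self.grid)

def place_object(scene, obj_feat, box):
    """Resize an object feature grid to its 3D box and add it to the scene.

    scene:    (C, G, G, G) background scene feature grid
    obj_feat: (C, g, g, g) object feature grid
    box:      (z0, y0, x0, z1, y1, x1) integer corners inside the scene grid
    """
    z0, y0, x0, z1, y1, x1 = box
    size = (z1 - z0, y1 - y0, x1 - x0)
    # Trilinear resampling rescales the object grid to the box extent.
    resized = F.interpolate(obj_feat[None], size=size,
                            mode="trilinear", align_corners=False)[0]
    scene[:, z0:z1, y0:y1, x0:x1] += resized
    return scene

# Usage: hallucinate a novel scene by pairing one object's shape code with
# another object's style code, then compositing over a background grid.
decoder = ObjectDecoder()
shape_a, style_b = torch.randn(128), torch.randn(128)
background = torch.zeros(C, G, G, G)
obj = decoder(shape_a, style_b)
scene = place_object(background, obj, box=(8, 8, 8, 24, 24, 24))
```

Scenes assembled this way can then serve as extra training data for the downstream 3D object detectors and concept classifiers described in the abstract.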
