ICLR 2018


Workshop

3D-Scene-GAN: Three-dimensional Scene Reconstruction with Generative Adversarial Networks

Chong Yu · Yun Wang

East Meeting Level 8 + 15 #14

Three-dimensional (3D) reconstruction is a vital and challenging research topic in advanced computer graphics and computer vision due to its intrinsic complexity and computational cost. Existing methods often produce holes, distortions and obscured parts in the reconstructed 3D models, making them inadequate for practical use. The focus of this paper is to achieve high-quality 3D reconstruction of complicated scenes by adopting a Generative Adversarial Network (GAN). We propose a novel workflow, 3D-Scene-GAN, which can iteratively improve any raw 3D reconstructed model consisting of meshes and textures. 3D-Scene-GAN is a weakly semi-supervised model: it takes only real-time 2D observation images as supervision and does not rely on prior shape models or any reference observations. Through qualitative and quantitative experiments, 3D-Scene-GAN shows compelling advantages over state-of-the-art methods: balanced rank estimation (BRE) scores improve by 30%-100% on the ICL-NUIM dataset and by 36%-190% on the SUN3D dataset, and the mean distance error (MDR) also outperforms other state-of-the-art methods on the benchmarks.
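The abstract describes an adversarial refinement loop in which a generator improves a raw 3D scene and a discriminator compares 2D views of that scene against real observation images. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: the `SceneRefiner`, `ToyRenderer`, and `ViewDiscriminator` modules, their shapes, and the differentiable renderer stand-in are all assumptions made for the example.

```python
# Minimal sketch (assumed, not the paper's code) of adversarial 3D refinement
# supervised only by 2D observation images, as outlined in the abstract.
import torch
import torch.nn as nn

class SceneRefiner(nn.Module):
    """Residual refinement of a coarse scene code (stand-in for mesh/texture updates)."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, raw_scene):
        return raw_scene + self.net(raw_scene)

class ToyRenderer(nn.Module):
    """Hypothetical differentiable renderer: scene code + viewpoint -> 64x64 RGB view."""
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 3, 64 * 64 * 3), nn.Sigmoid())
    def forward(self, scene, viewpoint):
        return self.net(torch.cat([scene, viewpoint], dim=-1)).view(-1, 3, 64, 64)

class ViewDiscriminator(nn.Module):
    """Decides whether a 2D view is a real observation or a rendered one."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(64 * 16 * 16, 1))
    def forward(self, img):
        return self.net(img)

def train_step(refiner, renderer, disc, raw_scene, real_views, viewpoints,
               opt_g, opt_d, bce=nn.BCEWithLogitsLoss()):
    # Discriminator: real 2D observations vs. rendered views of the refined scene.
    fake_views = renderer(refiner(raw_scene), viewpoints).detach()
    d_loss = (bce(disc(real_views), torch.ones(len(real_views), 1)) +
              bce(disc(fake_views), torch.zeros(len(fake_views), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator (refiner + renderer): make rendered views indistinguishable from observations.
    fake_views = renderer(refiner(raw_scene), viewpoints)
    g_loss = bce(disc(fake_views), torch.ones(len(fake_views), 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Iterating `train_step` over successive batches of observation images mirrors, at a toy scale, the iterative improvement of a raw reconstruction described above; in the paper the refined object is an actual mesh-and-texture model rather than a latent code.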
