arXiv Analytics

arXiv:2007.15627 [cs.CV]

Unsupervised Continuous Object Representation Networks for Novel View Synthesis

Nicolai Häni, Selim Engin, Jun-Jee Chao, Volkan Isler

Published 2020-07-30 (Version 1)

Novel View Synthesis (NVS) is concerned with the generation of novel views of a scene from one or more source images. NVS requires explicit reasoning about 3D object structure and unseen parts of the scene. As a result, current approaches rely on supervised training with either 3D models or multiple target images. We present Unsupervised Continuous Object Representation Networks (UniCORN), which encode the geometry and appearance of a 3D scene using a neural 3D representation. Our model is trained with only two source images per object, requiring no ground-truth 3D models or target-view supervision. Despite being unsupervised, UniCORN achieves results comparable to the state of the art on challenging tasks, including novel view synthesis and single-view 3D reconstruction.
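The abstract implies a cross-view training signal: given two source images of the same object, each view can supervise a rendering produced from the other, so no separate target views or ground-truth 3D models are needed. The sketch below illustrates that general idea only; the encoder, renderer, poses, and loss are hypothetical stand-ins written in PyTorch, not the architecture described in the paper.

    import torch
    import torch.nn as nn

    class ImageEncoder(nn.Module):
        # Hypothetical encoder: maps a source image to a latent scene code.
        def __init__(self, latent_dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, latent_dim),
            )
        def forward(self, img):
            return self.net(img)

    class NeuralRenderer(nn.Module):
        # Hypothetical decoder: renders an image from a scene code and a 3x4 camera pose.
        def __init__(self, latent_dim=256, img_size=64):
            super().__init__()
            self.img_size = img_size
            self.net = nn.Sequential(
                nn.Linear(latent_dim + 12, 512), nn.ReLU(),
                nn.Linear(512, 3 * img_size * img_size), nn.Sigmoid(),
            )
        def forward(self, code, pose):
            x = torch.cat([code, pose.flatten(1)], dim=1)
            return self.net(x).view(-1, 3, self.img_size, self.img_size)

    encoder, renderer = ImageEncoder(), NeuralRenderer()
    optim = torch.optim.Adam(list(encoder.parameters()) + list(renderer.parameters()), lr=1e-4)

    # Two source views of the same object with known camera poses (random stand-in data).
    img_a, img_b = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
    pose_a, pose_b = torch.rand(8, 3, 4), torch.rand(8, 3, 4)

    # Cross-view reconstruction: encode one source view, render it from the other view's
    # pose, and compare against that other view, which acts as the supervision signal.
    code_a, code_b = encoder(img_a), encoder(img_b)
    loss = nn.functional.l1_loss(renderer(code_a, pose_b), img_b) \
         + nn.functional.l1_loss(renderer(code_b, pose_a), img_a)
    optim.zero_grad()
    loss.backward()
    optim.step()

Under this kind of objective, inference needs only a single source image and a desired camera pose, which is consistent with the single-view novel view synthesis and reconstruction settings the abstract mentions.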

Related articles:
arXiv:1605.03557 [cs.CV] (Published 2016-05-11)
View Synthesis by Appearance Flow
arXiv:1909.12224 [cs.CV] (Published 2019-09-26)
Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis
arXiv:2007.10618 [cs.CV] (Published 2020-07-21)
Novel View Synthesis on Unpaired Data by Conditional Deformable Variational Auto-Encoder