arXiv Analytics

arXiv:2209.05277 [cs.CV]

StructNeRF: Neural Radiance Fields for Indoor Scenes with Structural Hints

Zheng Chen, Chen Wang, Yuan-Chen Guo, Song-Hai Zhang

Published 2022-09-12, Version 1

Neural Radiance Fields (NeRF) achieve photo-realistic view synthesis with densely captured input images. However, the geometry of NeRF is extremely under-constrained given sparse views, resulting in significant degradation of novel view synthesis quality. Inspired by self-supervised depth estimation methods, we propose StructNeRF, a solution to novel view synthesis for indoor scenes with sparse inputs. StructNeRF leverages the structural hints naturally embedded in multi-view inputs to address NeRF's unconstrained geometry. Specifically, it handles textured and non-textured regions separately: a patch-based multi-view consistent photometric loss is proposed to constrain the geometry of textured regions, while non-textured regions are explicitly restricted to be 3D-consistent planes. Through these dense self-supervised depth constraints, our method improves both the geometry and the view synthesis performance of NeRF without any additional training on external data. Extensive experiments on several real-world datasets demonstrate that StructNeRF surpasses state-of-the-art methods for indoor scenes with sparse inputs, both quantitatively and qualitatively.
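The patch-based multi-view photometric consistency the abstract describes can be sketched as a standard depth-based warp between views: back-project a patch's pixels with the rendered depth, reproject them into a neighboring view, and penalize color differences. The sketch below is an illustrative simplification (function names, NumPy, nearest-neighbour sampling), not the authors' implementation, which would typically use bilinear sampling and an SSIM term:

```python
import numpy as np

def warp_points(uv, depth, K, R, t):
    """Back-project pixels (u, v) with predicted depth into 3D camera space,
    then project them into the target view via relative pose (R, t)."""
    ones = np.ones((uv.shape[0], 1))
    pix = np.concatenate([uv, ones], axis=1)        # (N, 3) homogeneous pixels
    pts = (np.linalg.inv(K) @ pix.T) * depth        # (3, N) source-camera points
    proj = K @ (R @ pts + t[:, None])               # (3, N) in target camera
    return (proj[:2] / proj[2]).T                   # (N, 2) target pixel coords

def patch_photometric_loss(src_patch, tgt_img, uv, depth, K, R, t):
    """Mean absolute color error between a source patch and its warped
    correspondences in the target image (nearest-neighbour sampling)."""
    uv_t = warp_points(uv, depth, K, R, t)
    ij = np.round(uv_t).astype(int)
    h, w = tgt_img.shape[:2]
    valid = (ij[:, 0] >= 0) & (ij[:, 0] < w) & (ij[:, 1] >= 0) & (ij[:, 1] < h)
    if not valid.any():
        return 0.0                                  # patch warps out of frame
    sampled = tgt_img[ij[valid, 1], ij[valid, 0]]
    return float(np.abs(src_patch[valid] - sampled).mean())
```

With a correct depth and pose, a warped patch lands on the same surface in the other view and the loss is near zero; a wrong depth warps the patch onto mismatched colors, which is the self-supervised signal constraining textured regions.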

Related articles:
arXiv:2212.11966 [cs.CV] (Published 2022-12-22)
Removing Objects From Neural Radiance Fields
arXiv:2111.09996 [cs.CV] (Published 2021-11-19, updated 2022-04-26)
LOLNeRF: Learn from One Look
arXiv:2311.01065 [cs.CV] (Published 2023-11-02)
Novel View Synthesis from a Single RGBD Image for Indoor Scenes