arXiv:2008.02268 [cs.CV]

NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections

Ricardo Martin-Brualla, Noha Radwan, Mehdi S. M. Sajjadi, Jonathan T. Barron, Alexey Dosovitskiy, Daniel Duckworth

Published 2020-08-05 (Version 1)

We present a learning-based method for synthesizing novel views of complex outdoor scenes using only unstructured collections of in-the-wild photographs. We build on neural radiance fields (NeRF), which uses the weights of a multilayer perceptron to implicitly model the volumetric density and color of a scene. While NeRF works well on images of static subjects captured under controlled settings, it is incapable of modeling many ubiquitous, real-world phenomena in uncontrolled images, such as variable illumination or transient occluders. In this work, we introduce a series of extensions to NeRF to address these issues, thereby allowing for accurate reconstructions from unstructured image collections taken from the internet. We apply our system, which we dub NeRF-W, to internet photo collections of famous landmarks, thereby producing photorealistic, spatially consistent scene representations despite unknown and confounding factors, resulting in significant improvement over the state of the art.
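The NeRF model that this work extends represents a scene as an MLP mapping 3D position (and view direction) to volumetric density and color, which are then composited along camera rays. The sketch below illustrates two of those building blocks in NumPy: the sinusoidal positional encoding applied to input coordinates, and the standard volume-rendering quadrature that alpha-composites per-sample (density, color) pairs into a pixel color. This is an illustrative sketch, not the authors' implementation; the frequency count and the toy sample values are arbitrary assumptions.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    """Map coordinates to sin/cos features at exponentially increasing
    frequencies, helping an MLP represent high-frequency scene detail."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(np.sin((2.0 ** i) * np.pi * x))
        feats.append(np.cos((2.0 ** i) * np.pi * x))
    return np.concatenate(feats, axis=-1)

def composite_ray(densities, colors, deltas):
    """Volume-rendering quadrature: convert per-sample densities to
    opacities, accumulate transmittance along the ray, and blend the
    per-sample colors into one pixel color."""
    alphas = 1.0 - np.exp(-densities * deltas)                       # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))   # light surviving to each sample
    weights = alphas * trans
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: 4 samples along one ray; the dense green sample dominates.
densities = np.array([0.0, 5.0, 5.0, 0.1])
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], dtype=float)
deltas = np.full(4, 0.25)  # spacing between consecutive samples
pixel = composite_ray(densities, colors, deltas)
```

NeRF-W's extensions modify this pipeline rather than replace it, e.g. by conditioning color on a per-image appearance embedding and adding a transient component to absorb occluders.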

Related articles:
arXiv:2012.02190 [cs.CV] (Published 2020-12-03)
pixelNeRF: Neural Radiance Fields from One or Few Images
arXiv:2308.03772 [cs.CV] (Published 2023-07-27)
Improved Neural Radiance Fields Using Pseudo-depth and Fusion
arXiv:2211.12285 [cs.CV] (Published 2022-11-22)
Exact-NeRF: An Exploration of a Precise Volumetric Parameterization for Neural Radiance Fields