arXiv:2206.11896 [cs.CV]

EventNeRF: Neural Radiance Fields from a Single Colour Event Camera

Viktor Rudnev, Mohamed Elgharib, Christian Theobalt, Vladislav Golyanik

Published 2022-06-23, Version 1

Learning coordinate-based volumetric 3D scene representations such as neural radiance fields (NeRF) has so far been studied assuming RGB or RGB-D images as inputs. At the same time, it is known from the neuroscience literature that the human visual system (HVS) is tailored to process asynchronous brightness changes rather than synchronous RGB images, in order to build and continuously update mental 3D representations of the surroundings for navigation and survival. Event cameras are visual sensors inspired by these HVS principles; they output events, i.e., sparse and asynchronous per-pixel brightness (or colour-channel) change signals. In contrast to existing works on neural 3D scene representation learning, this paper approaches the problem from a new perspective. We demonstrate that it is possible to learn a NeRF suitable for novel-view synthesis in the RGB space from asynchronous event streams. Our models achieve high visual accuracy on rendered novel views of challenging scenes in the RGB space, even though they are trained with substantially less data (i.e., event streams from a single event camera moving around the object) and more efficiently (due to the inherent sparsity of event streams) than existing NeRF models trained with RGB images. We will release our datasets and the source code; see https://4dqv.mpi-inf.mpg.de/EventNeRF/.

Related articles:
arXiv:2111.15490 [cs.CV] (Published 2021-11-30, updated 2022-03-20)
FENeRF: Face Editing in Neural Radiance Fields
arXiv:2008.02268 [cs.CV] (Published 2020-08-05)
NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
arXiv:2012.02190 [cs.CV] (Published 2020-12-03)
pixelNeRF: Neural Radiance Fields from One or Few Images