arXiv:2407.02174 [cs.CV]

BeNeRF: Neural Radiance Fields from a Single Blurry Image and Event Stream

Wenpu Li, Pian Wan, Peng Wang, Jinhang Li, Yi Zhou, Peidong Liu

Published 2024-07-02 (Version 1)

Neural implicit representations of visual scenes have attracted significant attention in recent computer vision and graphics research. Most prior methods focus on reconstructing a 3D scene representation from a set of images. In this work, we demonstrate the possibility of recovering a neural radiance field (NeRF) from a single blurry image and its corresponding event stream. We model the camera motion with a cubic B-spline in SE(3) space. Both the blurry image and the brightness changes within a time interval can then be synthesized from the 3D scene representation, given the 6-DoF poses interpolated from the cubic B-spline. Our method jointly learns the implicit neural scene representation and recovers the camera motion by minimizing the differences between the synthesized data and the real measurements, without requiring pre-computed camera poses from COLMAP. We evaluate the proposed method on both synthetic and real datasets. The experimental results demonstrate that we are able to render view-consistent latent sharp images from the learned NeRF and bring a blurry image alive in high quality. Code and data are available at https://github.com/WU-CVGL/BeNeRF.
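The forward model underlying this formulation can be sketched as follows. This is a minimal illustration, not the paper's implementation: `render_sharp` is a hypothetical stand-in for volume-rendering a latent sharp image from the NeRF at a spline-interpolated pose (here reduced to a toy intensity pattern parameterized by normalized time), the blurry image is the temporal average of sharp renders over the exposure, and the event measurement is modeled as the thresholded change in log intensity between two timestamps.

```python
import numpy as np

def render_sharp(t, H=4, W=4):
    # Hypothetical stand-in for rendering a latent sharp image from the
    # NeRF at the camera pose interpolated at normalized time t in [0, 1].
    # A toy intensity pattern that shifts with t substitutes for the
    # actual volume renderer.
    ys, xs = np.mgrid[0:H, 0:W]
    return 0.5 + 0.4 * np.sin(xs + ys + 4.0 * t)

def synthesize_blurry(n_samples=32):
    # Physical blur model: the blurry image is the average of latent
    # sharp images rendered along the exposure-time trajectory.
    ts = np.linspace(0.0, 1.0, n_samples)
    return np.mean([render_sharp(t) for t in ts], axis=0)

def synthesize_events(t0, t1, threshold=0.2):
    # Event model: the accumulated brightness change between t0 and t1
    # is the difference of log intensities; an event fires each time the
    # change crosses the contrast threshold.
    eps = 1e-6
    dL = np.log(render_sharp(t1) + eps) - np.log(render_sharp(t0) + eps)
    return np.round(dL / threshold)

# Training would minimize the discrepancy between these synthesized
# quantities and the observed blurry image / event stream, jointly over
# NeRF weights and spline control poses (omitted here).
blurry = synthesize_blurry()
events = synthesize_events(0.0, 1.0)
```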

Related articles:
arXiv:2211.12285 [cs.CV] (Published 2022-11-22)
Exact-NeRF: An Exploration of a Precise Volumetric Parameterization for Neural Radiance Fields
arXiv:2308.09386 [cs.CV] (Published 2023-08-18)
DReg-NeRF: Deep Registration for Neural Radiance Fields
arXiv:2306.03000 [cs.CV] (Published 2023-06-05, updated 2023-08-16)
BeyondPixels: A Comprehensive Review of the Evolution of Neural Radiance Fields