E2NeRF: Event Enhanced Neural Radiance Fields from Blurry Images

Yunshan Qi1       Lin Zhu2*       Yu Zhang3       Jia Li1,4*

1Beihang University    2Beijing Institute of Technology    3SenseTime and Tetras.AI    4Peng Cheng Laboratory

Accepted to the International Conference on Computer Vision (ICCV) 2023

The Framework of E2NeRF

The Results of E2NeRF

Abstract

Neural Radiance Fields (NeRF) achieves impressive rendering performance by learning a volumetric 3D representation from several images captured at different views. However, it is difficult to reconstruct a sharp NeRF from blurry input, which often occurs in the wild. To solve this problem, we propose a novel Event-Enhanced NeRF (E2NeRF) that combines data from a bio-inspired event camera and a standard RGB camera. To effectively introduce the event stream into the learning of the neural volumetric representation, we propose a blur rendering loss and an event rendering loss, which guide the network by modeling the real blur process and the event generation process, respectively. Moreover, a camera pose estimation framework for real-world data is built with the guidance of the event stream to generalize the method to practical applications. In contrast to previous image-based or event-based NeRF methods, our framework effectively exploits the internal relationship between events and images. As a result, E2NeRF achieves not only image deblurring but also high-quality novel view image generation. Extensive experiments on both synthetic and real-world data demonstrate that E2NeRF can effectively learn a sharp NeRF from blurry images, especially in complex and low-light scenes.
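To make the two losses concrete, the sketch below is a minimal PyTorch-style rendition of the ideas stated above, not the paper's implementation. It assumes b sharp latent views rendered at timestamps inside the exposure, the captured blurry image, and per-pixel sums of signed event polarities between consecutive timestamps; names such as `event_rendering_loss` and the value of the contrast `threshold` are illustrative assumptions.

```python
import torch

def blur_rendering_loss(rendered_views, blurry_image):
    """Model the physical blur process: a blurry image is approximated as
    the average of b sharp latent images rendered across the exposure time.

    rendered_views: (b, H, W, 3) sharp renders at timestamps in the exposure
    blurry_image:   (H, W, 3) the captured blurry input
    """
    synthesized_blur = rendered_views.mean(dim=0)
    return torch.mean((synthesized_blur - blurry_image) ** 2)

def event_rendering_loss(rendered_views, event_sums, threshold=0.1, eps=1e-6):
    """Model the event generation process: between two timestamps, the
    log-intensity change at a pixel should match the accumulated signed
    event polarities scaled by the camera's contrast threshold.

    rendered_views: (b, H, W, 3) sharp renders, time-ordered
    event_sums:     (b-1, H, W) signed polarity counts between renders
    threshold:      assumed contrast threshold of the event camera
    """
    log_intensity = torch.log(rendered_views.mean(dim=-1) + eps)  # grayscale log intensity
    predicted_diff = log_intensity[1:] - log_intensity[:-1]       # (b-1, H, W)
    return torch.mean((predicted_diff - threshold * event_sums) ** 2)

# Toy usage with random tensors standing in for NeRF renders and events.
b, H, W = 5, 8, 8
renders = torch.rand(b, H, W, 3, requires_grad=True)
blurry = torch.rand(H, W, 3)
events = torch.randint(-3, 4, (b - 1, H, W)).float()
loss = blur_rendering_loss(renders, blurry) + event_rendering_loss(renders, events)
loss.backward()  # in the real pipeline, gradients flow back to the NeRF parameters
```

In the full method both terms are optimized jointly, so the event term constrains the individual sharp views while the blur term ties their average to the observed measurement.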

Supplementary Video

Qualitative Comparison on Synthetic Data

Qualitative Comparison on Real Data