E3NeRF: Efficient Event-Enhanced Neural Radiance Fields from Blurry Images

Yunshan Qi1       Jia Li1*       Yifan Zhao1       Yu Zhang3       Lin Zhu2*

1Beihang University    2Beijing Institute of Technology    3SenseTime and Tetras.AI   

The Motivation of E3NeRF

Abstract

Neural Radiance Fields (NeRF) achieves impressive novel view rendering performance by learning an implicit 3D representation from sparse view images. However, it is difficult to reconstruct a sharp NeRF from the blurry inputs that often occur in the wild. To solve this problem, we propose a novel Efficient Event-Enhanced NeRF (E3NeRF), which reconstructs a sharp NeRF by utilizing both blurry images and the corresponding event streams. We introduce a blur rendering loss and an event rendering loss, which guide NeRF training by modeling the physical image motion blur process and the event generation process, respectively. To improve the efficiency of the framework, we further leverage the latent spatial-temporal blur information in the event stream to distribute training evenly over temporal blur and to focus training on spatial blur. Moreover, an event-guided camera pose estimation framework for real-world data generalizes the method to more practical applications. Compared with previous image-based and event-based NeRF works, our framework exploits the internal relationship between events and images more thoroughly. Extensive experiments on both synthetic and real-world data demonstrate that E3NeRF effectively learns a sharp NeRF from blurry images, especially in scenes with high-speed non-uniform motion or low light.
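The two losses described above can be summarized compactly in code. The PyTorch sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the channel-mean luminance, the contrast threshold value theta=0.2, and the signed per-pixel event count event_integral are all hypothetical.

```python
import torch

def blur_rendering_loss(sharp_renders, blurry_gt):
    # Physical motion-blur model: a blurry pixel is approximated as the
    # average of sharp renders at N poses sampled across the exposure time.
    # sharp_renders: (N, B, 3) radiance at N timestamps; blurry_gt: (B, 3).
    synthesized_blur = sharp_renders.mean(dim=0)
    return torch.mean((synthesized_blur - blurry_gt) ** 2)

def event_rendering_loss(render_t0, render_t1, event_integral,
                         theta=0.2, eps=1e-5):
    # Event generation model: the log-intensity change between two renders
    # should match the polarity-signed event count scaled by the contrast
    # threshold theta. Both theta=0.2 and the channel-mean luminance proxy
    # are illustrative assumptions.
    lum0 = render_t0.mean(dim=-1)  # (B,) luminance proxy
    lum1 = render_t1.mean(dim=-1)
    predicted_change = torch.log(lum1 + eps) - torch.log(lum0 + eps)
    target_change = theta * event_integral  # (B,) signed counts per ray
    return torch.mean((predicted_change - target_change) ** 2)

# Toy usage: 9 timestamps across the exposure, a batch of 1024 rays.
renders = torch.rand(9, 1024, 3)
blurry = renders.mean(dim=0)
events = torch.randint(-5, 6, (1024,)).float()
total = blur_rendering_loss(renders, blurry) \
      + event_rendering_loss(renders[0], renders[-1], events)
```

In this reading, the blur loss ties the average of sharp renders to the observed blurry image, while the event loss constrains how the sharp renders are allowed to change within the exposure, which is what lets the two signals jointly disambiguate a sharp NeRF.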

The Framework of E3NeRF

The Results of E3NeRF

Video Results on the Real-World-Challenge Dataset

Video Results on the Synthetic Severely Shaking Dataset