Learning to See in the Dark with Events

Song Zhang, Yu Zhang, Zhe Jiang, Dongqing Zou, Jimmy Ren, Bin Zhou

Abstract


Imaging in dark environments is important for many real-world applications such as video surveillance. Recently, the development of event cameras has opened promising directions for this task thanks to their high dynamic range (HDR) and low computational requirements. However, such cameras record sparse, asynchronous intensity changes of the scene (called events) rather than canonical images. In this paper, we propose learning to see in the dark by translating HDR events captured in low light into canonical sharp images as if captured in daylight. Since it is extremely challenging to collect paired event-image training data, we propose a novel unsupervised domain adaptation network that explicitly separates domain-invariant features (e.g., scene structures) from domain-specific ones (e.g., detailed textures) to ease representation learning. A detail-enhancing branch reconstructs daylight-specific features from the domain-invariant representations in a residual manner, regularized by a ranking loss. To evaluate the proposed approach, a novel large-scale dataset is captured with a DAVIS240C camera, containing both daylight and low-light events together with intensity images. Experiments on this dataset show that the proposed domain adaptation approach outperforms various state-of-the-art architectures.
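The pipeline sketched in the abstract (a shared encoder for domain-invariant features, a detail-enhancing branch that adds domain-specific residuals, and a decoder to a canonical image, trained with a ranking loss) can be illustrated with a minimal PyTorch-style sketch. All module names (SharedEncoder, DetailEnhancingBranch, Decoder), the 5-bin event voxelization, and the ranking-loss form below are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of the described architecture; shapes and module names are assumed.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))

class SharedEncoder(nn.Module):
    """Maps a voxelized event tensor (or an intensity image) to domain-invariant features."""
    def __init__(self, c_in=5, c_feat=64):
        super().__init__()
        self.net = nn.Sequential(conv_block(c_in, c_feat), conv_block(c_feat, c_feat))
    def forward(self, x):
        return self.net(x)

class DetailEnhancingBranch(nn.Module):
    """Predicts daylight-specific detail as a residual over the domain-invariant features."""
    def __init__(self, c_feat=64):
        super().__init__()
        self.net = nn.Sequential(conv_block(c_feat, c_feat),
                                 nn.Conv2d(c_feat, c_feat, 3, padding=1))
    def forward(self, f_inv):
        return f_inv + self.net(f_inv)  # residual reconstruction of domain-specific detail

class Decoder(nn.Module):
    """Decodes enhanced features into a canonical sharp intensity image."""
    def __init__(self, c_feat=64):
        super().__init__()
        self.net = nn.Sequential(conv_block(c_feat, c_feat), nn.Conv2d(c_feat, 1, 3, padding=1))
    def forward(self, f):
        return torch.sigmoid(self.net(f))

def ranking_loss(score_enhanced, score_plain, margin=0.1):
    """Margin ranking: the detail-enhanced reconstruction should score higher than the
    non-enhanced one under some quality score (assumed form of the regularizer)."""
    return torch.clamp(margin - (score_enhanced - score_plain), min=0).mean()

# Forward pass on a dummy low-light event voxel grid (batch=2, 5 time bins, 180x240 as in DAVIS240C).
enc, detail, dec = SharedEncoder(), DetailEnhancingBranch(), Decoder()
events = torch.randn(2, 5, 180, 240)
f_inv = enc(events)            # domain-invariant representation
image = dec(detail(f_inv))     # daylight-like reconstruction
```

In this reading, unsupervised adaptation would share the encoder across the low-light event domain and the daylight image domain, so only the detail branch and decoder need to recover domain-specific appearance; the ranking loss encourages the residual branch to add useful detail rather than degrade the reconstruction.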

Related Material


[pdf]