Fast Image Reconstruction with an Event Camera
C. Scheerlinck, H. Rebecq, D. Gehrig, N. Barnes, R. Mahony, D. Scaramuzza
Winter Conference on Applications of Computer Vision (WACV), 2020
Abstract. Event cameras are powerful new sensors able to capture high dynamic range with microsecond temporal resolution and no motion blur. Their strength lies in detecting brightness changes (called events) rather than capturing direct brightness images; however, algorithms can be used to convert events into usable image representations for applications such as classification. Previous works rely on hand-crafted spatial and temporal smoothing techniques to reconstruct images from events. State-of-the-art video reconstruction has recently been achieved using neural networks that are large (10M parameters) and computationally expensive, requiring 30ms for a forward pass at 640 × 480 resolution on a modern GPU. We propose a novel neural network architecture for video reconstruction from events that is smaller (38k vs. 10M parameters) and faster (10ms vs. 30ms) than the state-of-the-art, with minimal impact on performance.
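The sketch below is a minimal, illustrative PyTorch model in the spirit of the lightweight recurrent architecture described in the abstract (a small head convolution, convolutional-GRU recurrent blocks, residual refinement, and a 1x1 prediction layer); it is not the authors' released FireNet code, and the channel width, number of input bins, and block ordering here are assumptions for illustration.

```python
# Minimal sketch of a FireNet-style lightweight recurrent network for
# event-to-image reconstruction. Assumes an event voxel-grid input with
# `num_bins` temporal channels; layer names and sizes are illustrative.
import torch
import torch.nn as nn


class ConvGRU(nn.Module):
    """Convolutional GRU cell operating on feature maps."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        self.gates = nn.Conv2d(2 * channels, 2 * channels, kernel_size, padding=padding)
        self.cand = nn.Conv2d(2 * channels, channels, kernel_size, padding=padding)

    def forward(self, x, state):
        if state is None:
            state = torch.zeros_like(x)
        # Update (z) and reset (r) gates, then candidate state.
        z, r = torch.sigmoid(self.gates(torch.cat([x, state], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * state], dim=1)))
        return (1 - z) * state + z * h_tilde


class FireNetSketch(nn.Module):
    """Head conv -> (ConvGRU + residual conv) x2 -> 1x1 prediction layer."""

    def __init__(self, num_bins=5, channels=16):
        super().__init__()
        self.head = nn.Conv2d(num_bins, channels, 3, padding=1)
        self.gru1 = ConvGRU(channels)
        self.res1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.gru2 = ConvGRU(channels)
        self.res2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.pred = nn.Conv2d(channels, 1, 1)
        self.act = nn.ReLU()

    def forward(self, voxel_grid, states=(None, None)):
        x = self.act(self.head(voxel_grid))
        s1 = self.gru1(x, states[0])
        x = self.act(self.res1(s1)) + s1  # lightweight residual refinement
        s2 = self.gru2(x, states[1])
        x = self.act(self.res2(s2)) + s2
        # Recurrent states are returned so the next event window reuses them.
        return torch.sigmoid(self.pred(x)), (s1, s2)


if __name__ == "__main__":
    net = FireNetSketch()
    print(sum(p.numel() for p in net.parameters()))  # tens of thousands of parameters
    frame, states = net(torch.randn(1, 5, 480, 640))
    print(frame.shape)  # torch.Size([1, 1, 480, 640])
```

With a network of this size, the parameter count stays on the order of tens of thousands, which is what makes the per-frame forward pass cheap compared with the ~10M-parameter state-of-the-art reconstruction networks.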
DOI: 10.1109/WACV45572.2020.9093366
FireNet can be run using the released code and model. If you wish to train FireNet yourself, please see: Event CNN minimal.
Computer Vision Foundation Open Access page (incl. PDF).
Datasets:
Reference:
- C. Scheerlinck, H. Rebecq, D. Gehrig, N. Barnes, R. Mahony, D. Scaramuzza, “Fast Image Reconstruction with an Event Camera”, Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 156-163.