arXiv:2009.11044 [cs.CV]

Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation

Dimche Kostadinov, Davide Scaramuzza

Published 2020-09-23 (Version 1)

Event-based cameras record an asynchronous stream of per-pixel brightness changes. As such, they have numerous advantages over standard frame-based cameras, including high temporal resolution, high dynamic range, and no motion blur. Due to this asynchronous nature, efficiently learning a compact representation for event data is challenging, and the extent to which the spatial and temporal event "information" is useful for pattern recognition tasks remains largely unexplored. In this paper, we focus on single-layer architectures. We analyze the performance of two general problem formulations, the direct and the inverse, for unsupervised feature learning from local event data (local volumes of events described in space-time), and we identify and show the main advantages of each approach. Theoretically, we analyze guarantees for an optimal solution, the possibility of asynchronous, parallel parameter updates, and the computational complexity. We present numerical experiments for object recognition, evaluating solutions under both the direct and the inverse problem formulations and comparing them with state-of-the-art methods. Our empirical results highlight the advantages of both approaches for representation learning from event data, with improvements of up to 9% in recognition accuracy over state-of-the-art methods from the same class.
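To make the direct-vs-inverse distinction concrete, below is a minimal sketch (not the authors' code) of the two generic formulations applied to a flattened local space-time event volume. The voxelization helper, the transform W, the dictionary D, the l1 regularizer, and the ISTA-style solver are all illustrative assumptions; the paper's actual objectives, regularizers, and solvers may differ.

```python
import numpy as np

def voxelize_events(events, patch=8, bins=4):
    """Accumulate event polarities into a (bins, patch, patch) space-time volume.

    `events` is an iterable of rows (x, y, t, p) with x, y in [0, patch),
    t normalized to [0, 1), and polarity p in {-1, +1}. (Assumed local
    event-volume representation, for illustration only.)
    """
    vol = np.zeros((bins, patch, patch))
    for x, y, t, p in events:
        vol[int(t * bins), int(y), int(x)] += p
    return vol.ravel()  # flatten to a local event vector x

rng = np.random.default_rng(0)
n = 8 * 8 * 4   # dimension of the flattened local volume
k = 64          # number of features / atoms

x = rng.standard_normal(n)  # stand-in for voxelize_events(...)

# Direct (analysis) formulation: features are a learned linear map of the
# input, z = W x; no per-sample optimization is needed at inference time.
W = rng.standard_normal((k, n)) / np.sqrt(n)
z_direct = W @ x

# Inverse (synthesis) formulation: features are the code that best
# reconstructs the input under a dictionary D. Here a few ISTA steps solve
# min_z 0.5*||x - D z||^2 + lam*||z||_1 (an assumed l1-regularized
# objective, standing in for whichever regularizer is actually used).
D = rng.standard_normal((n, k)) / np.sqrt(k)
lam = 0.1
step = 1.0 / np.linalg.norm(D, 2) ** 2  # 1/L, L = squared spectral norm
z_inverse = np.zeros(k)
for _ in range(50):
    grad = D.T @ (D @ z_inverse - x)        # gradient of the data term
    z_inverse = z_inverse - step * grad     # gradient step
    # soft-thresholding (proximal step for the l1 penalty)
    z_inverse = np.sign(z_inverse) * np.maximum(np.abs(z_inverse) - step * lam, 0.0)
```

The sketch also hints at the trade-off the abstract alludes to: the direct form gives features by a single matrix product (amenable to asynchronous, parallel updates), while the inverse form requires solving a per-sample optimization but can yield a more faithful reconstruction-based code.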

Journal: IAPR IEEE/Computer Society International Conference on Pattern Recognition (ICPR), Milan, 2021
Categories: cs.CV, cs.AI, cs.LG, cs.RO