{ "id": "2008.02191", "version": "v1", "published": "2020-08-05T15:38:18.000Z", "updated": "2020-08-05T15:38:18.000Z", "title": "Active Perception using Light Curtains for Autonomous Driving", "authors": [ "Siddharth Ancha", "Yaadhav Raaj", "Peiyun Hu", "Srinivasa G. Narasimhan", "David Held" ], "comment": "Published at the European Conference on Computer Vision (ECCV), 2020", "categories": [ "cs.CV", "cs.LG", "cs.RO" ], "abstract": "Most real-world 3D sensors such as LiDARs perform fixed scans of the entire environment, while being decoupled from the recognition system that processes the sensor data. In this work, we propose a method for 3D object recognition using light curtains, a resource-efficient controllable sensor that measures depth at user-specified locations in the environment. Crucially, we propose using prediction uncertainty of a deep learning based 3D point cloud detector to guide active perception. Given a neural network's uncertainty, we derive an optimization objective to place light curtains using the principle of maximizing information gain. Then, we develop a novel and efficient optimization algorithm to maximize this objective by encoding the physical constraints of the device into a constraint graph and optimizing with dynamic programming. We show how a 3D detector can be trained to detect objects in a scene by sequentially placing uncertainty-guided light curtains to successively improve detection accuracy. Code and details can be found on the project webpage: http://siddancha.github.io/projects/active-perception-light-curtains.", "revisions": [ { "version": "v1", "updated": "2020-08-05T15:38:18.000Z" } ], "analyses": { "keywords": [ "active perception", "placing uncertainty-guided light curtains", "autonomous driving", "3d point cloud detector", "real-world 3d sensors" ], "tags": [ "conference paper" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }