{ "id": "1705.00754", "version": "v1", "published": "2017-05-02T01:21:58.000Z", "updated": "2017-05-02T01:21:58.000Z", "title": "Dense-Captioning Events in Videos", "authors": [ "Ranjay Krishna", "Kenji Hata", "Frederic Ren", "Li Fei-Fei", "Juan Carlos Niebles" ], "comment": "16 pages, 16 figures", "categories": [ "cs.CV" ], "abstract": "Most natural videos contain numerous events. For example, in a video of a \"man playing a piano\", the video might also contain \"another man dancing\" or \"a crowd clapping\". We introduce the task of dense-captioning events, which involves both detecting and describing events in a video. We propose a new model that is able to identify all events in a single pass of the video while simultaneously describing the detected events with natural language. Our model introduces a variant of an existing proposal module that is designed to capture both short as well as long events that span minutes. To capture the dependencies between the events in a video, our model introduces a new captioning module that uses contextual information from past and future events to jointly describe all events. We also introduce ActivityNet Captions, a large-scale benchmark for dense-captioning events. ActivityNet Captions contains 20k videos amounting to 849 video hours with 100k total descriptions, each with it's unique start and end time. Finally, we report performances of our model for dense-captioning events, video retrieval and localization.", "revisions": [ { "version": "v1", "updated": "2017-05-02T01:21:58.000Z" } ], "analyses": { "keywords": [ "dense-captioning events", "contains 20k videos amounting", "activitynet captions contains 20k videos", "natural videos contain numerous events" ], "note": { "typesetting": "TeX", "pages": 16, "language": "en", "license": "arXiv", "status": "editable" } } }