arXiv Analytics

arXiv:1806.09278 [cs.CV]

Best Vision Technologies Submission to ActivityNet Challenge 2018-Task: Dense-Captioning Events in Videos

Yuan Liu, Moyini Yao

Published 2018-06-25 (Version 1)

This note describes the details of our solution to the dense-captioning events in videos task of ActivityNet Challenge 2018. Specifically, we solve this problem in a two-stage manner: first temporal event proposal, then sentence generation. For temporal event proposal, we directly leverage the three-stage workflow in [13, 16]. For sentence generation, we capitalize on an LSTM-based captioning framework with a temporal attention mechanism (dubbed LSTM-T). Moreover, the input visual sequence to the LSTM-based video captioning model is comprised of RGB and optical flow images. At inference, we adopt a late fusion scheme to combine the two LSTM-based captioning models for sentence generation.
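The two mechanisms named in the abstract, temporal attention over frame features and late fusion of the RGB and optical-flow captioning models, can be sketched as follows. This is an illustrative reconstruction under common conventions, not the authors' code: the softmax-weighted pooling and the equal-weight probability averaging are assumptions; the function names and toy inputs are hypothetical.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def temporal_attention(frame_features, scores):
    """Temporal attention (assumed form): weight each frame's feature
    vector by its softmax-normalized score and sum into one attended
    vector that conditions the LSTM decoder at this time step."""
    weights = softmax(scores)
    dim = len(frame_features[0])
    return [sum(w * f[d] for w, f in zip(weights, frame_features))
            for d in range(dim)]

def late_fuse(p_rgb, p_flow):
    """Late fusion (assumed equal weights): average the next-word
    distributions predicted by the RGB model and the flow model."""
    return [(a + b) / 2.0 for a, b in zip(p_rgb, p_flow)]

# Toy usage: two frames with 2-D features; the second frame scores higher,
# so the attended vector leans toward its feature.
attended = temporal_attention([[1.0, 0.0], [0.0, 1.0]], [0.0, 2.0])
# Toy next-word distributions over a 2-word vocabulary from each stream.
fused = late_fuse([0.6, 0.4], [0.2, 0.8])
```

In this sketch the fused distribution remains a valid probability distribution, and attention shifts the pooled feature toward the higher-scoring frame; the actual model applies these operations inside the LSTM decoding loop.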

Comments: Rank 2 in ActivityNet Captions Challenge 2018
Categories: cs.CV
Related articles:
arXiv:2006.11693 [cs.CV] (Published 2020-06-21)
Dense-Captioning Events in Videos: SYSU Submission to ActivityNet Challenge 2020
arXiv:1907.12223 [cs.CV] (Published 2019-07-29)
Multi-Granularity Fusion Network for Proposal and Activity Localization: Submission to ActivityNet Challenge 2019 Task 1 and Task 2
arXiv:1710.08011 [cs.CV] (Published 2017-10-22)
ActivityNet Challenge 2017 Summary