arXiv:1605.08110 [cs.CV]

Video Summarization with Long Short-term Memory

Ke Zhang, Wei-Lun Chao, Fei Sha, Kristen Grauman

Published 2016-05-26 (Version 1)

We propose a novel supervised learning technique for summarizing videos by automatically selecting keyframes or key subshots. Casting the problem as structured prediction on sequential data, our main idea is to use Long Short-Term Memory (LSTM), a special type of recurrent neural network, to model the variable-range dependencies entailed in the task of video summarization. Our learning models attain state-of-the-art results on two benchmark video datasets. Detailed analysis justifies the design of the models; in particular, we show that it is crucial to take the sequential structure of videos into consideration and to model it. Beyond these advances in modeling, we introduce techniques to address the need for large amounts of annotated data when training complex learning models. Here, our main idea is to exploit the existence of auxiliary annotated video datasets, albeit ones heterogeneous in visual style and content. Specifically, we show that domain adaptation techniques can improve summarization by reducing the discrepancies in statistical properties across those datasets.
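The paper's own code is not reproduced here; as a rough illustration of the core idea, the minimal PyTorch sketch below reads per-frame CNN features with a bidirectional LSTM and predicts a frame-level importance score used to select keyframes. The class name, feature dimension (1024), hidden size (256), and the 15% selection budget are illustrative assumptions, not values taken from the paper.

    # Minimal sketch, not the authors' implementation: a bidirectional
    # LSTM scores each frame; top-scoring frames become the summary.
    import torch
    import torch.nn as nn

    class LSTMSummarizer(nn.Module):
        def __init__(self, feat_dim=1024, hidden_dim=256):  # dims assumed
            super().__init__()
            # Bidirectional LSTM models variable-range temporal
            # dependencies in both directions along the video.
            self.lstm = nn.LSTM(feat_dim, hidden_dim,
                                batch_first=True, bidirectional=True)
            self.score = nn.Linear(2 * hidden_dim, 1)

        def forward(self, frames):
            # frames: (batch, num_frames, feat_dim) per-frame features
            h, _ = self.lstm(frames)
            return self.score(h).squeeze(-1)  # (batch, num_frames)

    # Toy usage: score 120 frames and keep the top 15% as keyframes.
    model = LSTMSummarizer()
    feats = torch.randn(1, 120, 1024)
    scores = model(feats)
    k = int(0.15 * feats.shape[1])
    keyframes = scores.topk(k, dim=1).indices

In practice such a scorer would be trained against human-annotated importance labels, which is where the paper's use of auxiliary annotated datasets and domain adaptation comes in.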

Related articles:
arXiv:2010.15740 [cs.CV] (Published 2020-10-29)
Recurrent Neural Networks for video object detection
arXiv:1704.04055 [cs.CV] (Published 2017-04-13)
Land Cover Classification via Multi-temporal Spatial Data by Recurrent Neural Networks
arXiv:1312.4569 [cs.CV] (Published 2013-11-05, updated 2014-03-10)
Dropout improves Recurrent Neural Networks for Handwriting Recognition