arXiv Analytics

arXiv:2401.02052 [cs.CV]

Encoder-Decoder Based Long Short-Term Memory (LSTM) Model for Video Captioning

Sikiru Adewale, Tosin Ige, Bolanle Hafiz Matti

Published 2023-10-02, Version 1

This work demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The mapping takes an input temporal sequence of video frames to an output sequence of words that forms a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions are presented to demonstrate model generality over the video temporal dimension; the predicted captions are shown to generalize over video action, even in instances where the video scene changes dramatically. Model architecture changes are discussed to improve sentence grammar and correctness.
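The abstract does not give implementation details, so the sketch below is only a rough illustration of the kind of encoder-decoder LSTM it describes. It is written in PyTorch, and the layer sizes, vocabulary size, and use of pre-extracted frame features are all assumptions, not the authors' actual configuration.

```python
# Illustrative sketch only: dimensions, vocabulary size, and the assumption of
# pre-extracted frame features are not taken from the paper.
import torch
import torch.nn as nn

class VideoCaptioner(nn.Module):
    def __init__(self, feat_dim=4096, hidden_dim=512, vocab_size=10000, embed_dim=256):
        super().__init__()
        # Encoder LSTM reads one pre-extracted frame feature vector per time step.
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Decoder LSTM emits one word per time step, seeded with the encoder's final state.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, caption_tokens):
        # frame_feats: (batch, num_frames, feat_dim)
        # caption_tokens: (batch, caption_len) integer word indices (teacher forcing)
        _, (h, c) = self.encoder(frame_feats)        # summarize the video sequence
        dec_in = self.embed(caption_tokens)          # embed ground-truth caption prefix
        dec_out, _ = self.decoder(dec_in, (h, c))    # decode conditioned on the video state
        return self.out(dec_out)                     # (batch, caption_len, vocab_size) logits
```

For the 2-gram BLEU evaluation mentioned above, a library such as NLTK could be used, e.g. `nltk.translate.bleu_score.corpus_bleu(references, hypotheses, weights=(0.5, 0.5))`, though the paper's exact scoring setup is not specified in the abstract.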

Related articles:
arXiv:2201.09153 [cs.CV] (Published 2022-01-23)
An Integrated Approach for Video Captioning and Applications
arXiv:1803.01457 [cs.CV] (Published 2018-03-05)
Less Is More: Picking Informative Frames for Video Captioning
arXiv:1601.08188 [cs.CV] (Published 2016-01-29)
Lipreading with Long Short-Term Memory