arXiv Analytics

arXiv:2010.12968 [cs.CV]

Video Understanding based on Human Action and Group Activity Recognition

Zijian Kuang, Xinran Tie

Published 2020-10-24Version 1

Much previous work, such as video captioning, has shown promising performance in general video understanding. However, generating fine-grained descriptions of human actions and their interactions remains challenging for state-of-the-art video captioning techniques. Detailed descriptions of human actions and group activities are essential information that can be used in real-time CCTV video surveillance, health care, sports video analysis, and similar applications. In this study, we propose and improve a video understanding method based on the Group Activity Recognition model that learns an Actor Relation Graph (ARG). We enhance the functionality and performance of the ARG-based model by increasing human object detection accuracy with YOLO, increasing processing speed by reducing the input image size, and applying ResNet in the CNN layer. We also introduce a visualization model that renders each input video frame with predicted bounding boxes on each human object and a predicted "video caption" describing each individual's action and the group's collective activity.
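As a minimal illustration of the speed-up step described above: when frames are downsized before detection, the predicted bounding boxes must be rescaled back to the original frame for visualization. The following sketch is illustrative only; the function name and (x1, y1, x2, y2) coordinate convention are assumptions, not code from the paper.

```python
def scale_boxes(boxes, src_size, dst_size):
    """Map (x1, y1, x2, y2) boxes from src_size to dst_size, both (width, height).

    Used, e.g., to project detections made on a downsized frame back onto the
    full-resolution frame before drawing bounding boxes.
    """
    sx = dst_size[0] / src_size[0]  # horizontal scale factor
    sy = dst_size[1] / src_size[1]  # vertical scale factor
    return [(x1 * sx, y1 * sy, x2 * sx, y2 * sy) for x1, y1, x2, y2 in boxes]

# Example: a box detected on a 640x360 frame, mapped back to 1280x720.
detections = [(50, 100, 150, 200)]
print(scale_boxes(detections, (640, 360), (1280, 720)))
# → [(100.0, 200.0, 300.0, 400.0)]
```

Halving the input resolution roughly quarters the per-frame compute of the detector, at the cost of some accuracy on small or distant people.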

Related articles:
arXiv:2312.06720 [cs.CV] (Published 2023-12-11)
Audio-Visual LLM for Video Understanding
arXiv:1711.06330 [cs.CV] (Published 2017-11-16)
Attend and Interact: Higher-Order Object Interactions for Video Understanding
arXiv:2403.14743 [cs.CV] (Published 2024-03-21)
VURF: A General-purpose Reasoning and Self-refinement Framework for Video Understanding