
arXiv:2403.11535 [cs.CV]

AICL: Action In-Context Learning for Video Diffusion Model

Jianzhi Liu, Junchen Zhu, Lianli Gao, Heng Tao Shen, Jingkuan Song

Published 2024-03-18, updated 2024-08-23 (Version 2)

Open-domain video generation models are constrained by the scale of their training video datasets, so some less common actions still cannot be generated. Some researchers have explored video editing methods, achieving action generation by editing the spatial information of a video of the same action. However, such methods mechanically reproduce identical actions without understanding them, which does not fit the characteristics of open-domain scenarios. In this paper, we propose AICL, which, through in-context learning, empowers the generative model to understand the action information in reference videos, much as humans do. Extensive experiments demonstrate that AICL effectively captures the action and achieves state-of-the-art generation performance on five metrics across three typical video diffusion models, using randomly selected categories from non-training datasets.
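
The abstract does not detail AICL's architecture. As a rough illustration of the general idea only, the following minimal, self-contained PyTorch sketch shows one plausible way to condition a video diffusion denoiser on reference-video tokens placed in its attention context. Every name, module, and shape here (RefVideoEncoder, InContextDenoiser, token dimensions) is a hypothetical assumption, not the authors' implementation.

# Hypothetical sketch: in-context action conditioning for a video
# diffusion denoiser. All modules and shapes are illustrative
# assumptions; this is not the AICL code.
import torch
import torch.nn as nn

class RefVideoEncoder(nn.Module):
    """Encodes a reference video into a sequence of action tokens."""
    def __init__(self, in_ch=3, dim=256):
        super().__init__()
        # Patchify space-time blocks into token embeddings.
        self.proj = nn.Conv3d(in_ch, dim, kernel_size=(2, 8, 8), stride=(2, 8, 8))

    def forward(self, ref):                  # ref: (B, C, T, H, W)
        x = self.proj(ref)                   # (B, dim, T', H', W')
        return x.flatten(2).transpose(1, 2)  # (B, N_ref, dim) token sequence

class InContextDenoiser(nn.Module):
    """Toy denoiser that attends to reference and text tokens in context."""
    def __init__(self, dim=256):
        super().__init__()
        self.noise_proj = nn.Conv3d(4, dim, kernel_size=(2, 8, 8), stride=(2, 8, 8))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, noisy_latents, ref_tokens, text_tokens):
        q = self.noise_proj(noisy_latents).flatten(2).transpose(1, 2)
        # In-context conditioning: reference-action tokens and text tokens
        # form one context sequence that the denoiser cross-attends to.
        context = torch.cat([ref_tokens, text_tokens], dim=1)
        h, _ = self.attn(q, context, context)
        return self.out(h)                   # predicted noise per latent token

# Usage: one denoising call on random tensors.
B, dim = 2, 256
ref_video = torch.randn(B, 3, 16, 64, 64)    # reference action clip
noisy = torch.randn(B, 4, 16, 64, 64)        # noisy video latents
text = torch.randn(B, 77, dim)               # pre-computed text embeddings

encoder, denoiser = RefVideoEncoder(dim=dim), InContextDenoiser(dim=dim)
eps_pred = denoiser(noisy, encoder(ref_video), text)
print(eps_pred.shape)                        # (B, N_latent_tokens, dim)

The point the sketch illustrates is that the reference-action tokens sit in the denoiser's attention context alongside the text tokens, so action information influences generation through learned attention rather than through pixel-level copying, in contrast to the editing-based approaches the abstract criticizes.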

Related articles:
arXiv:2401.06578 [cs.CV] (Published 2024-01-12)
360DVD: Controllable Panorama Video Generation with 360-Degree Video Diffusion Model
arXiv:2409.07452 [cs.CV] (Published 2024-09-11)
Hi3D: Pursuing High-Resolution Image-to-3D Generation with Video Diffusion Models
arXiv:2302.01329 [cs.CV] (Published 2023-02-02)
Dreamix: Video Diffusion Models are General Video Editors
Eyal Molad et al.