{ "id": "2403.11535", "version": "v2", "published": "2024-03-18T07:41:19.000Z", "updated": "2024-08-23T07:02:50.000Z", "title": "AICL: Action In-Context Learning for Video Diffusion Model", "authors": [ "Jianzhi Liu", "Junchen Zhu", "Lianli Gao", "Heng Tao Shen", "Jingkuan Song" ], "categories": [ "cs.CV" ], "abstract": "The open-domain video generation models are constrained by the scale of the training video datasets, and some less common actions still cannot be generated. Some researchers explore video editing methods and achieve action generation by editing the spatial information of the same action video. However, this method mechanically generates identical actions without understanding, which does not align with the characteristics of open-domain scenarios. In this paper, we propose AICL, which empowers the generative model with the ability to understand action information in reference videos, similar to how humans do, through in-context learning. Extensive experiments demonstrate that AICL effectively captures the action and achieves state-of-the-art generation performance across three typical video diffusion models on five metrics when using randomly selected categories from non-training datasets.", "revisions": [ { "version": "v2", "updated": "2024-08-23T07:02:50.000Z" } ], "analyses": { "keywords": [ "video diffusion model", "action in-context learning", "mechanically generates identical actions", "open-domain video generation models", "achieves state-of-the-art generation performance" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }