arXiv:2302.01329 [cs.CV]

Dreamix: Video Diffusion Models are General Video Editors

Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, Yedid Hoshen

Published 2023-02-02 (Version 1)

Text-driven image and video diffusion models have recently achieved unprecedented generation realism. While diffusion models have been successfully applied to image editing, very few works have done so for video editing. We present the first diffusion-based method that can perform text-based motion and appearance editing of general videos. Our approach uses a video diffusion model to combine, at inference time, the low-resolution spatio-temporal information from the original video with new, high-resolution information that it synthesizes to align with the guiding text prompt. As maintaining high fidelity to the original video requires retaining some of its high-resolution information, we add a preliminary stage of finetuning the model on the original video, significantly boosting fidelity. We further improve motion editability with a new mixed objective that jointly finetunes with full temporal attention and with temporal attention masking. We also introduce a new framework for image animation: we first transform the image into a coarse video by simple image processing operations such as replication and perspective geometric projections, and then use our general video editor to animate it. As a further application, we can use our method for subject-driven video generation. Extensive qualitative and numerical experiments showcase the remarkable editing ability of our method and establish its superior performance compared to baseline methods.
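The two preprocessing ideas in the abstract can be sketched in plain numpy: turning a still image into a coarse video by replication with a simple geometric transform, and then degrading that video so only low-resolution spatio-temporal information survives for the diffusion model to denoise. This is an illustrative sketch only: the function names are made up, a progressive center zoom stands in for the paper's perspective projections, and the actual denoising by the finetuned video diffusion model is omitted.

```python
import numpy as np

def image_to_coarse_video(image, num_frames=8, max_zoom=1.5):
    """Replicate a still image into a coarse video, applying a
    progressive center zoom as a toy stand-in for the perspective
    geometric projections described in the paper."""
    h, w = image.shape[:2]
    frames = []
    for t in range(num_frames):
        zoom = 1.0 + (max_zoom - 1.0) * t / max(num_frames - 1, 1)
        ch, cw = int(h / zoom), int(w / zoom)          # crop size at this frame
        y0, x0 = (h - ch) // 2, (w - cw) // 2          # centered crop origin
        crop = image[y0:y0 + ch, x0:x0 + cw]
        # nearest-neighbor resize of the crop back to (h, w)
        yi = np.arange(h) * ch // h
        xi = np.arange(w) * cw // w
        frames.append(crop[yi][:, xi])
    return np.stack(frames)                            # (num_frames, h, w, c)

def corrupt_video(video, spatial_factor=4, noise_level=0.5, seed=0):
    """Keep only low-resolution information: naively downsample each
    frame, then add Gaussian noise. In the full method a finetuned
    video diffusion model would denoise this toward the text prompt."""
    rng = np.random.default_rng(seed)
    low = video[:, ::spatial_factor, ::spatial_factor]
    return low + noise_level * rng.standard_normal(low.shape)

img = np.random.rand(64, 64, 3)
coarse = image_to_coarse_video(img)       # (8, 64, 64, 3); frame 0 == img
noisy = corrupt_video(coarse)             # (8, 16, 16, 3)
```

At zoom 1.0 the first frame reproduces the input exactly, so the coarse video starts from the original image and drifts away from it, which is the intended starting point for the editor.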

Related articles:
arXiv:2403.12034 [cs.CV] (Published 2024-03-18, updated 2024-07-18)
VFusion3D: Learning Scalable 3D Generative Models from Video Diffusion Models
arXiv:2401.06578 [cs.CV] (Published 2024-01-12)
360DVD: Controllable Panorama Video Generation with 360-Degree Video Diffusion Model
arXiv:2312.02813 [cs.CV] (Published 2023-12-05)
BIVDiff: A Training-Free Framework for General-Purpose Video Synthesis via Bridging Image and Video Diffusion Models