arXiv:2203.14074 [cs.CV]

V3GAN: Decomposing Background, Foreground and Motion for Video Generation

Arti Keshari, Sonam Gupta, Sukhendu Das

Published 2022-03-26 (Version 1)

Video generation is a challenging task that requires modeling plausible spatial and temporal dynamics in a video. Inspired by how humans perceive a video by grouping a scene into moving and stationary components, we propose a method that decomposes the task of video generation into the synthesis of foreground, background and motion. Foreground and background together describe the appearance, whereas motion specifies how the foreground moves in a video over time. We propose V3GAN, a novel three-branch generative adversarial network where two branches model foreground and background information, while the third branch models the temporal information without any supervision. The foreground branch is augmented with our novel feature-level masking layer that aids in learning an accurate mask for foreground and background separation. To encourage motion consistency, we further propose a shuffling loss for the video discriminator. Extensive quantitative and qualitative analysis on synthetic as well as real-world benchmark datasets demonstrates that V3GAN outperforms the state-of-the-art methods by a significant margin.
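The abstract describes the foreground branch's feature-level masking layer only at a high level. Below is a minimal, hypothetical sketch of how such a layer could composite foreground and background feature maps via a learned soft mask; the layer structure, names, and shapes are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class FeatureMasking(nn.Module):
    """Hypothetical feature-level masking layer: predicts a soft mask from
    foreground features and blends foreground and background feature maps.
    All details here are assumed, not taken from the V3GAN code."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution producing a single-channel soft mask in [0, 1]
        self.mask_head = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, fg_feat: torch.Tensor, bg_feat: torch.Tensor):
        # fg_feat, bg_feat: (batch, channels, height, width) feature maps
        mask = self.mask_head(fg_feat)                    # (batch, 1, H, W)
        composite = mask * fg_feat + (1 - mask) * bg_feat  # soft blend
        return composite, mask

# Usage: blend 64-channel foreground/background features
layer = FeatureMasking(channels=64)
fg = torch.randn(2, 64, 32, 32)
bg = torch.randn(2, 64, 32, 32)
out, mask = layer(fg, bg)
print(out.shape, mask.shape)  # (2, 64, 32, 32) and (2, 1, 32, 32)
```

The soft mask makes foreground/background separation differentiable, so the generator can learn the split without mask supervision, as the abstract indicates.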

Related articles:
arXiv:1707.04993 [cs.CV] (Published 2017-07-17)
MoCoGAN: Decomposing Motion and Content for Video Generation
arXiv:2410.22979 [cs.CV] (Published 2024-10-30)
LumiSculpt: A Consistency Lighting Control Network for Video Generation
arXiv:2404.13026 [cs.CV] (Published 2024-04-19)
PhysDreamer: Physics-Based Interaction with 3D Objects via Video Generation