arXiv:2104.10157 [cs.CV]

VideoGPT: Video Generation using VQ-VAE and Transformers

Wilson Yan, Yunzhi Zhang, Pieter Abbeel, Aravind Srinivas

Published 2021-04-20 (Version 1)

We present VideoGPT: a conceptually simple architecture for scaling likelihood-based generative modeling to natural videos. VideoGPT uses a VQ-VAE that learns downsampled discrete latent representations of a raw video by employing 3D convolutions and axial self-attention. A simple GPT-like architecture is then used to autoregressively model the discrete latents using spatio-temporal position encodings. Despite the simplicity of its formulation and ease of training, our architecture is able to generate samples competitive with state-of-the-art GAN models for video generation on the BAIR Robot dataset, and to generate high-fidelity natural videos from UCF-101 and the Tumblr GIF dataset (TGIF). We hope our proposed architecture serves as a reproducible reference for a minimalistic implementation of transformer-based video generation models. Samples and code are available at https://wilson1yan.github.io/videogpt/index.html

Comments: Project website: https://wilson1yan.github.io/videogpt/index.html
Categories: cs.CV, cs.LG
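
To make the two-stage pipeline described in the abstract concrete, here is a minimal PyTorch sketch: a 3D-convolutional VQ-VAE that discretizes a raw video into a grid of codebook indices, followed by a GPT-like transformer that models those indices autoregressively with summed temporal and spatial position embeddings. All module names, layer sizes, and the codebook size are illustrative assumptions rather than the paper's configuration, and the axial self-attention blocks of the actual VQ-VAE are omitted for brevity.

```python
# Minimal sketch of the VideoGPT two-stage idea (illustrative, not the
# paper's exact configuration; axial attention omitted).
import torch
import torch.nn as nn

class VQVAE3D(nn.Module):
    """Stage 1: 3D-conv encoder/decoder with a nearest-neighbour codebook."""
    def __init__(self, codebook_size=1024, dim=64):
        super().__init__()
        # Downsample (T, H, W) by 2x each with a strided 3D convolution.
        self.encoder = nn.Sequential(
            nn.Conv3d(3, dim, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(dim, dim, kernel_size=3, stride=1, padding=1),
        )
        self.codebook = nn.Embedding(codebook_size, dim)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(dim, dim, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(dim, 3, kernel_size=3, stride=1, padding=1),
        )

    def quantize(self, z):
        # z: (B, C, T, H, W) -> nearest codebook index per latent position.
        B, C, T, H, W = z.shape
        flat = z.permute(0, 2, 3, 4, 1).reshape(-1, C)      # (B*T*H*W, C)
        dists = torch.cdist(flat, self.codebook.weight)     # L2 to each code
        idx = dists.argmin(dim=1).view(B, T, H, W)          # discrete latents
        zq = self.codebook(idx).permute(0, 4, 1, 2, 3)      # (B, C, T, H, W)
        return idx, zq

    def forward(self, video):
        z = self.encoder(video)
        idx, zq = self.quantize(z)
        # Straight-through estimator so gradients flow to the encoder.
        zq = z + (zq - z).detach()
        return self.decoder(zq), idx

class LatentGPT(nn.Module):
    """Stage 2: GPT-like prior over flattened latent indices, with separate
    temporal and spatial position embeddings summed per position."""
    def __init__(self, codebook_size=1024, dim=256, t=8, h=16, w=16):
        super().__init__()
        self.tok = nn.Embedding(codebook_size, dim)
        self.pos_t = nn.Parameter(torch.zeros(t, dim))       # temporal
        self.pos_s = nn.Parameter(torch.zeros(h * w, dim))   # spatial
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(dim, codebook_size)

    def forward(self, idx):
        # idx: (B, T, H, W) discrete latents produced by the VQ-VAE.
        B = idx.shape[0]
        x = self.tok(idx.view(B, -1))                        # (B, T*H*W, dim)
        pos = (self.pos_t[:, None, :] + self.pos_s[None, :, :])
        x = x + pos.view(-1, x.shape[-1])                    # spatio-temporal
        L = x.shape[1]
        # Causal mask: True above the diagonal blocks attention to the future.
        mask = torch.triu(torch.ones(L, L, dtype=torch.bool, device=x.device), 1)
        # Next-token logits at every position (inputs would be shifted by one
        # relative to targets during training).
        return self.head(self.blocks(x, mask=mask))
```

At sampling time, one would draw latent indices from LatentGPT one token at a time and decode the completed index grid with the VQ-VAE decoder to produce a video.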