arXiv Analytics

arXiv:2205.09113 [cs.CV]

Masked Autoencoders As Spatiotemporal Learners

Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, Kaiming He

Published 2022-05-18, Version 1

This paper studies a conceptually simple extension of Masked Autoencoders (MAE) to spatiotemporal representation learning from videos. We randomly mask out spacetime patches in videos and learn an autoencoder to reconstruct them in pixels. Interestingly, we show that our MAE method can learn strong representations with almost no inductive bias on spacetime (except for patch and positional embeddings), and spacetime-agnostic random masking performs the best. We observe that the optimal masking ratio is as high as 90% (vs. 75% on images), supporting the hypothesis that this ratio is related to the information redundancy of the data. A high masking ratio leads to a large speedup, e.g., > 4x in wall-clock time or even more. We report competitive results on several challenging video datasets using vanilla Vision Transformers. We observe that MAE can outperform supervised pre-training by large margins. We further report encouraging results of training on real-world, uncurated Instagram data. Our study suggests that the general framework of masked autoencoding (BERT, MAE, etc.) can be a unified methodology for representation learning with minimal domain knowledge.
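The core operation the abstract describes is spacetime-agnostic random masking: videos are split into spacetime patches and a uniformly random 90% of them are dropped before encoding. Below is a minimal sketch of that step in PyTorch, assuming 2x16x16 spacetime patches and a 90% masking ratio; the `patchify` and `random_masking` helpers, tensor shapes, and framework choice are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of spacetime-agnostic random masking (not the official code).
import torch


def patchify(video, patch_t=2, patch_h=16, patch_w=16):
    """Split a video (B, C, T, H, W) into flattened spacetime patches (B, N, D)."""
    B, C, T, H, W = video.shape
    video = video.reshape(B, C, T // patch_t, patch_t,
                          H // patch_h, patch_h, W // patch_w, patch_w)
    # Collapse the (T/pt, H/ph, W/pw) grid into one patch axis N; flatten pixels into D.
    patches = video.permute(0, 2, 4, 6, 1, 3, 5, 7)
    return patches.reshape(B, -1, C * patch_t * patch_h * patch_w)


def random_masking(patches, mask_ratio=0.9):
    """Keep a random (1 - mask_ratio) subset of patches, uniformly over spacetime."""
    B, N, D = patches.shape
    num_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=patches.device)   # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)          # random permutation of patches
    ids_keep = ids_shuffle[:, :num_keep]                # indices of visible patches
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, N, device=patches.device)      # 1 = masked, 0 = visible
    mask.scatter_(1, ids_keep, 0)
    return visible, mask, ids_shuffle


if __name__ == "__main__":
    video = torch.randn(2, 3, 16, 224, 224)             # (B, C, T, H, W)
    patches = patchify(video)                            # (2, 1568, 1536)
    visible, mask, _ = random_masking(patches, 0.9)
    print(visible.shape, int(mask.sum(dim=1)[0]))        # ~10% of patches remain visible
```

In this sketch only the visible patches would be fed to the encoder, which is where the reported wall-clock speedup at a 90% masking ratio comes from; the decoder and pixel-reconstruction loss are omitted here.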

Related articles:
arXiv:2209.07522 [cs.CV] (Published 2022-09-15)
Test-Time Training with Masked Autoencoders
arXiv:2205.11090 [cs.CV] (Published 2022-05-23)
FaceMAE: Privacy-Preserving Face Recognition via Masked Autoencoders
Kai Wang et al.
arXiv:2401.14391 [cs.CV] (Published 2024-01-25)
Rethinking Patch Dependence for Masked Autoencoders
Letian Fu et al.