
arXiv:2009.06732 [cs.LG]

Efficient Transformers: A Survey

Yi Tay, Mostafa Dehghani, Dara Bahri, Donald Metzler

Published 2020-09-14 (Version 1)

Transformer model architectures have garnered immense interest lately due to their effectiveness across a range of domains like language, vision and reinforcement learning. In the field of natural language processing, for example, Transformers have become an indispensable staple in the modern deep learning stack. Recently, a dizzying number of "X-former" models have been proposed (Reformer, Linformer, Performer, Longformer, to name a few) which improve upon the original Transformer architecture, many of them targeting computational and memory efficiency. With the aim of helping the avid researcher navigate this flurry, this paper characterizes a large and thoughtful selection of recent efficiency-flavored "X-former" models, providing an organized and comprehensive overview of existing work and models across multiple domains.
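
To make the abstract's notion of "computational and memory efficiency" concrete, here is a minimal illustrative sketch, not taken from the survey: standard self-attention materializes an n x n score matrix, while a Linformer-style variant projects keys and values down to k << n positions so the score matrix shrinks to n x k. All names, shapes, and projection choices below are assumptions for illustration only.

```python
# Sketch of the quadratic-attention bottleneck vs. a low-rank (Linformer-style) variant.
# Not an implementation of any specific model from the survey.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def standard_attention(Q, K, V):
    # Score matrix has shape (n, n): quadratic in sequence length n.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def low_rank_attention(Q, K, V, E, F):
    # E, F: (k, n) projections that compress keys and values along the
    # sequence dimension, so the score matrix is only (n, k).
    K_proj, V_proj = E @ K, F @ V                  # each (k, d)
    scores = Q @ K_proj.T / np.sqrt(Q.shape[-1])   # (n, k)
    return softmax(scores) @ V_proj                # (n, d)

n, d, k = 1024, 64, 128
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
E, F = (rng.standard_normal((k, n)) / np.sqrt(n) for _ in range(2))

out_full = standard_attention(Q, K, V)        # score matrix: n*n ~ 1M entries
out_low = low_rank_attention(Q, K, V, E, F)   # score matrix: n*k ~ 131k entries
print(out_full.shape, out_low.shape)          # (1024, 64) (1024, 64)
```

Both paths return an output of the same shape; the low-rank variant trades exact attention for memory that grows linearly in sequence length, which is the kind of trade-off the surveyed "X-former" models explore.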

Related articles:
arXiv:2109.08668 [cs.LG] (Published 2021-09-17)
Primer: Searching for Efficient Transformers for Language Modeling
arXiv:2301.13310 [cs.LG] (Published 2023-01-30)
Alternating Updates for Efficient Transformers
arXiv:2310.02041 [cs.LG] (Published 2023-10-03)
The Inhibitor: ReLU and Addition-Based Attention for Efficient Transformers