arXiv:2308.07633 [cs.CL]

A Survey on Model Compression for Large Language Models

Xunyu Zhu, Jian Li, Yong Liu, Can Ma, Weiping Wang

Published 2023-08-15 (Version 1)

Large Language Models (LLMs) have achieved remarkable success across natural language processing tasks. However, their formidable size and computational demands pose significant challenges for practical deployment, especially in resource-constrained environments. As these challenges become increasingly pressing, model compression has emerged as a pivotal research area for alleviating them. This paper presents a comprehensive survey of model compression techniques tailored specifically to LLMs. Addressing the need for efficient deployment, we examine a range of methodologies, including quantization, pruning, knowledge distillation, and more. For each technique, we highlight recent advances and innovative approaches that contribute to the evolving landscape of LLM research. We also explore benchmarking strategies and evaluation metrics essential for assessing the effectiveness of compressed LLMs. By providing insights into the latest developments and their practical implications, this survey serves as a valuable resource for both researchers and practitioners. As LLMs continue to evolve, it aims to facilitate greater efficiency and real-world applicability, establishing a foundation for future advances in the field.
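To make the three core techniques named in the abstract concrete, the following minimal PyTorch sketch illustrates textbook versions of round-to-nearest INT8 weight quantization, unstructured magnitude pruning, and a Hinton-style distillation loss. These are generic illustrations, not the specific methods surveyed in the paper; all function names are hypothetical.

import torch
import torch.nn.functional as F

def quantize_int8(w: torch.Tensor):
    """Symmetric round-to-nearest INT8 quantization of a weight tensor."""
    scale = w.abs().max() / 127.0                      # map largest magnitude to 127
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale                                    # dequantize as q.float() * scale

def magnitude_prune(w: torch.Tensor, sparsity: float = 0.5):
    """Unstructured magnitude pruning: zero the smallest-|w| entries.
    Assumes 0 < sparsity < 1."""
    k = int(w.numel() * sparsity)
    threshold = w.abs().flatten().kthvalue(k).values   # k-th smallest magnitude
    return w * (w.abs() > threshold)

def distill_loss(student_logits, teacher_logits, T: float = 2.0):
    """Soft-label knowledge distillation: KL divergence between
    temperature-softened teacher and student distributions."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

w = torch.randn(4, 4)
q, s = quantize_int8(w)
print((w - q.float() * s).abs().max())                 # worst-case quantization error
print((magnitude_prune(w) == 0).float().mean())        # achieved sparsity (~0.5)

In practice, the methods covered by such surveys refine these baselines, e.g. calibrating quantization scales on activation statistics or pruning with structured patterns that hardware can exploit.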

Related articles:
arXiv:2307.10169 [cs.CL] (Published 2023-07-19)
Challenges and Applications of Large Language Models
arXiv:2311.13857 [cs.CL] (Published 2023-11-23)
Challenges of Large Language Models for Mental Health Counseling
arXiv:2312.07751 [cs.CL] (Published 2023-11-09)
Large Human Language Models: A Need and the Challenges