arXiv:2303.08302 [cs.LG]

A Comprehensive Study on Post-Training Quantization for Large Language Models

Zhewei Yao, Cheng Li, Xiaoxia Wu, Stephen Youn, Yuxiong He

Published 2023-03-15 (Version 1)

Post-training quantization (PTQ) has recently been shown to be a promising method for reducing the memory consumption and/or compute cost of large language models. However, a comprehensive study of the effects of different quantization schemes, different model families, different PTQ methods, different quantization bit precisions, etc., is still missing. In this work, we provide an extensive study of these components over tens of thousands of zero-shot experiments. Our results show that (1) fine-grained quantization and PTQ methods (instead of naive round-to-nearest quantization) are necessary to achieve good accuracy, and (2) higher bits (e.g., 5 bits) with coarse-grained quantization are more powerful than lower bits (e.g., 4 bits) with very fine-grained quantization (whose effective bit count is similar to 5 bits). We also present recommendations on how to utilize quantization for LLMs of different sizes, and suggest future opportunities and systems work that remain unresolved in this work.
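The contrast between naive round-to-nearest (RTN) quantization and fine-grained (group-wise) quantization, as well as the notion of effective bits, can be made concrete with a short sketch. The snippet below is a minimal illustration under assumed choices (NumPy, symmetric quantization, a hypothetical group size of 32, FP16 scales); it is not the paper's implementation.

import numpy as np

def rtn_quantize(w, num_bits=4):
    """Naive round-to-nearest: one scale for the whole tensor."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 7 for 4-bit
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                            # dequantized weights

def groupwise_quantize(w, num_bits=4, group_size=32):
    """Fine-grained quantization: one scale per group of `group_size` weights."""
    flat = w.reshape(-1, group_size)
    qmax = 2 ** (num_bits - 1) - 1
    scales = np.abs(flat).max(axis=1, keepdims=True) / qmax
    q = np.clip(np.round(flat / scales), -qmax - 1, qmax)
    return (q * scales).reshape(w.shape)

# Effective bits: 4-bit weights plus one FP16 scale per 32 weights cost
# 4 + 16/32 = 4.5 bits per weight, i.e. close to a coarse-grained 5-bit scheme.
w = np.random.randn(128, 128).astype(np.float32)
print("RTN error:       ", np.abs(w - rtn_quantize(w)).mean())
print("Group-wise error:", np.abs(w - groupwise_quantize(w)).mean())

Smaller groups reduce quantization error but add per-group scale storage, which is what pushes the effective bit count above the nominal bit width.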

Related articles:
arXiv:2306.05052 [cs.LG] (Published 2023-06-08)
Interpretable Medical Diagnostics with Structured Data Extraction by Large Language Models
arXiv:2306.03438 [cs.LG] (Published 2023-06-06)
Large Language Models of Code Fail at Completing Code with Potential Bugs
arXiv:2305.05176 [cs.LG] (Published 2023-05-09)
FrugalGPT: How to Use Large Language Models While Reducing Cost and Improving Performance