arXiv:2211.10438 [cs.CL]

SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models

Guangxuan Xiao, Ji Lin, Mickaël Seznec, Julien Demouth, Song Han

Published 2022-11-18 (Version 1)

Large language models (LLMs) show excellent performance but are compute- and memory-intensive. Quantization can reduce memory and accelerate inference. However, for LLMs beyond 100 billion parameters, existing methods cannot maintain accuracy or do not run efficiently on hardware. We propose SmoothQuant, a training-free, accuracy-preserving, and general-purpose post-training quantization (PTQ) solution to enable 8-bit weight, 8-bit activation (W8A8) quantization for LLMs that can be implemented efficiently. We observe that systematic outliers appear at fixed activation channels. Based on the fact that weights are easy to quantize while activations are not, SmoothQuant smooths the activation outliers by migrating the quantization difficulty from activations to weights with a mathematically equivalent transformation. SmoothQuant enables an INT8 quantization of both weights and activations for all the GEMMs in LLMs, including OPT-175B, BLOOM-176B and GLM-130B. SmoothQuant has better hardware efficiency than existing techniques using mixed-precision activation quantization or weight-only quantization. We demonstrate up to 1.56x speedup and 2x memory reduction for LLMs with negligible loss in accuracy. Thanks to the hardware-friendly design, we integrate SmoothQuant into FasterTransformer, a state-of-the-art LLM serving framework, and achieve faster inference speed with half the number of GPUs compared to FP16. Our work offers a turn-key solution that reduces hardware costs and democratizes LLMs. Code will be released at: https://github.com/mit-han-lab/smoothquant.
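
To make the "migrating the quantization difficulty" step concrete, below is a minimal NumPy sketch of the smoothing transformation. The per-channel factor s_j = max|X_j|^alpha / max|W_j|^(1-alpha) (alpha = 0.5 by default) follows the paper's formulation; the function names, tensor layouts, and calibration setup here are illustrative assumptions, not the released implementation.

```python
import numpy as np

def smooth_scales(act, weight, alpha=0.5, eps=1e-8):
    """Per-input-channel smoothing factors s_j = max|X_j|^alpha / max|W_j|^(1-alpha).

    act:    (num_tokens, in_features) calibration activations X
    weight: (in_features, out_features) weight matrix W
    alpha balances how much quantization difficulty moves from
    activations to weights (the paper uses 0.5 as a default).
    """
    act_range = np.abs(act).max(axis=0)      # per-channel activation magnitude
    w_range = np.abs(weight).max(axis=1)     # per-channel weight magnitude
    s = act_range ** alpha / np.maximum(w_range, eps) ** (1.0 - alpha)
    return np.maximum(s, eps)

def apply_smoothing(act, weight, s):
    """Mathematically equivalent transform: (X diag(s)^-1)(diag(s) W) == X W."""
    act_smooth = act / s                     # outlier channels are scaled down
    weight_smooth = weight * s[:, None]      # and the scale is folded into W
    return act_smooth, weight_smooth

# Tiny check that the transform preserves the GEMM output.
X = np.random.randn(16, 8)
W = np.random.randn(8, 4)
Xs, Ws = apply_smoothing(X, W, smooth_scales(X, W))
assert np.allclose(X @ W, Xs @ Ws)
```

After this offline transform, both the smoothed activations and the rescaled weights have tamer per-channel ranges, so plain per-tensor INT8 quantization can be applied to every GEMM input without mixed-precision handling of outlier channels.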

Comments: The first two authors contributed equally to this work
Categories: cs.CL, cs.AI, cs.LG