arXiv Analytics

arXiv:1906.04721 [cs.LG]

Data-Free Quantization through Weight Equalization and Bias Correction

Markus Nagel, Mart van Baalen, Tijmen Blankevoort, Max Welling

Published 2019-06-11 (Version 1)

We introduce a data-free quantization method for deep neural networks that does not require fine-tuning or hyperparameter selection. It achieves near-original model performance on common computer vision architectures and tasks. 8-bit fixed-point quantization is essential for efficient inference in modern deep learning hardware architectures. However, quantizing models to run in 8-bit is a non-trivial task, frequently leading to either significant performance reduction or engineering time spent on training a network to be amenable to quantization. Our approach relies on equalizing the weight ranges in the network by making use of a scale-equivariance property of activation functions. In addition, the method corrects biases in the error that are introduced during quantization. This improves quantized-model accuracy, and the method can be applied ubiquitously to almost any model with a straightforward API call. For common architectures, such as the MobileNet family, we achieve state-of-the-art quantized model performance. We further show that the method also extends to other computer vision architectures and tasks such as semantic segmentation and object detection.
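To make the scale-equivariance idea concrete, the sketch below shows cross-layer weight equalization for two consecutive fully-connected layers separated by a ReLU; it is a minimal illustration under our own notation (the function `equalize_pair` and its per-channel scaling rule are an assumption for exposition, not the authors' released API, and the paper applies the idea per convolutional channel across a whole network).

```python
import numpy as np

def equalize_pair(W1, b1, W2, eps=1e-8):
    """Equalize weight ranges of two consecutive linear layers with a ReLU
    in between (illustrative sketch). The composed function is unchanged
    because relu(s * z) = s * relu(z) for any s > 0 (scale equivariance)."""
    r1 = np.max(np.abs(W1), axis=1)        # per-output-channel range of layer 1
    r2 = np.max(np.abs(W2), axis=0)        # per-input-channel range of layer 2
    s = np.sqrt(r1 * r2) / (r2 + eps)      # s_i = sqrt(r1_i / r2_i) equalizes ranges
    W1_eq = W1 / s[:, None]                # scale row i of W1 by 1 / s_i
    b1_eq = b1 / s                         # bias follows its output channel
    W2_eq = W2 * s[None, :]                # scale column i of W2 by s_i
    return W1_eq, b1_eq, W2_eq

# Quick check that equalization leaves the network function unchanged.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)
x = rng.normal(size=8)
relu = lambda z: np.maximum(z, 0.0)
W1e, b1e, W2e = equalize_pair(W1, b1, W2)
y_ref = W2 @ relu(W1 @ x + b1) + b2
y_eq = W2e @ relu(W1e @ x + b1e) + b2
assert np.allclose(y_ref, y_eq)
```

After equalization, each channel's weight range in the two layers is balanced to roughly the geometric mean of the original ranges, so per-tensor 8-bit quantization wastes less precision on outlier channels.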

Related articles:
arXiv:2011.09899 [cs.LG] (Published 2020-11-19)
Learning in School: Multi-teacher Knowledge Inversion for Data-Free Quantization
arXiv:2306.00280 [cs.LG] (Published 2023-06-01)
Towards Bias Correction of FedAvg over Nonuniform and Time-Varying Communications
arXiv:2203.16470 [cs.LG] (Published 2022-03-30)
Remember to correct the bias when using deep learning for regression!