arXiv Analytics

arXiv:1806.08342 [cs.LG]

Quantizing deep convolutional networks for efficient inference: A whitepaper

Raghuraman Krishnamoorthi

Published 2018-06-21 (Version 1)

We present an overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations. Per-channel quantization of weights and per-layer quantization of activations to 8 bits of precision post-training produces classification accuracies within 2% of floating point networks for a wide variety of CNN architectures. Model sizes can be reduced by a factor of 4 by quantizing weights to 8 bits, even when 8-bit arithmetic is not supported. This can be achieved with simple post-training quantization of weights.

We benchmark latencies of quantized networks on CPUs and DSPs and observe a speedup of 2x-3x for quantized implementations compared to floating point on CPUs. Speedups of up to 10x are observed on specialized processors with fixed point SIMD capabilities, like the Qualcomm QDSPs with HVX. Quantization-aware training can provide further improvements, reducing the gap to floating point to 1% at 8-bit precision. Quantization-aware training also allows for reducing the precision of weights to four bits, with accuracy losses ranging from 2% to 10%, the larger drops occurring for smaller networks.

We introduce tools in TensorFlow and TensorFlowLite for quantizing convolutional networks and review best practices for quantization-aware training to obtain high accuracy with quantized weights and activations. We recommend per-channel quantization of weights and per-layer quantization of activations as the preferred quantization scheme for hardware acceleration and kernel optimization. We also propose that future processors and hardware accelerators for optimized inference support precisions of 4, 8 and 16 bits.
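The recommended scheme, per-channel quantization of weights, is straightforward to sketch. The snippet below is a minimal NumPy illustration of symmetric per-channel 8-bit weight quantization applied post-training; it is a simplified sketch of the general idea, not the TensorFlow or TensorFlowLite implementation described in the paper, and the function names are illustrative.

```python
import numpy as np

def quantize_per_channel_symmetric(weights, num_bits=8, channel_axis=0):
    """Symmetric per-channel post-training quantization of a weight tensor.

    Each output channel gets its own scale; values are rounded to signed
    integers in [-(2^(b-1) - 1), 2^(b-1) - 1]. The returned integer tensor
    is in channel-first layout.
    """
    qmax = 2 ** (num_bits - 1) - 1                  # 127 for 8 bits
    w = np.moveaxis(weights, channel_axis, 0)        # channels first
    flat = w.reshape(w.shape[0], -1)                 # one row per channel
    scales = np.abs(flat).max(axis=1) / qmax         # per-channel scale
    scales = np.where(scales == 0.0, 1.0, scales)    # guard all-zero channels
    q = np.clip(np.round(flat / scales[:, None]), -qmax, qmax).astype(np.int8)
    return q.reshape(w.shape), scales

def dequantize(q, scales):
    """Map int8 values back to approximate floats using per-channel scales."""
    shape = [-1] + [1] * (q.ndim - 1)
    return q.astype(np.float32) * scales.reshape(shape)

# Quantize a small conv kernel (output channels on axis 0) and inspect the error.
w = np.random.randn(8, 3, 3, 3).astype(np.float32)
q, s = quantize_per_channel_symmetric(w)
print("max abs reconstruction error:", np.abs(w - dequantize(q, s)).max())
```

Per-layer quantization of activations follows the same mapping, but with a single scale (and typically a zero point, for asymmetric ranges) computed for the whole tensor from calibration statistics rather than one scale per channel.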

Related articles:
arXiv:1809.09244 [cs.LG] (Published 2018-09-24)
No Multiplication? No Floating Point? No Problem! Training Networks for Efficient Inference
arXiv:1902.06822 [cs.LG] (Published 2019-02-18)
Low-bit Quantization of Neural Networks for Efficient Inference
arXiv:1905.12334 [cs.LG] (Published 2019-05-29)
Mixed Precision Training With 8-bit Floating Point