arXiv Analytics


arXiv:2308.07209 [cs.LG]

Unified Data-Free Compression: Pruning and Quantization without Fine-Tuning

Shipeng Bai, Jun Chen, Xintian Shen, Yixuan Qian, Yong Liu

Published 2023-08-14 | Version 1

Structured pruning and quantization are promising approaches for reducing the inference time and memory footprint of neural networks. However, most existing methods require the original training dataset to fine-tune the model, which not only incurs heavy resource consumption but is also infeasible for applications with sensitive or proprietary data due to privacy and security concerns. A few data-free methods have therefore been proposed to address this problem, but they perform data-free pruning and quantization separately and thus fail to exploit the complementarity of the two. In this paper, we propose a novel framework named Unified Data-Free Compression (UDFC), which performs pruning and quantization simultaneously without any data or fine-tuning. Specifically, UDFC starts from the assumption that the partial information of a damaged (e.g., pruned or quantized) channel can be preserved by a linear combination of other channels, and derives from this assumption a reconstruction form that restores the information lost to compression. Finally, we formulate the reconstruction error between the original network and its compressed counterpart and theoretically deduce the closed-form solution. We evaluate UDFC on the large-scale image classification task and obtain significant improvements across various network architectures and compression methods. For example, with a 30% pruning ratio and 6-bit quantization on ResNet-34, we achieve a 20.54% accuracy improvement on the ImageNet dataset over the SOTA method.
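To make the reconstruction idea concrete, below is a minimal NumPy sketch under simplifying assumptions: two consecutive bias-free fully connected layers and a single pruned output channel. The function prune_with_reconstruction and its shapes are hypothetical illustrations of the linear-combination assumption and its least-squares closed form, not the authors' implementation.

import numpy as np

def prune_with_reconstruction(W, W_next, i):
    # W:      (C, d)  weights of the layer whose output channel i is pruned
    # W_next: (m, C)  weights of the following layer, consuming W's outputs
    # Hypothetical sketch of the assumption: the pruned channel's weights
    # w_i are approximated as a linear combination of the kept channels,
    # with coefficients a chosen by least squares on the weights alone,
    # so no training data is required.
    keep = [j for j in range(W.shape[0]) if j != i]
    K = W[keep]                                   # (C-1, d) kept channels
    # Closed-form solution of argmin_a || K.T @ a - w_i ||^2
    a, *_ = np.linalg.lstsq(K.T, W[i], rcond=None)
    # Fold the pruned channel's contribution into the next layer:
    # its input column i is redistributed onto the kept columns, scaled by a.
    W_next_new = W_next[:, keep] + np.outer(W_next[:, i], a)
    return K, W_next_new

In the paper's setting, the same reconstruction form is derived for quantization error as well, which is what allows pruning and quantization to be handled in one unified, fine-tuning-free framework.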

Related articles:
arXiv:2405.20935 [cs.LG] (Published 2024-05-31)
Effective Interplay between Sparsity and Quantization: From Theory to Practice
arXiv:2105.02221 [cs.LG] (Published 2021-05-05)
How Fine-Tuning Allows for Effective Meta-Learning
arXiv:2104.12528 [cs.LG] (Published 2021-04-26)
Spatio-Temporal Pruning and Quantization for Low-latency Spiking Neural Networks