arXiv:2104.01303 [cs.LG]

Tight Compression: Compressing CNN Through Fine-Grained Pruning and Weight Permutation for Efficient Implementation

Xizi Chen, Jingyang Zhu, Jingbo Jiang, Chi-Ying Tsui

Published 2021-04-03 (Version 1)

The unstructured sparsity left behind by pruning poses a challenge to the efficient implementation of deep learning models on existing regular architectures such as systolic arrays. Coarse-grained structured pruning, on the other hand, maps well onto regular architectures but tends to incur higher accuracy loss than unstructured pruning when the pruned models are of the same size. In this work, we propose a model compression method based on a novel weight permutation scheme that fully exploits fine-grained weight sparsity in the hardware design. Through permutation, the optimal arrangement of the weight matrix is obtained, and the sparse weight matrix is then compressed into a small, dense format that makes full use of the hardware resources. Two pruning granularities are explored: in addition to unstructured weight pruning, we propose a finer-grained subword-level pruning to further improve the compression. Compared with state-of-the-art works, the matrix compression rate improves significantly, from 5.88x to 14.13x; as a result, throughput and energy efficiency improve by 2.75x and 1.86x, respectively.
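To make the basic ingredients concrete (the paper's permutation search and subword-level pruning are not reproduced here), below is a minimal NumPy sketch of unstructured magnitude pruning followed by an ELLPACK-style packing of the surviving weights into a small dense buffer plus explicit column indices. Everything in it (function names, parameters, the packing layout) is an illustrative assumption, not the paper's actual scheme.

    import numpy as np

    def magnitude_prune(w, sparsity=0.75):
        """Unstructured pruning: zero the `sparsity` fraction of
        smallest-magnitude weights (illustrative, not the paper's method)."""
        k = int(w.size * sparsity)
        if k == 0:
            return w.copy()
        # The k-th smallest |w| serves as the pruning threshold.
        thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
        return np.where(np.abs(w) > thresh, w, 0.0)

    def pack_rows(w_pruned, width):
        """Pack each row's nonzeros into a fixed-width dense buffer with
        column indices (ELLPACK-like); `width` must cover the densest row."""
        rows, _ = w_pruned.shape
        values = np.zeros((rows, width), dtype=w_pruned.dtype)
        cols = np.full((rows, width), -1, dtype=np.int64)  # -1 marks padding
        for r in range(rows):
            nz = np.flatnonzero(w_pruned[r])
            assert nz.size <= width, "row too dense for the chosen width"
            values[r, :nz.size] = w_pruned[r, nz]
            cols[r, :nz.size] = nz
        return values, cols

    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 64)).astype(np.float32)
    w_pruned = magnitude_prune(w, sparsity=0.9)
    width = int((w_pruned != 0).sum(axis=1).max())
    values, cols = pack_rows(w_pruned, width)
    print(f"dense {w.shape} -> packed {values.shape}, "
          f"~{w.size / values.size:.1f}x smaller (index overhead ignored)")

In this naive packing the buffer width is dictated by the densest row; the paper's permutation step exists precisely to rearrange the matrix so that the nonzeros pack into as small and balanced a dense format as possible.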

Comments: Previous conference version: 57th ACM/IEEE Design Automation Conference (DAC), 2020
Categories: cs.LG
Related articles:
arXiv:1705.02583 [cs.LG] (Published 2017-05-07)
A Design Methodology for Efficient Implementation of Deconvolutional Neural Networks on an FPGA
arXiv:2105.01196 [cs.LG] (Published 2021-05-03)
EBIC.JL -- an Efficient Implementation of Evolutionary Biclustering Algorithm in Julia
arXiv:2205.01457 [cs.LG] (Published 2022-05-03)
Efficient implementation of incremental proximal-point methods