{ "id": "2207.02970", "version": "v1", "published": "2022-07-06T21:04:53.000Z", "updated": "2022-07-06T21:04:53.000Z", "title": "Network Binarization via Contrastive Learning", "authors": [ "Yuzhang Shang", "Dan Xu", "Ziliang Zong", "Yan Yan" ], "comment": "Accepted to ECCV 2022", "categories": [ "cs.CV", "cs.LG" ], "abstract": "Neural network binarization accelerates deep models by quantizing their weights and activations into 1-bit. However, there is still a huge performance gap between Binary Neural Networks (BNNs) and their full-precision (FP) counterparts. As the quantization error caused by weights binarization has been reduced in earlier works, the activations binarization becomes the major obstacle for further improvement of the accuracy. BNN characterises a unique and interesting structure, where the binary and latent FP activations exist in the same forward pass (\\textit{i.e.} $\\text{Binarize}(\\mathbf{a}_F) = \\mathbf{a}_B$). To mitigate the information degradation caused by the binarization operation from FP to binary activations, we establish a novel contrastive learning framework while training BNNs through the lens of Mutual Information (MI) maximization. MI is introduced as the metric to measure the information shared between binary and FP activations, which assists binarization with contrastive learning. Specifically, the representation ability of the BNNs is greatly strengthened via pulling the positive pairs with binary and FP activations from the same input samples, as well as pushing negative pairs from different samples (the number of negative pairs can be exponentially large). This benefits the downstream tasks, not only classification but also segmentation and depth estimation,~\\textit{etc}. The experimental results show that our method can be implemented as a pile-up module on existing state-of-the-art binarization methods and can remarkably improve the performance over them on CIFAR-10/100 and ImageNet, in addition to the great generalization ability on NYUD-v2.", "revisions": [ { "version": "v1", "updated": "2022-07-06T21:04:53.000Z" } ], "analyses": { "keywords": [ "contrastive learning", "fp activations", "neural network binarization accelerates deep", "network binarization accelerates deep models", "existing state-of-the-art binarization methods" ], "note": { "typesetting": "TeX", "pages": 0, "language": "en", "license": "arXiv", "status": "editable" } } }