arXiv:2005.10243 [cs.CV]

What Makes for Good Views for Contrastive Learning?

Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola

Published 2020-05-20, Version 1

Contrastive learning between multiple views of the data has recently achieved state-of-the-art performance in the field of self-supervised representation learning. Despite its success, the influence of different view choices has been less studied. In this paper, we use empirical analysis to better understand the importance of view selection, and argue that we should reduce the mutual information (MI) between views while keeping task-relevant information intact. To verify this hypothesis, we devise unsupervised and semi-supervised frameworks that learn effective views by aiming to reduce their MI. We also consider data augmentation as a way to reduce MI, and show that increasing data augmentation indeed leads to decreasing MI and improves downstream classification accuracy. As a by-product, we also achieve a new state-of-the-art accuracy on unsupervised pre-training for ImageNet classification ($73\%$ top-1 linear readout with a ResNet-50). In addition, transferring our models to PASCAL VOC object detection and COCO instance segmentation consistently outperforms supervised pre-training. Code: http://github.com/HobbitLong/PyContrast
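For readers unfamiliar with the objective the abstract refers to, the sketch below shows a minimal InfoNCE-style contrastive loss between embeddings of two views of the same batch. It is an illustrative assumption, not the paper's own code (see the linked PyContrast repository for that); the function name `info_nce`, the temperature value, and the random inputs are hypothetical.

```python
# Minimal sketch of a two-view InfoNCE contrastive loss (hypothetical,
# not taken from PyContrast). Matching rows of z1 and z2 are positives;
# all other rows in the batch act as negatives.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.07):
    z1 = F.normalize(z1, dim=1)            # L2-normalize each embedding
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature     # cosine similarities as logits
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Usage: z_a and z_b stand in for encoder outputs of two augmented views
# of the same images; stronger augmentation lowers the MI between views.
z_a = torch.randn(256, 128)
z_b = torch.randn(256, 128)
loss = info_nce(z_a, z_b)
```

The paper's argument concerns how the two views are constructed: augmentations (or learned view generators) should discard information shared between views unless it is relevant to the downstream task, and this sketch only supplies the contrastive objective that such views would be trained with.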

Related articles:
arXiv:2206.01646 [cs.CV] (Published 2022-06-03): Rethinking Positive Sampling for Contrastive Learning with Kernel
arXiv:2008.01334 [cs.CV] (Published 2020-08-04): Context Encoding for Video Retrieval with Contrastive Learning
arXiv:2207.02970 [cs.CV] (Published 2022-07-06): Network Binarization via Contrastive Learning