arXiv:2008.05659 [cs.CV]

What Should Not Be Contrastive in Contrastive Learning

Tete Xiao, Xiaolong Wang, Alexei A. Efros, Trevor Darrell

Published 2020-08-13 (Version 1)

Recent self-supervised contrastive methods have been able to produce impressive transferable visual representations by learning to be invariant to different data augmentations. However, these methods implicitly assume a particular set of representational invariances (e.g., invariance to color), and can perform poorly when a downstream task violates this assumption (e.g., distinguishing red vs. yellow cars). We introduce a contrastive learning framework which does not require prior knowledge of specific, task-dependent invariances. Our model learns to capture varying and invariant factors for visual representations by constructing separate embedding spaces, each of which is invariant to all but one augmentation. We use a multi-head network with a shared backbone which captures information across each augmentation and alone outperforms all baselines on downstream tasks. We further find that the concatenation of the invariant and varying spaces performs best across all tasks we investigate, including coarse-grained, fine-grained, and few-shot downstream classification tasks, and various data corruptions.
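The abstract describes a multi-head architecture: a shared backbone whose features feed several projection heads, each defining one embedding space — one space per augmentation (sensitive to that augmentation, invariant to the rest) plus a shared invariant space. Below is a minimal PyTorch-style sketch of that structure only; the names (MultiHeadContrastive, num_aug_spaces, dim) are illustrative assumptions rather than code from the paper, and the per-head contrastive loss wiring is omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torchvision.models as models

    class MultiHeadContrastive(nn.Module):
        """Sketch: shared encoder + one projection head per embedding
        space. Head 0 would serve as the all-invariant space; head k
        (k >= 1) as the space that stays sensitive to augmentation k."""

        def __init__(self, num_aug_spaces=3, dim=128):
            super().__init__()
            backbone = models.resnet50(weights=None)  # no pretraining
            feat_dim = backbone.fc.in_features
            backbone.fc = nn.Identity()  # keep pooled 2048-d features
            self.backbone = backbone
            # num_aug_spaces augmentation-specific heads + 1 invariant head.
            self.heads = nn.ModuleList([
                nn.Sequential(nn.Linear(feat_dim, feat_dim),
                              nn.ReLU(inplace=True),
                              nn.Linear(feat_dim, dim))
                for _ in range(num_aug_spaces + 1)
            ])

        def forward(self, x):
            h = self.backbone(x)
            # One L2-normalized embedding per space.
            return [F.normalize(head(h), dim=1) for head in self.heads]

In this sketch, the concatenated representation the abstract reports performing best across tasks would correspond to joining the per-space embeddings, e.g. torch.cat(model(x), dim=1).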

Related articles:
arXiv:2012.09071 [cs.CV] (Published 2020-12-16)
Joint Generative and Contrastive Learning for Unsupervised Person Re-identification
arXiv:2106.03719 [cs.CV] (Published 2021-06-07)
Incremental False Negative Detection for Contrastive Learning
arXiv:2206.12925 [cs.CV] (Published 2022-06-26)
Vision Transformer for Contrastive Clustering