arXiv:2007.15651 [cs.CV]

Contrastive Learning for Unpaired Image-to-Image Translation

Taesung Park, Alexei A. Efros, Richard Zhang, Jun-Yan Zhu

Published 2020-07-30 (Version 1)

In image-to-image translation, each patch in the output should reflect the content of the corresponding patch in the input, independent of domain. We propose a straightforward method for doing so -- maximizing mutual information between the two, using a framework based on contrastive learning. The method encourages two elements (corresponding patches) to map to a similar point in a learned feature space, relative to other elements (other patches) in the dataset, referred to as negatives. We explore several critical design choices for making contrastive learning effective in the image synthesis setting. Notably, we use a multilayer, patch-based approach, rather than operate on entire images. Furthermore, we draw negatives from within the input image itself, rather than from the rest of the dataset. We demonstrate that our framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time. In addition, our method can even be extended to the training setting where each "domain" is only a single image.
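
As a rough illustration of the mechanism the abstract describes, the sketch below computes an InfoNCE-style loss over corresponding patch features, treating the other patches of the same image as negatives. This is a minimal PyTorch sketch, not the authors' implementation; the function name patch_nce_loss, the tensor shapes, and the temperature value are illustrative assumptions.

import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_tgt, temperature=0.07):
    # feat_src, feat_tgt: (num_patches, dim) features extracted at the
    # same spatial locations of the input and the translated output.
    feat_src = F.normalize(feat_src, dim=1)
    feat_tgt = F.normalize(feat_tgt, dim=1)
    # Similarity of every output patch to every input patch; negatives
    # come from within the same image, not the rest of the dataset.
    logits = feat_tgt @ feat_src.t() / temperature  # (N, N)
    # Diagonal entries are the positives (corresponding patches).
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

Applying a loss of this form at features from several encoder layers gives the multilayer, patch-based variant the abstract highlights.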

Comments: ECCV 2020. Please visit https://taesungp.github.io/ContrastiveUnpairedTranslation/ for introduction videos and more
Categories: cs.CV, cs.LG
Related articles:
arXiv:2208.06412 [cs.CV] (Published 2022-08-12)
Contrastive Learning for Object Detection
arXiv:2005.10243 [cs.CV] (Published 2020-05-20)
What Makes for Good Views for Contrastive Learning?
arXiv:2008.01334 [cs.CV] (Published 2020-08-04)
Context Encoding for Video Retrieval with Contrastive Learning