arXiv:2003.04858 [cs.CV]

Unpaired Image-to-Image Translation using Adversarial Consistency Loss

Yihao Zhao, Ruihai Wu, Hao Dong

Published 2020-03-10, Version 1

Unpaired image-to-image translation is a class of vision problems whose goal is to learn a mapping between image domains from unpaired training data. Cycle-consistency loss is a widely used constraint for such problems. However, because it imposes a strict pixel-level constraint, it cannot perform geometric changes, remove large objects, or ignore irrelevant texture. In this paper, we propose a novel adversarial-consistency loss for image-to-image translation. This loss does not require a translated image to map back exactly to its specific source image; instead, it encourages translated images to retain the important features of their sources, overcoming the drawbacks of cycle-consistency loss noted above. Our method achieves state-of-the-art results on three challenging tasks: glasses removal, male-to-female translation, and selfie-to-anime translation.
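To make the abstract's critique concrete, here is a minimal NumPy sketch of the pixel-level cycle-consistency constraint it refers to: an L1 penalty between a source image and its reconstruction `G_Y2X(G_X2Y(x))`. The example shows that even a purely geometric change (a horizontal shift) that preserves all image content incurs a large penalty, which is why cycle-consistency resists geometric changes. The paper's adversarial-consistency loss replaces this pixel-level term with a discriminator-based one, which is not implemented here; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def cycle_consistency_loss(x, x_reconstructed):
    # Pixel-level L1 loss: forces the reconstruction to match x exactly,
    # pixel by pixel. This is the "strict pixel-level constraint" the
    # abstract critiques.
    return np.mean(np.abs(x - x_reconstructed))

rng = np.random.default_rng(0)
x = rng.random((64, 64, 3))            # a toy "image"
x_shifted = np.roll(x, shift=8, axis=1)  # a purely geometric change

# A perfect reconstruction has zero loss...
print(cycle_consistency_loss(x, x))          # 0.0
# ...but the shifted image, which keeps every feature of x, is
# penalized heavily (about 1/3 for i.i.d. uniform pixels).
print(cycle_consistency_loss(x, x_shifted))
```

An adversarial-consistency loss would instead ask a discriminator whether the translated image still carries the source's important features, leaving room for shifts and shape changes like the one above.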

Related articles: Most relevant | Search more
arXiv:2007.15651 [cs.CV] (Published 2020-07-30)
Contrastive Learning for Unpaired Image-to-Image Translation
arXiv:2004.00161 [cs.CV] (Published 2020-03-31)
Towards Lifelong Self-Supervision For Unpaired Image-to-Image Translation
arXiv:1903.04294 [cs.CV] (Published 2019-03-08)
Mix and match networks: multi-domain alignment for unpaired image-to-image translation