arXiv Analytics

arXiv:2106.01656 [cs.CV]

Generalized Domain Adaptation

Yu Mitsuzumi, Go Irie, Daiki Ikami, Takashi Shibata

Published 2021-06-03, Version 1

Many variants of the unsupervised domain adaptation (UDA) problem have been proposed and solved individually. A side effect is that a method that works for one variant is often ineffective for, or not even applicable to, another, which has hindered practical application. In this paper, we give a general representation of UDA problems, named Generalized Domain Adaptation (GDA). GDA covers the major variants as special cases, which allows us to organize them in a comprehensive framework. Moreover, this generalization leads to a new challenging setting where existing methods fail, such as when domain labels are unknown and class labels are only partially given to each domain. We propose a novel approach to the new setting. The key to our approach is self-supervised class-destructive learning, which enables the learning of class-invariant representations and domain-adversarial classifiers without using any domain labels. Extensive experiments on three benchmark datasets demonstrate that our method outperforms state-of-the-art UDA methods in the new setting and is competitive in the existing UDA variants as well.
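The abstract's central component is self-supervised class-destructive learning. The authors' actual implementation is in the repository linked below; the following is only a minimal, hypothetical PyTorch sketch of what such an objective could look like, assuming patch shuffling as the class-destroying transform and a cosine-similarity consistency loss. Both choices are illustrative assumptions, not the method described in the paper.

# Minimal, hypothetical sketch (PyTorch). Assumptions, not the authors' implementation:
# class information is destroyed by randomly shuffling image patches, and the encoder is
# trained to produce similar features for an image and its class-destroyed copy, so that
# the learned representation keeps only class-invariant (domain-related) structure.
import torch
import torch.nn.functional as F

def shuffle_patches(x, patch=8):
    """Randomly permute non-overlapping patches of each image (assumed class-destructive transform)."""
    b, c, h, w = x.shape
    ph, pw = h // patch, w // patch
    # split into patches: (B, C, ph, pw, patch, patch) -> (B, ph*pw, C, patch, patch)
    p = x.unfold(2, patch, patch).unfold(3, patch, patch)
    p = p.permute(0, 2, 3, 1, 4, 5).reshape(b, ph * pw, c, patch, patch)
    idx = torch.stack([torch.randperm(ph * pw) for _ in range(b)])  # per-image permutation
    p = p[torch.arange(b).unsqueeze(1), idx]
    # reassemble the shuffled patches into an image
    p = p.reshape(b, ph, pw, c, patch, patch).permute(0, 3, 1, 4, 2, 5)
    return p.reshape(b, c, ph * patch, pw * patch)

def class_destructive_loss(encoder, x):
    """Cosine-similarity consistency between features of x and its class-destroyed copy."""
    z = F.normalize(encoder(x), dim=1)
    z_destroyed = F.normalize(encoder(shuffle_patches(x)), dim=1)
    return (1.0 - (z * z_destroyed).sum(dim=1)).mean()

In a full pipeline this term would be combined with the usual classification loss; the actual objectives and the domain-adversarial classifiers used in the paper are defined in the official code linked in the Comments field.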

Comments: Accepted by CVPR 2021. Code is available at https://github.com/nttcslab/Generalized-Domain-Adaptation
Categories: cs.CV
Related articles:
arXiv:2006.06500 [cs.CV] (Published 2020-06-11)
Rethinking the Truly Unsupervised Image-to-Image Translation
arXiv:1907.12342 [cs.CV] (Published 2019-07-29)
Meta Learning for Task-Driven Video Summarization
arXiv:1911.07661 [cs.CV] (Published 2019-11-18)
Domain Generalization Using a Mixture of Multiple Latent Domains