arXiv Analytics

arXiv:1812.00893 [cs.CV]

Domain Alignment with Triplets

Weijian Deng, Liang Zheng, Jianbin Jiao

Published 2018-12-03 (Version 1)

Deep domain adaptation methods can reduce the distribution discrepancy by learning domain-invariant embeddings. However, these methods focus only on aligning the data distributions as a whole, without considering the class-level relations between source and target images. Thus, the target embedding of a bird might be aligned to the source embedding of an airplane. This semantic misalignment can directly degrade classifier performance on the target dataset. To alleviate this problem, we present a similarity constrained alignment (SCA) method for unsupervised domain adaptation. When aligning the distributions in the embedding space, SCA enforces a similarity-preserving constraint to maintain class-level relations between the source and target images, i.e., if a source image and a target image share the same class label, their corresponding embeddings should be aligned nearby, and vice versa. In the absence of target labels, we assign pseudo labels to target images. Given labeled source images and pseudo-labeled target images, the similarity-preserving constraint can be implemented by minimizing the triplet loss. With the joint supervision of a domain alignment loss and the similarity-preserving constraint, we train a network to obtain domain-invariant embeddings with two critical characteristics: intra-class compactness and inter-class separability. Extensive experiments conducted on two datasets demonstrate the effectiveness of SCA.
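The similarity-preserving constraint described above can be sketched with the standard margin-based triplet loss: an anchor embedding is pulled toward a same-class (positive) embedding and pushed away from a different-class (negative) embedding. The sketch below is illustrative only, assuming squared-Euclidean distances and a hypothetical margin value; it is not the paper's exact formulation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Margin-based triplet loss on embedding vectors.

    Encourages d(anchor, positive) + margin <= d(anchor, negative),
    i.e., same-class embeddings end up closer than different-class ones.
    Distances are squared Euclidean; `margin` is an illustrative choice.
    """
    d_pos = np.sum((anchor - positive) ** 2)  # same (pseudo) class
    d_neg = np.sum((anchor - negative) ** 2)  # different class
    return max(0.0, d_pos - d_neg + margin)

# Toy example: a labeled source embedding (anchor), a pseudo-labeled
# target embedding of the same class (positive), and one of another
# class (negative).
anchor   = np.array([1.0, 0.0])
positive = np.array([1.0, 0.1])   # nearby -> constraint satisfied
negative = np.array([0.0, 1.0])   # far away

aligned_loss    = triplet_loss(anchor, positive, negative)  # 0.0
misaligned_loss = triplet_loss(anchor, negative, positive)  # large
```

In the setting of the abstract, the anchor would be a source embedding, the positive a target embedding sharing its pseudo label, and the negative a target or source embedding of a different class; minimizing this loss yields the intra-class compactness and inter-class separability the authors target.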

Comments: 10 pages; this version is not fully edited and will be updated soon
Categories: cs.CV