arXiv Analytics

arXiv:1910.12417 [cs.LG]

Deep causal representation learning for unsupervised domain adaptation

Raha Moraffah, Kai Shu, Adrienne Raglin, Huan Liu

Published 2019-10-28, Version 1

Studies show that the representations learned by deep neural networks can be transferred to similar prediction tasks in other domains for which we do not have enough labeled data. However, as we move to higher layers in the model, the representations become more task-specific and less generalizable. Recent research on deep domain adaptation proposed to mitigate this problem by forcing the deep model to learn more transferable feature representations across domains. This is achieved by incorporating domain adaptation methods into the deep learning pipeline. The majority of existing models learn transferable feature representations that are highly correlated with the outcome. However, correlations are not always transferable. In this paper, we propose a novel deep causal representation learning framework for unsupervised domain adaptation, in which we learn domain-invariant causal representations of the input from the source domain. We simulate a virtual target domain using reweighted samples from the source domain and estimate the causal effect of features on the outcomes. An extensive comparative study demonstrates the strengths of the proposed model for unsupervised domain adaptation via causal representations.
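The abstract's core idea of simulating a virtual target domain by reweighting source samples can be illustrated with a toy sketch. This is not the paper's algorithm; it is a minimal illustration, under assumed data and a simple exponential-tilting weighting scheme, of how reweighting can weaken a spurious feature correlation so that a weighted regression estimates feature effects under the decorrelated distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy source domain (all variables here are illustrative, not from the paper):
# x1 causes the outcome y; x2 is only spuriously correlated with y via x1.
n = 500
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=n)
y = 2.0 * x1 + 0.1 * rng.normal(size=n)

# Centre the features and measure their correlation in the source domain.
x1c, x2c = x1 - x1.mean(), x2 - x2.mean()
cov_source = np.mean(x1c * x2c)

# Reweight samples to break the x1-x2 dependence, simulating a "virtual
# target domain": samples that drive the correlation get small weights.
w = np.exp(-x1c * x2c)
w /= w.sum()
cov_virtual = np.sum(w * x1c * x2c)  # weighted covariance shrinks toward zero

# Weighted least squares on the reweighted sample then estimates each
# feature's effect on y under the decorrelated (virtual target) distribution.
X = np.column_stack([x1, x2])
beta, *_ = np.linalg.lstsq(X * np.sqrt(w)[:, None], y * np.sqrt(w), rcond=None)
print(cov_source, cov_virtual, beta)
```

In the paper's framework this reweighting idea is embedded in a deep model so that the learned representations, rather than raw features, are evaluated for their causal effect on the outcome.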

Related articles:
arXiv:2007.07695 [cs.LG] (Published 2020-07-15)
Label Propagation with Augmented Anchors: A Simple Semi-Supervised Learning baseline for Unsupervised Domain Adaptation
arXiv:1206.6438 [cs.LG] (Published 2012-06-27)
Information-Theoretical Learning of Discriminative Clusters for Unsupervised Domain Adaptation
arXiv:2303.08720 [cs.LG] (Published 2023-03-15)
Practicality of generalization guarantees for unsupervised domain adaptation with neural networks