arXiv Analytics

arXiv:2205.04641 [cs.LG]

On Causality in Domain Adaptation and Semi-Supervised Learning: an Information-Theoretic Analysis

Xuetong Wu, Mingming Gong, Jonathan H. Manton, Uwe Aickelin, Jingge Zhu

Published 2022-05-10 (Version 1)

Establishing the link between causality and unsupervised domain adaptation (UDA)/semi-supervised learning (SSL) has led to methodological advances in these learning problems in recent years. However, a formal theory that explains the role of causality in the generalization performance of UDA/SSL is still lacking. In this paper, we consider the UDA/SSL setting in which the training data consist of m labeled source instances and n unlabeled target instances, under a parametric probabilistic model. We study the learning performance (e.g., the excess risk) of prediction in the target domain. Specifically, we distinguish two scenarios: the learning problem is called causal learning if the feature is the cause and the label is the effect, and anti-causal learning otherwise. We show that in causal learning, the excess risk depends on the size of the source sample at a rate of O(1/m) only if the labeling distribution remains unchanged between the source and target domains. In anti-causal learning, we show that the unlabeled data dominate the performance, typically at a rate of O(1/n). Our analysis is based on the notion of potential outcome random variables and information theory. These results bring out the relationship between sample size and the hardness of the learning problem under different causal mechanisms.
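The contrast between the two regimes can be illustrated with a toy simulation. The sketch below is not taken from the paper: the symmetric two-component Gaussian mixture, the ground-truth parameter MU_TRUE, and the moment-based estimator are all assumptions chosen for illustration. It shows why, in an anti-causal model (label causes feature), the unlabeled target marginal alone identifies the model parameter, so the squared estimation error, a rough proxy for excess risk, shrinks at about the O(1/n) rate the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical anti-causal model (the label Y causes the feature X):
#   Y ~ Bernoulli(1/2),  X | Y = y ~ N(mu_y, 1),  with mu_0 = -mu, mu_1 = +mu.
# Under this symmetry the unlabeled marginal p(X) already identifies mu,
# since E[X^2] = 1 + mu^2, so unlabeled target data alone is informative.
MU_TRUE = 1.0  # assumed ground-truth parameter for the illustration

def sample_unlabeled(n):
    """Draw n unlabeled target features from the mixture marginal p(X)."""
    y = rng.integers(0, 2, size=n)
    return rng.normal(np.where(y == 1, MU_TRUE, -MU_TRUE), 1.0)

def estimate_mu(x):
    """Moment-based estimate of mu from unlabeled data: mu^2 = E[X^2] - 1."""
    return np.sqrt(max(np.mean(x ** 2) - 1.0, 0.0))

for n in [100, 1_000, 10_000, 100_000]:
    errs = [(estimate_mu(sample_unlabeled(n)) - MU_TRUE) ** 2 for _ in range(200)]
    print(f"n = {n:>7d}   mean squared error = {np.mean(errs):.2e}")
# The mean squared error drops by roughly a factor of 10 per tenfold
# increase in n, i.e. an O(1/n) decay, consistent with unlabeled data
# dominating performance in the anti-causal scenario.
```

In the causal scenario, by contrast, the feature marginal p(X) carries no information about the labeling mechanism p(Y|X), so no amount of unlabeled target data substitutes for the m labeled source instances; this matches the abstract's claim that the O(1/m) rate holds only when the labeling distribution is shared across domains.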

Comments: 26 pages including appendix, 3 figures, 1 table
Categories: cs.LG, cs.IT, math.IT
Related articles:
arXiv:2005.08697 [cs.LG] (Published 2020-05-18)
Information-theoretic analysis for transfer learning
arXiv:2006.03689 [cs.LG] (Published 2020-06-05)
Anomaly Detection with Domain Adaptation
arXiv:1507.00504 [cs.LG] (Published 2015-07-02)
Optimal Transport for Domain Adaptation