arXiv Analytics


arXiv:2305.16440 [cs.LG]

Representation Transfer Learning via Multiple Pre-trained models for Linear Regression

Navjot Singh, Suhas Diggavi

Published 2023-05-25 (Version 1)

In this paper, we consider the problem of learning a linear regression model on a data domain of interest (target) given only a few samples. To aid learning, we are provided with a set of pre-trained regression models trained on potentially different data domains (sources). Assuming a shared representation structure for the data-generating linear models at the source and target domains, we propose a representation-transfer-based learning method for constructing the target model. The proposed scheme comprises two phases: (i) utilizing the different source representations to construct a representation that is adapted to the target data, and (ii) using the obtained model as an initialization for a fine-tuning procedure that re-trains the entire (over-parameterized) regression model on the target data. For each phase of the training method, we provide excess risk bounds for the learned model relative to the true data-generating target model. The derived bounds show a gain in sample complexity for our proposed method, compared to the baseline of not leveraging source representations, when achieving the same excess risk, thereby theoretically demonstrating the effectiveness of transfer learning for linear regression.
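The two-phase scheme described in the abstract can be illustrated with a small synthetic sketch. This is not the authors' implementation; the dimensions, the least-squares head-fitting in phase (i), the best-fit selection rule among sources, and the gradient-descent fine-tuning in phase (ii) are all illustrative assumptions standing in for the paper's actual procedure and analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_src, n_tgt = 20, 3, 4, 30  # ambient dim, representation dim, #sources, #target samples

# Synthetic ground truth: the target parameter theta* = B* alpha* has
# low-dimensional representation structure, as assumed in the paper.
B_true = rng.standard_normal((d, k))
alpha_true = rng.standard_normal(k)
theta_true = B_true @ alpha_true

# Few target samples with small label noise.
X = rng.standard_normal((n_tgt, d))
y = X @ theta_true + 0.01 * rng.standard_normal(n_tgt)

# Hypothetical pre-trained source representations: perturbed copies of B*
# stand in for representations learned on related source domains.
sources = [B_true + 0.1 * rng.standard_normal((d, k)) for _ in range(n_src)]

# Phase (i): adapt the source representations to the target data. Here we fit
# a k-dimensional head alpha_i for each source by least squares on the target
# samples and keep the best-fitting (representation, head) pair.
def fit_head(B):
    Z = X @ B  # target inputs projected through the source representation
    alpha, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return alpha

candidates = [(B, fit_head(B)) for B in sources]
B_hat, alpha_hat = min(candidates,
                       key=lambda p: np.linalg.norm(X @ p[0] @ p[1] - y))

# Phase (ii): fine-tune the full over-parameterized parameter theta = B alpha,
# initialized at the phase-(i) solution, by gradient descent on the target loss.
theta = B_hat @ alpha_hat
lr = 0.1
for _ in range(1000):
    theta -= lr * X.T @ (X @ theta - y) / n_tgt

err_init = np.linalg.norm(theta_true - B_hat @ alpha_hat)   # after phase (i)
err_final = np.linalg.norm(theta_true - theta)              # after phase (ii)
```

In this toy setting, fine-tuning from the representation-based initialization drives the parameter error below that of the phase-(i) solution, mirroring (informally) the excess-risk improvement the paper establishes for the two-phase method.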

Related articles: Most relevant | Search more
arXiv:1206.4678 [cs.LG] (Published 2012-06-18)
Linear Regression with Limited Observation
arXiv:1912.03036 [cs.LG] (Published 2019-12-06)
Improved PAC-Bayesian Bounds for Linear Regression
arXiv:1904.08544 [cs.LG] (Published 2019-04-18)
Memory-Sample Tradeoffs for Linear Regression with Small Error