arXiv:1209.0738 [cs.LG]

Sparse coding for multitask and transfer learning

Andreas Maurer, Massimiliano Pontil, Bernardino Romera-Paredes

Published 2012-09-04, updated 2014-06-16 (Version 3)

We investigate the use of sparse coding and dictionary learning in the context of multitask and transfer learning. The central assumption of our learning method is that the task parameters are well approximated by sparse linear combinations of the atoms of a dictionary in a high- or infinite-dimensional space. This assumption, together with the large quantity of data available in the multitask and transfer learning settings, allows a principled choice of the dictionary. We provide bounds on the generalization error of this approach in both settings. Numerical experiments on one synthetic and two real datasets show the advantage of our method over single-task learning, a previous method based on an orthogonal and dense representation of the tasks, and a related method that learns task groupings.
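The sketch below illustrates the central assumption in code: task weight vectors are (approximately) sparse combinations of shared dictionary atoms, so the dictionary can be learned from many tasks and reused for a new task. This is not the authors' implementation; the two-stage fitting (per-task ridge estimates followed by off-the-shelf dictionary learning), the synthetic data, and all parameter values are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's algorithm) of sparse coding
# for multitask/transfer learning with numpy + scikit-learn.
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
d, n_tasks, n_per_task, n_atoms = 30, 40, 50, 8

# Synthetic multitask data: true task parameters are sparse combinations of a
# hidden dictionary, matching the paper's central assumption.
true_D = rng.standard_normal((n_atoms, d))
tasks = []
for _ in range(n_tasks):
    code = np.zeros(n_atoms)
    code[rng.choice(n_atoms, 2, replace=False)] = rng.standard_normal(2)
    w = code @ true_D
    X = rng.standard_normal((n_per_task, d))
    y = X @ w + 0.1 * rng.standard_normal(n_per_task)
    tasks.append((X, y))

# Stage 1: independent ridge estimates of each task's parameter vector.
W = np.vstack([Ridge(alpha=1.0).fit(X, y).coef_ for X, y in tasks])  # (T, d)

# Stage 2: sparse coding of the stacked task parameters, W ~ A @ D.
dl = DictionaryLearning(n_components=n_atoms, alpha=0.1,
                        transform_algorithm="lasso_lars", random_state=0)
A = dl.fit_transform(W)          # sparse codes, one row per task
D = dl.components_               # learned dictionary atoms, shape (n_atoms, d)

# Transfer to a new task: only a sparse code over the fixed dictionary is
# estimated, so few samples suffice.
code_new = np.zeros(n_atoms)
code_new[:2] = [1.0, -0.5]
w_new = code_new @ true_D
X_new = rng.standard_normal((15, d))
y_new = X_new @ w_new + 0.1 * rng.standard_normal(15)
a_new = Lasso(alpha=0.05).fit(X_new @ D.T, y_new).coef_
w_hat = a_new @ D
print("transfer estimation error:", np.linalg.norm(w_hat - w_new))
```

As a rough usage note: with the dictionary fixed, the new-task fit is a Lasso in only `n_atoms` coefficients rather than `d` raw features, which is where the sample-efficiency gain in the transfer setting comes from under the sparse-combination assumption.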

Comments: International Conference on Machine Learning 2013
Categories: cs.LG, stat.ML
Subjects: 68Q32, 68T05, 97C30, 46N30
Related articles:
arXiv:2302.12715 [cs.LG] (Published 2023-02-24)
Hiding Data Helps: On the Benefits of Masking for Sparse Coding
arXiv:1904.04334 [cs.LG] (Published 2019-04-08)
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
arXiv:1902.08835 [cs.LG] (Published 2019-02-23)
Transfer Learning for Non-Intrusive Load Monitoring