arXiv:2209.15224 [stat.ML]

Unsupervised Multi-task and Transfer Learning on Gaussian Mixture Models

Ye Tian, Haolei Weng, Yang Feng

Published 2022-09-30 (Version 1)

Unsupervised learning has been widely used in many real-world applications. One of the simplest and most important unsupervised learning models is the Gaussian mixture model (GMM). In this work, we study the multi-task learning problem on GMMs, which aims to leverage potentially similar GMM parameter structures among tasks to obtain improved learning performance compared to single-task learning. We propose a multi-task GMM learning procedure based on the EM algorithm that not only effectively utilizes unknown similarities between related tasks but is also robust to a fraction of outlier tasks from arbitrary sources. The proposed procedure is shown to achieve the minimax optimal rate of convergence for both the parameter estimation error and the excess mis-clustering error, over a wide range of regimes. Moreover, we generalize our approach to tackle the problem of transfer learning for GMMs, where similar theoretical results are derived. Finally, we demonstrate the effectiveness of our methods through simulations and a real data analysis. To the best of our knowledge, this is the first work to study multi-task and transfer learning on GMMs with theoretical guarantees.
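Since the abstract does not spell out the procedure, below is a minimal, illustrative sketch of the single-task EM baseline that a multi-task method of this kind would build on: EM for a two-component spherical GMM with equal mixing weights. Everything here (the function name em_gmm_2comp, the unit-variance assumption, the initialization scheme) is an assumption made for illustration; it is not the authors' multi-task algorithm, which additionally shares parameter structure across tasks and guards against outlier tasks.

```python
import numpy as np

def em_gmm_2comp(X, n_iters=50, seed=0):
    """EM for a two-component spherical GMM with equal weights.

    Single-task baseline for illustration only; the paper's multi-task
    procedure (structure sharing across tasks, robustness to outlier
    tasks) is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialize the two component means at randomly chosen data points.
    mu = X[rng.choice(n, size=2, replace=False)]
    for _ in range(n_iters):
        # E-step: posterior probability of component 1 under
        # unit-variance Gaussians and equal mixing weights.
        d0 = ((X - mu[0]) ** 2).sum(axis=1)
        d1 = ((X - mu[1]) ** 2).sum(axis=1)
        gamma = 1.0 / (1.0 + np.exp(0.5 * (d1 - d0)))  # P(z=1 | x)
        # M-step: responsibility-weighted means (small epsilon guards
        # against an empty component).
        w1 = gamma.sum() + 1e-12
        w0 = (1.0 - gamma).sum() + 1e-12
        mu = np.vstack([
            (1.0 - gamma) @ X / w0,
            gamma @ X / w1,
        ])
    return mu, gamma

# Usage: recover two well-separated means from synthetic data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(+2, 1, (200, 2)), rng.normal(-2, 1, (200, 2))])
mu_hat, resp = em_gmm_2comp(X)
print(mu_hat)  # estimated means, approximately (+2,+2) and (-2,-2)
```

In the multi-task setting described above, one would run coupled updates of this kind across tasks, encouraging agreement among the task-specific parameters; the exact coupling and the safeguard against outlier tasks are specified in the paper, not in this sketch.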

Related articles:
arXiv:2408.16189 [stat.ML] (Published 2024-08-29)
A More Unified Theory of Transfer Learning
arXiv:1911.04285 [stat.ML] (Published 2019-11-08)
Maximum a-Posteriori Estimation for the Gaussian Mixture Model via Mixed Integer Nonlinear Programming
arXiv:1508.06388 [stat.ML] (Published 2015-08-26)
Gaussian Mixture Models with Component Means Constrained in Pre-selected Subspaces