arXiv:1902.10644 [cs.LG]

Provable Guarantees for Gradient-Based Meta-Learning

Mikhail Khodak, Maria-Florina Balcan, Ameet Talwalkar

Published 2019-02-27, Version 1

We study the problem of meta-learning through the lens of online convex optimization, developing a meta-algorithm that bridges the gap between popular gradient-based meta-learning and classical regularization-based multi-task transfer methods. Our method is the first to simultaneously provide good sample-efficiency guarantees in the convex setting, with generalization bounds that improve with task similarity, while also being computationally scalable to modern deep learning architectures and the many-task setting. Despite its simplicity, the algorithm matches, up to a constant factor, a lower bound on the performance of any such parameter-transfer method under natural task-similarity assumptions. We use experiments in both convex and deep learning settings to verify and demonstrate the applicability of our theory.
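The meta-algorithm described in the abstract is a parameter-transfer method: each task is learned by gradient-based updates from a shared initialization, and the initialization itself is then adjusted across tasks. The following is a minimal sketch of that general pattern, assuming a Reptile-style meta-update; the synthetic least-squares tasks, step sizes, and iteration counts are hypothetical placeholders for illustration, not the paper's exact construction or guarantees.

```python
# Illustrative parameter-transfer loop (not the paper's exact algorithm):
# solve each task by gradient descent from a shared meta-initialization,
# then move the initialization toward the task's final iterate.
import numpy as np

rng = np.random.default_rng(0)
d = 5                      # parameter dimension
phi = np.zeros(d)          # meta-initialization (the transferred parameter)

def sample_task(center, spread=0.1):
    """Hypothetical convex task: least-squares with optimum near `center`."""
    w_star = center + spread * rng.normal(size=d)
    X = rng.normal(size=(50, d))
    y = X @ w_star
    return X, y

center = rng.normal(size=d)    # similar tasks: optima cluster around this point

for task in range(200):
    X, y = sample_task(center)
    w = phi.copy()
    # Within-task learning: a few gradient-descent steps on the task loss.
    for _ in range(10):
        grad = X.T @ (X @ w - y) / len(y)
        w -= 0.05 * grad
    # Meta-update: nudge the initialization toward the task's final iterate.
    phi += 0.1 * (w - phi)

print("distance from meta-initialization to task-optima center:",
      np.linalg.norm(phi - center))
```

Under the paper's premise, the more similar the tasks (smaller `spread` above), the closer such a learned initialization can sit to every task's optimum, which is what drives generalization bounds that improve with task similarity.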

Related articles:
arXiv:2309.04339 [cs.LG] (Published 2023-09-08)
Online Submodular Maximization via Online Convex Optimization
arXiv:2102.09305 [cs.LG] (Published 2021-02-18)
Boosting for Online Convex Optimization
arXiv:2009.14436 [cs.LG] (Published 2020-09-30)
Online Convex Optimization in Changing Environments and its Application to Resource Allocation