arXiv Analytics

arXiv:2006.10350 [cs.LG]

Kernel methods through the roof: handling billions of points efficiently

Giacomo Meanti, Luigi Carratino, Lorenzo Rosasco, Alessandro Rudi

Published 2020-06-18 (Version 1)

Kernel methods provide an elegant and principled approach to nonparametric learning, but so far they could hardly be used on large-scale problems, since naïve implementations scale poorly with data size. Recent advances have shown the benefits of a number of algorithmic ideas, for example combining optimization, numerical linear algebra and random projections. Here, we push these efforts further to develop and test a solver that takes full advantage of GPU hardware. Towards this end, we designed a preconditioned gradient solver for kernel methods that exploits both GPU acceleration and parallelization across multiple GPUs, implementing out-of-core variants of common linear algebra operations to guarantee optimal hardware utilization. Further, we optimize the numerical precision of different operations and maximize the efficiency of matrix-vector multiplications. As a result, we can experimentally show dramatic speedups on datasets with billions of points, while still guaranteeing state-of-the-art performance. Additionally, we make our software available as an easy-to-use library.
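The abstract describes an iterative, preconditioned gradient solver for kernel methods. As a toy illustration of the underlying idea (not the paper's actual multi-GPU, out-of-core implementation), the sketch below solves kernel ridge regression with a Gaussian kernel via plain conjugate gradient on the regularized kernel system. The function names `gaussian_kernel` and `krr_cg` are hypothetical, chosen for this example; the paper's solver additionally uses Nyström-style approximation and preconditioning, which are omitted here for brevity.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel, using ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b.
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-sq / (2.0 * sigma**2))

def krr_cg(X, y, lam=1e-3, sigma=1.0, tol=1e-8, max_iter=200):
    """Solve (K + n*lam*I) alpha = y by conjugate gradient.

    The system matrix is symmetric positive definite, so CG applies;
    in the paper's setting the matrix-vector products K @ p would be
    the expensive, GPU-accelerated step.
    """
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    A = K + n * lam * np.eye(n)
    alpha = np.zeros(n)
    r = y - A @ alpha          # initial residual
    p = r.copy()               # initial search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        step = rs / (p @ Ap)
        alpha += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return alpha

# Usage on synthetic data: fit y = sin(x_0) on 50 random points.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sin(X[:, 0])
alpha = krr_cg(X, y, lam=1e-3)
```

At scale, the dominant cost per iteration is the kernel matrix-vector product, which is exactly the operation the paper distributes across GPUs with out-of-core blocking and tuned numerical precision.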

Related articles:
arXiv:2406.06101 [cs.LG] (Published 2024-06-10)
On the Consistency of Kernel Methods with Dependent Observations
arXiv:2007.14706 [cs.LG] (Published 2020-07-29)
Kernel Methods and their derivatives: Concept and perspectives for the Earth system sciences
arXiv:1902.10176 [cs.LG] (Published 2019-02-26)
A Memoization Framework for Scaling Submodular Optimization to Large Scale Problems