arXiv:2310.13821 [cs.LG]

Geometric Learning with Positively Decomposable Kernels

Nathael Da Costa, Cyrus Mostajeran, Juan-Pablo Ortega, Salem Said

Published 2023-10-20 (Version 1)

Kernel methods are powerful tools in machine learning. Classical kernel methods are based on positive-definite kernels, which map data spaces into reproducing kernel Hilbert spaces (RKHS). For non-Euclidean data spaces, positive-definite kernels are difficult to come by. In such cases, we propose the use of reproducing kernel Krein space (RKKS)-based methods, which require only kernels that admit a positive decomposition. We show that one does not need access to this decomposition in order to learn in an RKKS. We then investigate the conditions under which a kernel is positively decomposable. We show that invariant kernels admit a positive decomposition on homogeneous spaces under tractable regularity assumptions. This makes them much easier to construct than positive-definite kernels, providing a route to learning with kernels on non-Euclidean data. By the same token, this provides theoretical foundations for RKKS-based methods in general.
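
For context, in the standard RKKS formulation (standard Krein-space kernel theory, not spelled out in this abstract), a kernel $k$ on a set $X$ is positively decomposable when it splits as a difference of two positive-definite kernels,

$$k = k_+ - k_-, \qquad k_+,\, k_- \ \text{positive definite},$$

and the associated reproducing kernel Krein space is the direct difference of their RKHSs,

$$\mathcal{K} = \mathcal{H}_{k_+} \ominus \mathcal{H}_{k_-}, \qquad \langle f, g \rangle_{\mathcal{K}} = \langle f_+, g_+ \rangle_{\mathcal{H}_{k_+}} - \langle f_-, g_- \rangle_{\mathcal{H}_{k_-}}.$$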
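
To illustrate why learning in an RKKS can proceed without access to the decomposition, here is a minimal sketch (hypothetical code, not from the paper): for the squared loss, a stabilizer of the regularized risk has the familiar kernel-ridge form $\alpha = (K + \lambda I)^{-1} y$, which uses only the Gram matrix of $k$ itself, even when $K$ is indefinite (assuming $-\lambda$ is not an eigenvalue of $K$). The geodesic Gaussian kernel on the sphere below is a standard example of an invariant kernel on a homogeneous space that is generally not positive definite.

```python
import numpy as np

def geodesic_gaussian_kernel(X, Y, scale=1.0):
    """Gaussian of the geodesic (arc-length) distance on the unit sphere.
    Invariant under rotations, but generally NOT positive definite."""
    cos_angles = np.clip(X @ Y.T, -1.0, 1.0)
    d = np.arccos(cos_angles)               # geodesic distance on the sphere
    return np.exp(-((d / scale) ** 2))

def rkks_fit(X, y, lam=1e-2, scale=1.0):
    """Stabilized kernel regression: solve (K + lam*I) alpha = y using only
    evaluations of k, never its positive decomposition k = k_+ - k_-."""
    K = geodesic_gaussian_kernel(X, X, scale)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def rkks_predict(X_train, alpha, X_test, scale=1.0):
    return geodesic_gaussian_kernel(X_test, X_train, scale) @ alpha

# Toy usage on S^2: regress a smooth function of position on the sphere.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # project points onto S^2
y = X[:, 0] ** 2 - X[:, 1]                      # target values
alpha = rkks_fit(X, y)
print(rkks_predict(X, alpha, X[:5]))            # predictions at 5 training points
```

The only difference from ordinary kernel ridge regression is the interpretation: with an indefinite $K$, the coefficient vector is a stationary point (stabilizer) of the regularized risk rather than its minimizer.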

Related articles:
arXiv:2406.06101 [cs.LG] (Published 2024-06-10)
On the Consistency of Kernel Methods with Dependent Observations
arXiv:2007.14706 [cs.LG] (Published 2020-07-29)
Kernel Methods and their derivatives: Concept and perspectives for the Earth system sciences
arXiv:2006.10350 [cs.LG] (Published 2020-06-18)
Kernel methods through the roof: handling billions of points efficiently