arXiv Analytics

arXiv:1802.04350 [cs.LG]

On the Sample Complexity of Learning from a Sequence of Experiments

Longyun Guo, Jean Honorio, John Morgan

Published 2018-02-12 (Version 1)

We analyze the sample complexity of a new problem: learning from a sequence of experiments. In this problem, the learner must choose a hypothesis that performs well with respect to an infinite sequence of experiments and their associated data distributions. In practice, the learner can only perform m experiments, with a total of N samples drawn from those data distributions. Using a Rademacher complexity approach, we show that the gap between the training and generalization error is O(√(m/N)). We also provide examples for linear prediction, two-layer neural networks, and kernel methods.
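As a rough formal sketch of the stated bound (the notation here is ours, not necessarily the paper's): write R(h) for the error of a hypothesis h averaged over the sequence of experiment distributions, and R̂(h) for its empirical error on the N collected samples. The main result can then be read as

\[
\sup_{h \in \mathcal{H}} \bigl| R(h) - \widehat{R}(h) \bigr| \;\le\; \mathcal{O}\!\left( \sqrt{m/N} \right),
\]

so for a fixed sample budget N, the bound degrades as the number of experiments m grows, and tightens as more samples are collected per experiment.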

Related articles:
arXiv:1402.4844 [cs.LG] (Published 2014-02-19, updated 2016-05-26)
Subspace Learning with Partial Information
arXiv:1207.1366 [cs.LG] (Published 2012-07-04)
Learning Factor Graphs in Polynomial Time & Sample Complexity
arXiv:2406.06101 [cs.LG] (Published 2024-06-10)
On the Consistency of Kernel Methods with Dependent Observations