arXiv Analytics

arXiv:1402.4844 [cs.LG]

Subspace Learning with Partial Information

Alon Gonen, Dan Rosenbaum, Yonina Eldar, Shai Shalev-Shwartz

Published 2014-02-19, updated 2016-05-26 (Version 2)

The goal of subspace learning is to find a $k$-dimensional subspace of $\mathbb{R}^d$ such that the expected squared distance between instance vectors and the subspace is as small as possible. In this paper we study subspace learning in a partial information setting, in which the learner can only observe $r \le d$ attributes from each instance vector. We propose several efficient algorithms for this task, and analyze their sample complexity.
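The abstract does not spell out the paper's algorithms, but a common baseline for this setting (offered here only as an illustrative sketch, not the authors' method) is: from each instance, observe a uniformly random subset of $r$ coordinates, build an unbiased estimate of the second-moment matrix by rescaling the observed entries (a single coordinate survives with probability $r/d$, a pair with probability $r(r-1)/(d(d-1))$), and take the top-$k$ eigenvectors of the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, n = 20, 3, 8, 5000  # ambient dim, subspace dim, observed attrs, samples

# Synthetic data: points near a ground-truth k-dimensional subspace.
U, _ = np.linalg.qr(rng.standard_normal((d, k)))
X = rng.standard_normal((n, k)) @ U.T + 0.05 * rng.standard_normal((n, d))

# Partial information: keep only r randomly chosen attributes per instance.
mask = np.zeros((n, d), dtype=bool)
for i in range(n):
    mask[i, rng.choice(d, size=r, replace=False)] = True
X_obs = np.where(mask, X, 0.0)

# Unbiased second-moment estimate: rescale by the observation probabilities.
p1 = r / d                          # P(coordinate i observed)
p2 = r * (r - 1) / (d * (d - 1))    # P(coordinates i and j both observed)
S = X_obs.T @ X_obs / n
S_hat = S / p2
np.fill_diagonal(S_hat, np.diag(S) / p1)

# The top-k eigenvectors span the estimated subspace.
eigvals, eigvecs = np.linalg.eigh(S_hat)
V = eigvecs[:, -k:]

# Subspace error via projection matrices (spectral norm of the difference).
err = np.linalg.norm(U @ U.T - V @ V.T, ord=2)
```

With enough samples the rescaled estimate concentrates around the true second-moment matrix, so the recovered subspace `V` closely matches `U`; the rescaling inflates the estimator's variance by roughly $1/p_2$, which is one reason the sample complexity of partial-information algorithms degrades as $r$ shrinks.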

Related articles:
arXiv:2002.10021 [cs.LG] (Published 2020-02-24)
How Transferable are the Representations Learned by Deep Q Agents?
arXiv:1206.6461 [cs.LG] (Published 2012-06-27)
On the Sample Complexity of Reinforcement Learning with a Generative Model
arXiv:1906.00264 [cs.LG] (Published 2019-06-01)
Graph-based Discriminators: Sample Complexity and Expressiveness