arXiv:1703.00893 [cs.LG]

Being Robust (in High Dimensions) Can Be Practical

Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Ankur Moitra, Alistair Stewart

Published 2017-03-02 (Version 1)

Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility.
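The abstract refers to polynomial-time algorithms that tolerate a constant fraction of corruptions independent of the dimension. A minimal sketch of the underlying idea, iterative spectral filtering for robust mean estimation, is below. This is an illustrative simplification, not the paper's exact algorithm: the stopping threshold, the quantile-based removal rule, and the function name `filtered_mean` are all assumptions chosen for clarity.

```python
import numpy as np

def filtered_mean(X, eps, max_iter=50):
    """Illustrative spectral filter for robust mean estimation.

    Repeatedly finds the direction of largest empirical variance; if that
    variance is much larger than expected for clean data, removes the
    points with the largest squared projections and repeats.
    (Simplified sketch; thresholds are heuristic assumptions.)
    """
    X = np.asarray(X, dtype=float)
    for _ in range(max_iter):
        mu = X.mean(axis=0)
        centered = X - mu
        cov = centered.T @ centered / len(X)
        # Eigenvalues in ascending order; v is the top eigenvector.
        w, V = np.linalg.eigh(cov)
        v = V[:, -1]
        # If the top variance is close to 1 (identity-covariance model),
        # no direction is badly corrupted and we can stop.
        if w[-1] <= 1 + 3 * eps:
            return mu
        # Score each point by its squared projection on the top direction
        # and drop the most extreme eps-fraction (a crude filter step).
        scores = (centered @ v) ** 2
        keep = scores < np.quantile(scores, 1 - eps)
        if keep.all():
            return mu
        X = X[keep]
    return X.mean(axis=0)
```

On data drawn from an identity-covariance Gaussian with an eps-fraction of planted outliers, this filter typically removes the outliers in a few passes, whereas the naive sample mean is pulled toward them by an amount growing with the dimension.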

Related articles:
arXiv:1604.05307 [cs.LG] (Published 2016-04-18)
Learning Sparse Additive Models with Interactions in High Dimensions
arXiv:1706.06549 [cs.LG] (Published 2017-06-20)
Inference in Deep Networks in High Dimensions
arXiv:1605.00609 [cs.LG] (Published 2016-05-02)
Algorithms for Learning Sparse Additive Models with Interactions in High Dimensions