arXiv:2310.20285 [cs.LG]

Accelerating Generalized Linear Models by Trading off Computation for Uncertainty

Lukas Tatzel, Jonathan Wenger, Frank Schneider, Philipp Hennig

Published 2023-10-31 (Version 1)

Bayesian Generalized Linear Models (GLMs) define a flexible probabilistic framework for modeling categorical, ordinal, and continuous data, and are widely used in practice. However, exact inference in GLMs is prohibitively expensive for large datasets, so approximations are required. The resulting approximation error adversely impacts the reliability of the model and is not accounted for in the uncertainty of the prediction. In this work, we introduce a family of iterative methods that explicitly model this error. They are uniquely suited to modern parallel hardware, efficiently recycle computations, and compress information to reduce both the time and memory requirements for GLMs. As we demonstrate on a realistically large classification problem, our method significantly accelerates training by explicitly trading off reduced computation for increased uncertainty.
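The core trade-off can be illustrated on a toy Gaussian model: if the linear system behind the posterior is solved with only k iterations of conjugate gradients, the unexplored subspace can be surfaced as extra "computational" variance that shrinks as k grows. This is a minimal illustrative sketch under that simplification, not the authors' algorithm; the helper name `cg_explored`, the RBF-kernel setup, and the specific constants are assumptions.

```python
import numpy as np

def cg_explored(A, b, k):
    """Run k conjugate-gradient iterations on A x = b; return the
    approximate solution and the matrix of search directions."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    dirs = []
    for _ in range(k):
        if r @ r < 1e-12:            # converged early
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        dirs.append(p.copy())
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return x, np.column_stack(dirs)

rng = np.random.default_rng(0)
n = 50
X = rng.uniform(-3, 3, size=(n, 1))
K = np.exp(-0.5 * (X - X.T) ** 2)    # RBF Gram matrix (toy prior covariance)
A = K + 0.1 * np.eye(n)              # K + sigma^2 I
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

# Exact posterior variance at the training inputs.
exact_var = np.diag(K - K @ np.linalg.solve(A, K))

# Truncated-solver posterior: the directions CG has not yet explored
# contribute additional variance, which decreases monotonically in k.
post_var = {}
for k in (5, 15, 30):
    _, P = cg_explored(A, y, k)
    C_k = P @ np.linalg.solve(P.T @ A @ P, P.T)   # low-rank proxy for A^{-1}
    post_var[k] = np.diag(K - K @ C_k @ K)
```

Because the Krylov subspaces are nested, `post_var[k]` never drops below the exact posterior variance and tightens toward it as k increases, which is the sense in which reduced computation is exchanged for honestly increased uncertainty.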

Comments: Main text: 10 pages, 6 figures; Supplements: 13 pages, 2 figures
Categories: cs.LG, stat.ML
Related articles:
arXiv:1810.06530 [cs.LG] (Published 2018-10-15)
Successor Uncertainties: exploration and uncertainty in temporal difference learning
arXiv:2006.10562 [cs.LG] (Published 2020-06-18)
Uncertainty in Gradient Boosting via Ensembles
arXiv:2010.03753 [cs.LG] (Published 2020-10-08)
Uncertainty in Neural Processes