arXiv:2302.04019 [cs.LG]

Fortuna: A Library for Uncertainty Quantification in Deep Learning

Gianluca Detommaso, Alberto Gasparin, Michele Donini, Matthias Seeger, Andrew Gordon Wilson, Cedric Archambeau

Published 2023-02-08 (Version 1)

We present Fortuna, an open-source library for uncertainty quantification in deep learning. Fortuna supports a range of calibration techniques, such as conformal prediction, which can be applied to any trained neural network to generate reliable uncertainty estimates, as well as scalable Bayesian inference methods, which can be applied to Flax-based deep neural networks trained from scratch for improved uncertainty quantification and accuracy. By providing a coherent framework for advanced uncertainty quantification methods, Fortuna simplifies benchmarking and helps practitioners build robust AI systems.
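
To make the conformal-prediction idea concrete, here is a minimal sketch of split conformal prediction for classification, written with plain NumPy rather than Fortuna's own API. It assumes only that softmax probabilities from some already-trained classifier are available on a held-out calibration set and on test inputs; the function name and arguments are hypothetical and chosen for illustration.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification (illustrative sketch).

    cal_probs:  (n_cal, n_classes) softmax outputs on a held-out calibration set
    cal_labels: (n_cal,) integer labels for the calibration set
    test_probs: (n_test, n_classes) softmax outputs on test inputs
    alpha:      target miscoverage rate (e.g. 0.1 for ~90% coverage)
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(scores, q_level, method="higher")
    # Prediction set: every class whose probability is high enough that its
    # score (1 - prob) does not exceed the calibrated threshold.
    return test_probs >= 1.0 - q_hat  # boolean mask, shape (n_test, n_classes)
```

Under the usual exchangeability assumption, the returned set contains the true label with probability at least 1 - alpha on average; the set size then serves as a simple, model-agnostic uncertainty signal of the kind the library exposes.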

Related articles:
arXiv:1801.07648 [cs.LG] (Published 2018-01-23)
Clustering with Deep Learning: Taxonomy and New Methods
arXiv:1802.01528 [cs.LG] (Published 2018-02-05)
The Matrix Calculus You Need For Deep Learning
arXiv:1705.03341 [cs.LG] (Published 2017-05-09)
Stable Architectures for Deep Neural Networks