arXiv:2403.10642 [cs.LG]

Using Uncertainty Quantification to Characterize and Improve Out-of-Domain Learning for PDEs

S. Chandra Mouli, Danielle C. Maddix, Shima Alizadeh, Gaurav Gupta, Andrew Stuart, Michael W. Mahoney, Yuyang Wang

Published 2024-03-15 (Version 1)

Existing work in scientific machine learning (SciML) has shown that data-driven learning of solution operators can provide a fast approximate alternative to classical numerical partial differential equation (PDE) solvers. Among these approaches, Neural Operators (NOs) have emerged as particularly promising. We observe that several uncertainty quantification (UQ) methods for NOs fail for test inputs that are even moderately out-of-domain (OOD), even when the model approximates the solution well for in-domain tasks. To address this limitation, we show that ensembling several NOs can identify high-error regions and provide good uncertainty estimates that are well-correlated with prediction errors. Based on this observation, we propose a cost-effective alternative, DiverseNO, that mimics the properties of the ensemble by encouraging diverse predictions from its multiple heads in the last feed-forward layer. We then introduce Operator-ProbConserv, a method that uses these well-calibrated UQ estimates within the ProbConserv framework to update the model. Our empirical results show that Operator-ProbConserv enhances OOD model performance for a variety of challenging PDE problems and satisfies physical constraints such as conservation laws.
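To make the two ingredients concrete, here is a minimal PyTorch sketch under simplifying assumptions: a multi-head output layer whose head disagreement serves as an ensemble-style uncertainty estimate (with an explicit diversity penalty), followed by a Gaussian conditioning step that enforces a linear conservation constraint, in the spirit of the ProbConserv update. The names (DiverseHeadModel, diversity_penalty, conservation_update), the penalty form, and the toy data are illustrative assumptions, not the authors' DiverseNO or Operator-ProbConserv implementation.

```python
import torch
import torch.nn as nn

class DiverseHeadModel(nn.Module):
    """Shared trunk with several small prediction heads; the head mean is
    the forecast and the head spread is a cheap ensemble-style UQ proxy.
    (Illustrative sketch, not the paper's DiverseNO architecture.)"""

    def __init__(self, in_dim, hidden_dim, out_dim, n_heads=4):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.GELU())
        self.heads = nn.ModuleList(
            nn.Linear(hidden_dim, out_dim) for _ in range(n_heads)
        )

    def forward(self, x):
        z = self.trunk(x)
        preds = torch.stack([h(z) for h in self.heads])  # (n_heads, batch, out_dim)
        return preds.mean(0), preds.var(0), preds        # mean, UQ proxy, raw heads

def diversity_penalty(preds):
    """Reward disagreement between heads via negative mean pairwise
    squared distance (weighting and form chosen for illustration)."""
    diffs = preds.unsqueeze(0) - preds.unsqueeze(1)
    return -diffs.pow(2).mean()

def conservation_update(mu, cov, G, b):
    """Condition a Gaussian prediction N(mu, cov) on a linear constraint
    G @ u = b, a ProbConserv-style update; returns updated mean/covariance."""
    K = cov @ G.T @ torch.linalg.inv(G @ cov @ G.T)
    return mu + K @ (b - G @ mu), cov - K @ G @ cov

# Toy usage: learn a random linear operator mapping 8 parameters to a
# solution sampled on a 16-point grid.
torch.manual_seed(0)
W_true = torch.randn(8, 16)
x = torch.randn(256, 8)
y = x @ W_true
model = DiverseHeadModel(8, 64, 16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    mean, _, preds = model(x)
    loss = nn.functional.mse_loss(mean, y) + 0.01 * diversity_penalty(preds)
    opt.zero_grad(); loss.backward(); opt.step()

# Enforce a toy "mass" constraint: the grid average must equal the known value.
with torch.no_grad():
    mean, var, _ = model(x[:1])
    mu, cov = mean[0], torch.diag(var[0] + 1e-6)  # diagonal covariance from head variance
    G = torch.full((1, 16), 1.0 / 16)             # stand-in quadrature rule
    b = (y[0] @ G[0]).unsqueeze(0)                # known conserved quantity
    mu_c, cov_c = conservation_update(mu, cov, G, b)
```

The head variance plays the role of the ensemble spread at a fraction of the cost, since only the last layer is replicated; the constrained update is the standard conditioning of a Gaussian on a linear observation, which is how ProbConserv-style frameworks enforce a conservation law exactly while shifting the prediction most where the uncertainty is largest.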

Related articles:
arXiv:2306.14430 [cs.LG] (Published 2023-06-26)
Enhanced multi-fidelity modelling for digital twin and uncertainty quantification
arXiv:2406.10775 [cs.LG] (Published 2024-06-16)
A Rate-Distortion View of Uncertainty Quantification
arXiv:2402.12264 [cs.LG] (Published 2024-02-19)
Uncertainty quantification in fine-tuned LLMs using LoRA ensembles