arXiv Analytics

arXiv:2311.05795 [cs.LG]

Improvements on Uncertainty Quantification for Node Classification via Distance-Based Regularization

Russell Alan Hart, Linlin Yu, Yifei Lou, Feng Chen

Published 2023-11-10 (Version 1)

Deep neural networks have achieved significant success in recent decades, but they are often poorly calibrated and produce unreliable predictions. A large body of literature relies on uncertainty quantification to evaluate the reliability of a learning model, which is particularly important in applications such as out-of-distribution (OOD) detection and misclassification detection. We are interested in uncertainty quantification for interdependent node-level classification. We base our analysis on graph posterior networks (GPNs), which optimize a loss function based on the uncertainty cross-entropy (UCE). We describe the theoretical limitations of the widely used UCE loss. To alleviate the identified drawbacks, we propose a distance-based regularization that encourages clustered OOD nodes to remain clustered in the latent space. We conduct extensive comparison experiments on eight standard datasets and demonstrate that the proposed regularization outperforms the state of the art in both OOD detection and misclassification detection.
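
For readers unfamiliar with the objective the abstract refers to, the following is a minimal PyTorch sketch of the two ingredients. Under a Dirichlet output Dir(alpha), as in GPNs, the expected cross-entropy has the standard closed form digamma(alpha_0) - digamma(alpha_y), with alpha_0 the sum of the Dirichlet parameters. The distance-based penalty shown here, which pulls the latent embeddings of linked nodes together so that clustered nodes stay clustered in the latent space, is one plausible illustrative form, not the paper's exact regularizer; the names uce_loss, distance_regularizer, and lambda_reg are hypothetical.

    import torch

    def uce_loss(alpha: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        """Uncertainty cross-entropy for Dirichlet parameters alpha of shape (N, C).

        E_{p ~ Dir(alpha)}[-log p_y] = digamma(alpha_0) - digamma(alpha_y).
        """
        alpha0 = alpha.sum(dim=-1)                              # Dirichlet strength per node
        alpha_y = alpha.gather(-1, y.unsqueeze(-1)).squeeze(-1) # parameter of the true class
        return (torch.digamma(alpha0) - torch.digamma(alpha_y)).mean()

    def distance_regularizer(z: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        """Illustrative distance-based penalty (an assumption, not the paper's exact form):
        penalize squared latent distance between linked nodes so that nodes that are
        clustered in the graph remain clustered in the latent space."""
        src, dst = edge_index                                   # (2, E) COO edge list
        return (z[src] - z[dst]).pow(2).sum(dim=-1).mean()

    # Total objective as a weighted sum; lambda_reg is a hypothetical trade-off weight,
    # and z would be the GNN encoder's latent node embeddings:
    # loss = uce_loss(alpha, y) + lambda_reg * distance_regularizer(z, edge_index)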

Related articles:
arXiv:1812.09632 [cs.LG] (Published 2018-12-23)
Uncertainty Quantification for Kernel Methods
arXiv:1912.09592 [cs.LG] (Published 2019-12-19)
Graph Convolutional Networks: analysis, improvements and results
arXiv:1608.09014 [cs.LG] (Published 2016-08-31)
A Tutorial on Online Supervised Learning with Applications to Node Classification in Social Networks