arXiv:1803.07710 [cs.LG]

Inference in Probabilistic Graphical Models by Graph Neural Networks

KiJung Yoon, Renjie Liao, Yuwen Xiong, Lisa Zhang, Ethan Fetaya, Raquel Urtasun, Richard Zemel, Xaq Pitkow

Published 2018-03-21 (Version 1)

A useful computation when acting in a complex environment is to infer the marginal probabilities or most probable states of task-relevant variables. Probabilistic graphical models can efficiently represent the structure of such complex data, but performing these inferences is generally difficult. Message-passing algorithms, such as belief propagation, are a natural way to disseminate evidence amongst correlated variables while exploiting the graph structure, but these algorithms can struggle when the conditional dependency graphs contain loops. Here we use Graph Neural Networks (GNNs) to learn a message-passing algorithm that solves these inference tasks. We first show that the architecture of GNNs is well-matched to inference tasks. We then demonstrate the efficacy of this inference approach by training GNNs on an ensemble of graphical models and showing that they substantially outperform belief propagation on loopy graphs. Our message-passing algorithms generalize beyond the training set to larger graphs and to graphs with different structure.
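To make the baseline concrete: the belief propagation the abstract compares against is the classical sum-product algorithm, which repeatedly passes normalized messages along the edges of a pairwise Markov random field and reads off approximate marginals ("beliefs") from the incoming messages at each node. The sketch below is not the paper's code; it is a minimal synchronous loopy BP for binary variables, with hypothetical data-structure choices (dicts of numpy arrays keyed by node and directed edge).

```python
import numpy as np

def loopy_bp(unary, pairwise, edges, iters=50):
    """Sum-product loopy belief propagation on a pairwise binary MRF.

    unary:    {node: (2,) array}          node potentials
    pairwise: {(i, j): (2, 2) array}      edge potentials, indexed [x_i, x_j]
    edges:    list of undirected (i, j) pairs
    Returns {node: (2,) array} of approximate marginals (beliefs).
    """
    # Uniform initial messages on both directions of every edge.
    msgs = {}
    nbrs = {}
    for (i, j) in edges:
        msgs[(i, j)] = np.ones(2) / 2
        msgs[(j, i)] = np.ones(2) / 2
        nbrs.setdefault(i, []).append(j)
        nbrs.setdefault(j, []).append(i)

    for _ in range(iters):
        new = {}
        for (i, j) in msgs:
            # Product of i's unary potential and all incoming messages except j's.
            prod = unary[i].copy()
            for k in nbrs[i]:
                if k != j:
                    prod *= msgs[(k, i)]
            # Orient the edge potential so rows index x_i, columns x_j.
            psi = pairwise[(i, j)] if (i, j) in pairwise else pairwise[(j, i)].T
            m = psi.T @ prod          # marginalize out x_i
            new[(i, j)] = m / m.sum()  # normalize for numerical stability
        msgs = new                     # synchronous (flooding) schedule

    beliefs = {}
    for i in nbrs:
        b = unary[i].copy()
        for k in nbrs[i]:
            b *= msgs[(k, i)]
        beliefs[i] = b / b.sum()
    return beliefs
```

On a tree this converges to the exact marginals; on loopy graphs the same fixed-point updates are only approximate and can oscillate or settle on poor beliefs, which is precisely the regime where the paper's learned GNN messages are shown to help.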

Related articles:
arXiv:1206.5291 [cs.LG] (Published 2012-06-20)
Improved Dynamic Schedules for Belief Propagation
arXiv:2111.00734 [cs.LG] (Published 2021-11-01, updated 2022-02-24)
Robust Deep Learning from Crowds with Belief Propagation
arXiv:1912.10206 [cs.LG] (Published 2019-12-21)
How Robust Are Graph Neural Networks to Structural Noise?