arXiv:2003.06560 [cs.LG]

Evaluating Logical Generalization in Graph Neural Networks

Koustuv Sinha, Shagun Sodhani, Joelle Pineau, William L. Hamilton

Published 2020-03-14, Version 1

Recent research has highlighted the role of relational inductive biases in building learning agents that can generalize and reason in a compositional manner. However, while relational learning algorithms such as graph neural networks (GNNs) show promise, we do not understand how effectively these approaches can adapt to new tasks. In this work, we study the task of logical generalization using GNNs by designing a benchmark suite grounded in first-order logic. Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics, represented as knowledge graphs. GraphLog consists of relation prediction tasks on 57 distinct logical domains. We use GraphLog to evaluate GNNs in three different setups: single-task supervised learning, multi-task pretraining, and continual learning. Unlike previous benchmarks, our approach allows us to precisely control the logical relationship between the different tasks. We find that the ability of models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training, and our results highlight new challenges for the design of GNN models. We publicly release the dataset and the code used to generate and interact with it at https://www.cs.mcgill.ca/~ksinha4/graphlog.
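To make the task concrete, the following is a minimal sketch (not the authors' code or the GraphLog API) of the kind of relation prediction problem the abstract describes: edges of a knowledge graph are labeled with relations, a synthetic logic supplies composition rules of the form r3(x, z) ← r1(x, y) ∧ r2(y, z), and the model must infer the relation holding between a query pair. All rule and relation names below are invented for illustration; a GNN would learn such compositions from data rather than search explicitly.

```python
from collections import deque

# Hypothetical composition rules: (r1, r2) -> r3 means an r1-edge
# followed by an r2-edge implies an r3-edge between the endpoints.
RULES = {
    ("parent", "parent"): "grandparent",
    ("parent", "sibling"): "aunt_or_uncle",
}

def predict_relation(edges, source, target):
    """Breadth-first search over relation-labeled paths from `source`,
    composing relations along the way with RULES; returns the inferred
    relation between source and target, or None if no rule chain applies."""
    # Frontier holds (node, relation composed so far along the path).
    frontier = deque((v, r) for u, r, v in edges if u == source)
    seen = set()
    while frontier:
        node, rel = frontier.popleft()
        if node == target:
            return rel
        if (node, rel) in seen:
            continue
        seen.add((node, rel))
        for u, r, v in edges:
            if u != node:
                continue
            composed = RULES.get((rel, r))
            if composed is not None:
                frontier.append((v, composed))
    return None

# A toy knowledge graph as (head, relation, tail) triples.
edges = [
    ("alice", "parent", "bob"),
    ("bob", "parent", "carol"),
    ("bob", "sibling", "dave"),
]
print(predict_relation(edges, "alice", "carol"))  # grandparent
print(predict_relation(edges, "alice", "dave"))   # aunt_or_uncle
```

Each GraphLog domain fixes a different rule set of this general shape, which is what lets the benchmark precisely control the logical overlap between tasks: two domains are similar exactly to the extent that their rule sets share compositions.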
