arXiv Analytics

arXiv:2301.10956 [cs.LG]

Graph Neural Networks can Recover the Hidden Features Solely from the Graph Structure

Ryoma Sato

Published 2023-01-26Version 1

Graph Neural Networks (GNNs) are popular models for graph learning problems and show strong empirical performance in many practical tasks. However, their theoretical properties are not fully understood. In this paper, we investigate whether GNNs can exploit the graph structure from the perspective of their expressive power. In our analysis, we consider graph generation processes controlled by hidden node features, which contain all information about the graph structure. A typical example of this framework is a kNN graph constructed from the hidden features. In our main results, we show that GNNs can recover the hidden node features from the input graph alone, even when all node features, including the hidden features themselves and any indirect hints, are unavailable. GNNs can further use the recovered node features for downstream tasks. These results show that GNNs can fully exploit the graph structure by themselves, and in effect, GNNs can use both the hidden and explicit node features for downstream tasks. In the experiments, we confirm the validity of our results by showing that GNNs can accurately recover the hidden features with a GNN architecture built on our theoretical analysis.
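The generation process described in the abstract can be illustrated with a minimal sketch: hidden node features are drawn at random, a kNN graph is built from them, and only the resulting adjacency structure is handed to the learner. All names, sizes, and the choice of Euclidean kNN below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: hidden node features control the graph generation.
n, d, k = 20, 2, 3
hidden = rng.normal(size=(n, d))  # hidden features, unavailable to the GNN

# Build a kNN graph from the hidden features: each node connects to its
# k nearest neighbors in hidden-feature space (Euclidean distance).
dist = np.linalg.norm(hidden[:, None, :] - hidden[None, :, :], axis=-1)
np.fill_diagonal(dist, np.inf)  # exclude self-loops
neighbors = np.argsort(dist, axis=1)[:, :k]

# Symmetrized adjacency matrix: the ONLY input the GNN receives.
adj = np.zeros((n, n), dtype=int)
for i, nbrs in enumerate(neighbors):
    adj[i, nbrs] = 1
adj = np.maximum(adj, adj.T)
```

Under this framework, the paper's claim is that a (suitably designed) GNN taking only `adj` as input can recover `hidden` up to the ambiguities inherent in the generation process; the sketch above only sets up that scenario, not the recovery itself.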

Related articles: Most relevant | Search more
arXiv:2403.07185 [cs.LG] (Published 2024-03-11)
Uncertainty in Graph Neural Networks: A Survey
arXiv:2409.05100 [cs.LG] (Published 2024-09-08)
MaxCutPool: differentiable feature-aware Maxcut for pooling in graph neural networks
arXiv:2310.10362 [cs.LG] (Published 2023-10-16, updated 2024-05-29)
Self-Pro: Self-Prompt and Tuning Framework for Graph Neural Networks