arXiv:1907.02204 [cs.LG]

Improving Attention Mechanism in Graph Neural Networks via Cardinality Preservation

Shuo Zhang, Lei Xie

Published 2019-07-04 (Version 1)

Graph Neural Networks (GNNs) are powerful tools for learning representations of graph-structured data. Most GNNs use the message-passing scheme, in which the embedding of a node is iteratively updated by aggregating information from its neighbors. To better capture the influence of individual neighbors, the attention mechanism has become a popular way to assign trainable weights to a node's neighbors during aggregation. However, although attention-based GNNs have achieved state-of-the-art results on several tasks, a clear understanding of their discriminative capacity is missing. In this work, we present a theoretical analysis of the representational properties of GNNs that adopt the attention mechanism as an aggregator. Our analysis identifies all cases in which such GNNs always fail to distinguish distinct structures. The reason is that existing attention-based aggregators fail to preserve the cardinality of the multiset of node feature vectors during aggregation, which limits their discriminative power. To improve the performance of attention-based GNNs, we propose two cardinality-preserving modifications that can be applied to any kind of attention mechanism. We evaluate them within our GNN framework on benchmark datasets for graph classification. The results validate the improvements and show that our models achieve competitive performance.
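To see why cardinality matters, note that softmax attention produces weights that sum to one, so the aggregation is a convex combination of neighbor features: a multiset of n copies of the same feature vector x aggregates to x regardless of n. The sketch below illustrates this failure mode and one simple cardinality-preserving fix (scaling the attended sum by the neighborhood size). The scaling variant is an illustrative assumption for this example, not necessarily the exact modification proposed in the paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_aggregate(neighbors, scores):
    """Standard attention aggregation: weights sum to 1, so the
    result is a convex combination of the neighbor feature vectors."""
    w = softmax(scores)        # shape (n,)
    return w @ neighbors       # shape (d,)

def cardinality_preserved_aggregate(neighbors, scores):
    """Illustrative fix (an assumption for this sketch): scale the
    attended sum by |N(v)| so multisets of identical vectors with
    different sizes map to different embeddings."""
    return len(neighbors) * attention_aggregate(neighbors, scores)

# Two distinct multisets built from the same feature vector x:
x = np.array([1.0, 2.0])
small = np.stack([x, x])           # multiset {x, x}
large = np.stack([x, x, x, x])     # multiset {x, x, x, x}

# Plain attention cannot tell them apart...
print(attention_aggregate(small, np.zeros(2)))   # [1. 2.]
print(attention_aggregate(large, np.zeros(4)))   # [1. 2.]

# ...while the cardinality-preserved variant can.
print(cardinality_preserved_aggregate(small, np.zeros(2)))  # [2. 4.]
print(cardinality_preserved_aggregate(large, np.zeros(4)))  # [4. 8.]
```

Because the attention weights are normalized, no choice of scores can separate the two multisets above; injecting the cardinality (here by a simple scale factor) restores that information.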

Related articles:
arXiv:1908.07110 [cs.LG] (Published 2019-08-19)
Graph Neural Networks with High-order Feature Interactions
arXiv:2003.04078 [cs.LG] (Published 2020-03-09)
A Survey on The Expressive Power of Graph Neural Networks
arXiv:2003.01795 [cs.LG] (Published 2020-03-03)
Graphon Pooling in Graph Neural Networks