Academic Paper
Adaptive Propagation Graph Convolutional Network
Document Type
Periodical
Source
IEEE Transactions on Neural Networks and Learning Systems, 32(10):4755-4760, Oct. 2021
ISSN
2162-237X
2162-2388
Abstract
Graph convolutional networks (GCNs) are a family of neural network models that perform inference on graph data by interleaving vertexwise operations with message-passing exchanges across nodes. Concerning the latter, two key questions arise: 1) how to design a differentiable exchange protocol (e.g., the one-hop Laplacian smoothing in the original GCN) and 2) how to characterize the tradeoff in complexity with respect to the local updates. In this brief, we show that state-of-the-art results can be achieved by adapting the number of communication steps independently at every node. In particular, we endow each node with a halting unit (inspired by Graves’ adaptive computation time [1]) that, after every exchange, decides whether to continue communicating or not. We show that the proposed adaptive propagation GCN (AP-GCN) matches or surpasses the best models proposed to date on a number of benchmarks, while requiring only a small overhead in additional parameters. We also investigate a regularization term that enforces an explicit tradeoff between communication and accuracy. The code for the AP-GCN experiments is released as an open-source library.
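The per-node halting mechanism described above can be sketched as follows. This is a minimal NumPy illustration of the general idea, not the authors' implementation: the function name `ap_gcn_propagate`, the halting parameters `w_halt`/`b_halt`, and the use of a plain sigmoid halting unit are all assumptions made for the example, following the adaptive-computation-time scheme the abstract cites.

```python
import numpy as np

def ap_gcn_propagate(H, A_norm, w_halt, b_halt, max_steps=10, eps=0.01):
    """Hypothetical sketch of per-node adaptive propagation.

    H:      (n, d) node features after the vertexwise update.
    A_norm: (n, n) normalized adjacency used for one-hop smoothing.
    w_halt, b_halt: assumed parameters of the per-node halting unit.
    Returns the ACT-style weighted output and the number of
    message-passing steps each node performed.
    """
    n, _ = H.shape
    Z = H.copy()                     # current propagated state
    out = np.zeros_like(H)           # weighted combination of states
    cum_halt = np.zeros(n)           # accumulated halting probability
    active = np.ones(n, dtype=bool)  # nodes still communicating
    steps = np.zeros(n, dtype=int)

    for _ in range(max_steps):
        Z = A_norm @ Z               # one message-passing exchange
        p = 1.0 / (1.0 + np.exp(-(Z @ w_halt + b_halt)))  # halting prob.
        # A node halts once its cumulative probability crosses 1 - eps;
        # its remaining budget (the ACT "remainder") weights this step.
        will_halt = active & (cum_halt + p >= 1.0 - eps)
        weight = np.where(will_halt, 1.0 - cum_halt,
                          np.where(active, p, 0.0))
        out += weight[:, None] * Z
        cum_halt += np.where(active, p, 0.0)
        steps += active
        active &= ~will_halt
        if not active.any():
            break

    # Nodes that never halted spend their leftover budget on the last state.
    out += np.where(active, 1.0 - cum_halt, 0.0)[:, None] * Z
    return out, steps
```

Because the step weights of each node sum to one, the output is a convex combination of that node's propagated states, so gradients flow through the halting decisions as in adaptive computation time; a regularizer on `steps` (as the abstract mentions) would then trade communication for accuracy.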