Uri Alon and Eran Yahav. On the Bottleneck of Graph Neural Networks and its Practical Implications. In
9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
Abstract: Since the proposal of the graph neural network (GNN) by Gori et al. (2005) and Scarselli et al. (2008), one of the major problems in training GNNs was their struggle to propagate information between distant nodes in the graph. We propose a new explanation for this problem: GNNs are susceptible to a bottleneck when aggregating messages across a long path. This bottleneck causes the over-squashing of exponentially growing information into fixed-size vectors. As a result, GNNs fail to propagate messages originating from distant nodes and perform poorly when the prediction task depends on long-range interaction. In this paper, we highlight the inherent problem of over-squashing in GNNs: we demonstrate that the bottleneck hinders popular GNNs from fitting long-range signals in the training data; we further show that GNNs that absorb incoming edges equally, such as GCN and GIN, are more susceptible to over-squashing than GAT and GGNN; finally, we show that prior work, which extensively tuned GNN models of long-range problems, suffers from over-squashing, and that breaking the bottleneck improves their state-of-the-art results without any tuning or additional weights. Our code is available at https://github.com/tech-srl/bottleneck/ .
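To make the "exponentially growing information" concrete: on a tree with branching factor b, the number of nodes whose messages must be squashed into a single fixed-size vector after K message-passing layers grows on the order of b^K. A minimal sketch of that growth (illustrative only, not part of the paper's code):

```python
# Number of nodes within distance k of a target node on a tree with
# branching factor b -- all of their information must be compressed
# into that node's single fixed-size vector after k GNN layers.
def receptive_field_size(b: int, k: int) -> int:
    return sum(b ** i for i in range(k + 1))

for k in range(1, 9):
    print(k, receptive_field_size(2, k))  # 3, 7, 15, ... roughly doubles per layer
```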
This is the official implementation of the paper: On the Bottleneck of Graph Neural Networks and its Practical Implications (ICLR'2021), which introduces the over-squashing problem of GNNs, by Uri Alon and Eran Yahav. See also the [video], [poster], and
[slides]. This repository is divided into three sub-projects. This project was designed to be useful for experimenting with new GNN architectures and new solutions for the over-squashing problem. Feel free to open an issue with any questions.

The Tree-NeighborsMatch problem

Requirements

Dependencies

This project is based on PyTorch 1.4.0 and the PyTorch Geometric library.
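A plausible installation sketch (assumed commands, not taken from the repository; PyTorch Geometric additionally needs companion wheels such as torch-scatter and torch-sparse matching your PyTorch/CUDA build — see its installation docs):

```bash
pip install torch==1.4.0
pip install torch-geometric  # plus matching torch-scatter / torch-sparse wheels
```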
Verify that importing the dependencies goes without errors:
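For example, a minimal check (the repository's own verification snippet is not reproduced on this page):

```python
import torch
import torch_geometric

# Both imports should succeed; the project targets PyTorch 1.4.0.
print(torch.__version__)
print(torch_geometric.__version__)
```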
Hardware

Training on large trees (depth=8) might require ~60GB of RAM and about 10GB of GPU memory. GPU memory usage can be reduced by using a smaller batch size together with gradient accumulation: instead of running one large batch per optimizer step, the same effective batch is accumulated over several smaller batches, which takes less GPU memory. For example:
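The exact commands were elided from this page; the flags below are assumptions for illustration, not the repository's verified interface (check main.py's --help output for the real ones):

```bash
# Hypothetical large-batch run (higher GPU memory):
python main.py --depth 8 --batch_size 1024

# Hypothetical equivalent using gradient accumulation (lower GPU memory);
# --accum_grad is an assumed flag name:
python main.py --depth 8 --batch_size 256 --accum_grad 4
```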
Reproducing Experiments

To run a single experiment from the paper, run main.py and see its available flags. For example, to train a GGNN with depth=4, run a command of the following form:
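A hedged sketch; the flag names are assumptions for illustration:

```bash
# --type and --depth are assumed flag names; check main.py's --help output.
python main.py --type GGNN --depth 4
```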
To train a GNN across all depths, run one of the repository's sweep commands, or an equivalent loop as sketched below:
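A minimal sketch of such a sweep, reusing the assumed single-experiment interface from above:

```bash
# Train the same GNN type at every tree depth from 2 to 8.
for depth in 2 3 4 5 6 7 8; do
  python main.py --type GGNN --depth "$depth"
done
```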
Results

The results of running the above scripts are reported in Section 4.1 of the paper.
Experiment with other GNN types

To experiment with other GNN types:
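The repository's concrete steps are not reproduced on this page; as a rough sketch of what adding a new GNN type with PyTorch Geometric typically looks like (the class and wiring below are assumptions, not the project's actual extension point):

```python
import torch
from torch_geometric.nn import GATConv  # swap in any message-passing layer


# Hypothetical wrapper: adapt a PyTorch Geometric layer to whatever
# interface the training code expects.
class MyGNNLayer(torch.nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.conv = GATConv(in_dim, out_dim)

    def forward(self, x, edge_index):
        return self.conv(x, edge_index)
```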
Citation

If you want to cite this work, please use this bibtex entry:

```
@article{Alon2021OnTB,
  title={On the Bottleneck of Graph Neural Networks and its Practical Implications},
  author={Uri Alon and Eran Yahav},
  journal={ArXiv},
  year={2021},
  volume={abs/2006.05205}
}
```
What is a bottleneck in a neural network? A bottleneck in a neural network is a layer with fewer neurons than the layer below or above it. Such a layer encourages the network to compress feature representations (of the features salient for the target variable) to best fit in the available space.
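A minimal sketch of this idea (illustrative only, unrelated to the paper's code): an autoencoder whose middle layer is narrower than its input, forcing a compressed representation.

```python
import torch.nn as nn

# 64-dimensional inputs squeezed through an 8-unit bottleneck and back.
bottleneck_autoencoder = nn.Sequential(
    nn.Linear(64, 8),   # the bottleneck: fewer units than its neighbors
    nn.ReLU(),
    nn.Linear(8, 64),
)
```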
What is the over-smoothing problem? As the number of layers increases, the performance of GNNs does not improve, because node representations become increasingly similar and hard to distinguish. This effect is called over-smoothing; it is distinct from the over-squashing problem studied in this paper.