ON THE DISTRIBUTION ALIGNMENT OF PROPAGATION IN GRAPH NEURAL NETWORKS

Graph neural networks (GNNs) have been widely adopted for modeling graph-structured data. Most existing GNN studies focus on designing different strategies for propagating information over the graph structure. Through systematic investigation, we observe that the propagation step in GNNs matters, but the resulting performance improvement is insensitive to the location where propagation is applied.

Our empirical examination further shows that the performance improvement brought by propagation comes mostly from a phenomenon of distribution alignment: propagation over the graph aligns the underlying feature distributions of the training and test sets.
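To make the alignment effect concrete, here is a small illustrative sketch (not the paper's code): on a toy path graph whose "train" and "test" nodes start with deliberately different features, one step of symmetric-normalized propagation shrinks the distance between the two groups' feature means.

```python
import numpy as np

# Toy path graph 0-1-2-3; nodes 0,1 act as "train", nodes 2,3 as "test".
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                       # add self-loops
d_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)
P = d_inv_sqrt @ A_hat @ d_inv_sqrt         # symmetric-normalized propagation

X = np.array([[1.0], [1.0], [-1.0], [-1.0]])  # train/test features differ
train, test = [0, 1], [2, 3]

def mean_gap(H):
    """Distance between the train and test feature means."""
    return float(np.linalg.norm(H[train].mean(0) - H[test].mean(0)))

gap_before = mean_gap(X)       # 2.0: distributions far apart
gap_after = mean_gap(P @ X)    # smaller: propagation pulled them together
print(gap_before, gap_after)
```

The mean gap is of course a crude proxy for distribution distance, but it shows the mechanism: mixing each node's features with its neighbors' pulls the two groups toward each other.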

These findings are instrumental for understanding GNNs, e.g., why decoupled GNNs can work as well as standard GNNs.
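For readers unfamiliar with the distinction, here is a minimal sketch (illustrative, not the paper's code) of the two designs: a standard GNN interleaves propagation with feature transformations, while a decoupled GNN applies all propagation up front and then runs a graph-free MLP on the propagated features.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 0, 0, 1],      # 5-node ring graph
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(5)
d_inv_sqrt = np.diag(A_hat.sum(1) ** -0.5)
P = d_inv_sqrt @ A_hat @ d_inv_sqrt         # propagation matrix

X = rng.standard_normal((5, 4))             # node features
W1 = rng.standard_normal((4, 8))            # layer-1 weights (random, untrained)
W2 = rng.standard_normal((8, 2))            # layer-2 weights

def standard_gnn(X):
    H = np.maximum(P @ X @ W1, 0.0)         # layer 1: propagate, transform, ReLU
    return P @ H @ W2                       # layer 2: propagate, transform

def decoupled_gnn(X):
    Z = P @ (P @ X)                         # all propagation precomputed once
    return np.maximum(Z @ W1, 0.0) @ W2     # graph-free MLP on propagated features

out_std, out_dec = standard_gnn(X), decoupled_gnn(X)
print(out_std.shape, out_dec.shape)         # both (5, 2)
```

If propagation's benefit comes mainly from aligning train and test distributions, it is plausible that where propagation sits in the pipeline matters little, which is consistent with decoupled designs matching standard ones.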

Source code: https://github.com/THUDM/DistAlign-GNNs.
