Paper Title
Boosting Graph Neural Networks by Injecting Pooling in Message Passing
Paper Authors
Paper Abstract
There has been tremendous success in the field of graph neural networks (GNNs) as a result of the development of the message-passing (MP) layer, which updates the representation of a node by combining it with its neighbors, thereby handling variable-size and unordered graphs. Despite the fruitful progress of MP GNNs, their performance can suffer from over-smoothing, where node representations become too similar or even indistinguishable from one another. Furthermore, it has been reported that intrinsic graph structures are smoothed out as the number of GNN layers increases. Inspired by the edge-preserving bilateral filters used in image processing, we propose a new, adaptable, and powerful MP framework to prevent over-smoothing. Our bilateral-MP estimates a pairwise modular gradient by utilizing the class information of nodes, and further preserves the global graph structure by using this gradient when the aggregation function is applied. Our proposed scheme can be generalized to all ordinary MP GNNs. Experiments on five medium-size benchmark datasets using four state-of-the-art MP GNNs indicate that the bilateral-MP improves performance by alleviating over-smoothing. By inspecting quantitative measurements, we additionally validate the effectiveness of the proposed mechanism in preventing the over-smoothing issue.
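The bilateral-filter analogy in the abstract can be illustrated with a minimal sketch: as in an edge-preserving bilateral filter, each neighbor's contribution to the aggregation is downweighted when a "range" signal (here, per-node class scores) differs strongly across the edge. The kernel below is a hypothetical stand-in for the paper's pairwise modular gradient, and the function name, `sigma` parameter, and dense-adjacency representation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bilateral_aggregate(X, A, C, sigma=1.0):
    """Bilateral-style neighborhood aggregation (illustrative sketch).

    X: (n, d) node features; A: (n, n) 0/1 adjacency matrix;
    C: (n, k) per-node class scores acting as the "range" signal.
    Neighbors whose class scores differ strongly from node i's
    receive lower weight, analogous to an edge-preserving
    bilateral filter; this is NOT the paper's exact formulation.
    """
    H = np.zeros_like(X)
    for i in range(X.shape[0]):
        nbrs = np.nonzero(A[i])[0]
        if nbrs.size == 0:
            H[i] = X[i]  # isolated node: keep its own features
            continue
        # Gaussian range kernel on class-score differences
        # (hypothetical stand-in for the pairwise modular gradient).
        diff = C[nbrs] - C[i]
        w = np.exp(-np.sum(diff ** 2, axis=1) / (2.0 * sigma ** 2))
        # Weighted mean of neighbor features.
        H[i] = (w[:, None] * X[nbrs]).sum(axis=0) / w.sum()
    return H
```

With uniform class scores the kernel reduces to a plain neighborhood mean; with dissimilar scores, cross-class neighbors are suppressed, which is the edge-preserving behavior the abstract appeals to.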