Paper Title


Graph Random Neural Network for Semi-Supervised Learning on Graphs

Authors

Feng, Wenzheng, Zhang, Jie, Dong, Yuxiao, Han, Yu, Luan, Huanbo, Xu, Qian, Yang, Qiang, Kharlamov, Evgeny, Tang, Jie

Abstract


We study the problem of semi-supervised learning on graphs, for which graph neural networks (GNNs) have been extensively explored. However, most existing GNNs inherently suffer from the limitations of over-smoothing, non-robustness, and weak-generalization when labeled nodes are scarce. In this paper, we propose a simple yet effective framework -- GRAPH RANDOM NEURAL NETWORKS (GRAND) -- to address these issues. In GRAND, we first design a random propagation strategy to perform graph data augmentation. Then we leverage consistency regularization to optimize the prediction consistency of unlabeled nodes across different data augmentations. Extensive experiments on graph benchmark datasets suggest that GRAND significantly outperforms state-of-the-art GNN baselines on semi-supervised node classification. Finally, we show that GRAND mitigates the issues of over-smoothing and non-robustness, exhibiting better generalization behavior than existing GNNs. The source code of GRAND is publicly available at https://github.com/Grand20/grand.
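The two ingredients named in the abstract, random propagation for graph data augmentation and consistency regularization over multiple augmentations, can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation: the function names, the squared-error consistency loss, and the sharpening temperature are assumptions for illustration.

```python
import numpy as np

def random_propagation(adj_norm, features, drop_rate, num_hops, rng):
    """Sketch of GRAND-style augmentation: drop whole node feature rows
    at random (scaling survivors by 1/(1 - drop_rate)), then average the
    perturbed features over propagation orders 0..num_hops using the
    normalized adjacency matrix `adj_norm`."""
    n = features.shape[0]
    keep = rng.random(n) >= drop_rate            # per-node keep mask
    x = features * keep[:, None] / (1.0 - drop_rate)
    out = x.copy()
    for _ in range(num_hops):
        x = adj_norm @ x                          # one more propagation hop
        out += x
    return out / (num_hops + 1)                   # mix propagation orders

def consistency_loss(prob_list, temperature=0.5):
    """Sketch of consistency regularization: sharpen the average
    prediction across S augmentations, then pull each augmentation's
    prediction toward that target with a squared-error penalty
    (the sharpening temperature here is an assumed hyperparameter)."""
    avg = np.mean(prob_list, axis=0)
    sharpened = avg ** (1.0 / temperature)
    sharpened /= sharpened.sum(axis=1, keepdims=True)
    return float(np.mean([(p - sharpened) ** 2 for p in prob_list]))
```

In training, one would draw S augmented feature matrices with `random_propagation`, feed each through the classifier, and add `consistency_loss` over the unlabeled nodes to the supervised loss on the labeled nodes.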
