Title
The Surprising Power of Graph Neural Networks with Random Node Initialization
Authors
Abstract
Graph neural networks (GNNs) are effective models for representation learning on relational data. However, standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism heuristic. In order to break this expressiveness barrier, GNNs have been enhanced with random node initialization (RNI), where the idea is to train and run the models with randomized initial node features. In this work, we analyze the expressive power of GNNs with RNI, and prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties. This universality result holds even with partially randomized initial node features, and preserves the invariance properties of GNNs in expectation. We then empirically analyze the effect of RNI on GNNs, based on carefully constructed datasets. Our empirical findings support the superior performance of GNNs with RNI over standard GNNs.
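The core idea of RNI described above is simple to implement: before message passing, each node's feature vector is extended with freshly sampled random values, drawn anew at both training and inference time. The sketch below illustrates this with NumPy; the function name, the choice of Gaussian noise, and the number of random dimensions are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def random_node_init(X, num_random=4, rng=None):
    """Sketch of random node initialization (RNI).

    X          : (n_nodes, d) array of original node features.
    num_random : how many random dimensions to append per node
                 (appending to only some dimensions corresponds to
                 the partially randomized variant mentioned above).
    Returns a (n_nodes, d + num_random) array. Fresh noise is drawn
    on every call, so augmented features differ across runs.
    """
    rng = np.random.default_rng() if rng is None else rng
    R = rng.standard_normal((X.shape[0], num_random))  # per-node noise
    return np.concatenate([X, R], axis=1)

# Example: 5 nodes, 3 original features each -> 7 features after RNI
X = np.ones((5, 3))
X_aug = random_node_init(X, num_random=4)
print(X_aug.shape)  # (5, 7)
```

The augmented matrix would then be fed to an otherwise unmodified GNN; because the random features let the network distinguish otherwise symmetric nodes, identical Weisfeiler-Leman colorings no longer force identical embeddings.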