Paper Title
Representation Power of Graph Neural Networks: Improved Expressivity via Algebraic Analysis
Paper Authors
Paper Abstract
Despite the remarkable success of Graph Neural Networks (GNNs), the common belief is that their representation power is limited and that they are at most as expressive as the Weisfeiler-Lehman (WL) algorithm. In this paper, we argue the opposite and show that standard GNNs with anonymous inputs produce more discriminative representations than the WL algorithm. Our novel analysis employs linear algebraic tools and characterizes the representation power of GNNs with respect to the eigenvalue decomposition of the graph operators. We prove that GNNs can generate distinctive outputs from white, uninformative inputs for at least all graphs with distinct eigenvalues. We also show that simple convolutional architectures with white inputs produce equivariant features that count the closed paths in the graph and are provably more expressive than the WL representations. A thorough experimental analysis on graph isomorphism and graph classification datasets corroborates our theoretical results and demonstrates the effectiveness of the proposed approach.
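To make the closed-path claim concrete, below is a minimal NumPy sketch of the underlying idea, not the paper's implementation; the example graph, the number of white channels, and the single linear layer are illustrative assumptions. It shows that trace(A^k) counts the closed walks of length k in a graph with adjacency matrix A (a quantity determined by the eigenvalues alone), and that the second moment of a graph convolution driven by white Gaussian inputs recovers powers of the adjacency matrix.

import numpy as np

# Adjacency matrix of a small undirected graph (a 4-cycle here;
# the choice of graph is an assumption for demonstration only).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# The number of closed walks of length k equals trace(A^k),
# i.e., the sum of the k-th powers of the eigenvalues of A.
for k in range(1, 5):
    print(k, np.trace(np.linalg.matrix_power(A, k)))

# A toy "white input" graph convolution: node features are i.i.d.
# standard Gaussian (anonymous, uninformative), and the layer is a
# single multiplication by A. Since E[X X^T] = d * I, the second
# moment of the output satisfies E[H H^T] = d * A^2, so the
# empirical second moment estimates A^2 up to the factor d.
rng = np.random.default_rng(0)
n, d = A.shape[0], 1000          # d white channels (illustrative)
X = rng.standard_normal((n, d))  # white, node-anonymous inputs
H = A @ X                        # one linear graph-convolution layer
print(np.round(H @ H.T / d, 1))  # approximately A @ A

The point of the sketch is that although the white inputs carry no node identities, simple statistics of the filtered outputs are functions of the graph spectrum, which is the spectral, closed-path information that the abstract contrasts with WL representations.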