Paper Title

Oversquashing in GNNs through the lens of information contraction and graph expansion

Paper Authors

Banerjee, Pradeep Kr., Karhadkar, Kedar, Wang, Yu Guang, Alon, Uri, Montúfar, Guido

Paper Abstract

As has been observed in recent work, the quality of signal propagation in message-passing graph neural networks (GNNs) strongly influences their expressivity. In particular, for prediction tasks relying on long-range interactions, recursive aggregation of node features can lead to an undesired phenomenon called "oversquashing". We present a framework for analyzing oversquashing based on information contraction. Our analysis is guided by a model of reliable computation due to von Neumann that lends new insight into oversquashing as signal quenching in noisy computation graphs. Building on this, we propose a graph rewiring algorithm aimed at alleviating oversquashing. Our algorithm employs a random local edge flip primitive motivated by an expander graph construction. We compare the spectral expansion properties of our algorithm with those of an existing curvature-based non-local rewiring strategy. Synthetic experiments show that while our algorithm in general has a slower rate of expansion, it is overall computationally cheaper, preserves node degrees exactly, and never disconnects the graph.
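The abstract names two concrete ingredients: an information-contraction view of oversquashing and a degree-preserving random local edge flip. For the first, a standard strong data processing inequality (offered here only as intuition, not as the paper's derivation) already shows the quenching effect: a signal relayed through a chain of k binary symmetric channels, each flipping bits with probability δ, has its mutual information with the input contracted by a factor of at most (1 - 2δ)^(2k), i.e. it decays exponentially in depth, and deep recursive aggregation behaves analogously. For the second, the sketch below shows what such a flip primitive can look like on an undirected simple graph. The function name `random_local_edge_flip`, the choice of pivot edge, and the rejection conditions are our own illustration under these assumptions, not the paper's exact algorithm.

```python
import random

import networkx as nx


def random_local_edge_flip(G, rng=random):
    """One degree-preserving local flip on an undirected simple graph G.

    Sketch: pick a random pivot edge (u, v), a neighbor w of u and a
    neighbor z of v, then rewire (u, w), (v, z) -> (u, z), (v, w).
    The pivot edge (u, v) is kept, so w and z remain connected to u and
    v through it, and every vertex keeps its degree.
    """
    u, v = rng.choice(list(G.edges()))
    if rng.random() < 0.5:  # the stored edge tuple is arbitrary; unorder it
        u, v = v, u
    w_choices = [w for w in G.neighbors(u) if w != v]
    z_choices = [z for z in G.neighbors(v) if z != u]
    if not w_choices or not z_choices:
        return False  # no valid flip at this pivot edge
    w, z = rng.choice(w_choices), rng.choice(z_choices)
    # Reject configurations that would create self-loops or parallel edges.
    if w == z or G.has_edge(u, z) or G.has_edge(v, w):
        return False
    G.remove_edges_from([(u, w), (v, z)])
    G.add_edges_from([(u, z), (v, w)])
    return True
```

A quick synthetic check of the two invariants claimed in the abstract, starting from a 3-regular circular ladder (a poor expander) and using the spectral gap of the normalized Laplacian as a proxy for expansion:

```python
import numpy as np

G = nx.circular_ladder_graph(25)           # 50 nodes, 3-regular, small gap
degrees_before = dict(G.degree())
rng = random.Random(0)
accepted = sum(random_local_edge_flip(G, rng) for _ in range(2000))

assert dict(G.degree()) == degrees_before  # node degrees preserved exactly
assert nx.is_connected(G)                  # flips never disconnect the graph
gap = np.sort(nx.normalized_laplacian_spectrum(G))[1]
print(f"{accepted} flips accepted, spectral gap ~ {gap:.3f}")
```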
