Paper Title

Generalization Analysis of Message Passing Neural Networks on Large Random Graphs

Paper Authors

Sohir Maskey, Ron Levie, Yunseok Lee, Gitta Kutyniok

Abstract

Message passing neural networks (MPNNs) have seen a steep rise in popularity since their introduction as generalizations of convolutional neural networks to graph-structured data, and are now considered state-of-the-art tools for solving a large variety of graph-focused problems. We study the generalization error of MPNNs in graph classification and regression. We assume that graphs of different classes are sampled from different random graph models. We show that, when an MPNN is trained on a dataset sampled from such a distribution, the generalization gap increases with the complexity of the MPNN and decreases not only with the number of training samples but also with the average number of nodes in the graphs. This shows how an MPNN with high complexity can generalize from a small dataset of graphs, as long as the graphs are large. The generalization bound is derived from a uniform convergence result which shows that any MPNN applied to a graph approximates the MPNN applied to the geometric model that the graph discretizes.
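
As a rough sketch of the claimed scaling (our notation, not the paper's theorem statement): write m for the number of training graphs, N for the average number of nodes per graph, and C(Θ) for a complexity measure of the trained MPNN. The abstract describes a generalization gap that behaves qualitatively like

\[ \text{gap} \;\lesssim\; C(\Theta)\left(\frac{1}{\sqrt{m}} + \frac{1}{\sqrt{N}}\right), \]

so even a high-complexity MPNN can generalize from few training graphs when N is large; the exact rates, constants, and assumptions are given in the paper.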
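
For readers unfamiliar with the setup, below is a minimal NumPy sketch of the objects the abstract refers to: a graph sampled from a simple random geometric model (points in the unit square, connected within a radius) and one mean-aggregation message-passing layer with a graph-level readout. This is an illustration under our own assumptions, not the architecture or random graph model analyzed in the paper; all function names and the radius 0.3 are hypothetical.

# Illustrative sketch only -- not the paper's construction.
import numpy as np

rng = np.random.default_rng(0)

def sample_random_geometric_graph(n, radius=0.3):
    """Sample n points uniformly in the unit square and connect pairs
    closer than `radius` (a simple random geometric graph model)."""
    x = rng.uniform(size=(n, 2))                       # latent node positions
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    a = (d < radius) & ~np.eye(n, dtype=bool)          # adjacency, no self-loops
    return x, a

def message_passing_layer(h, a, w_self, w_msg):
    """One mean-aggregation message-passing layer:
    h_i' = relu(h_i @ w_self + mean_{j in N(i)} h_j @ w_msg)."""
    deg = np.maximum(a.sum(axis=1, keepdims=True), 1)  # avoid division by zero
    msg = (a @ h) / deg                                # mean over neighbors
    return np.maximum(h @ w_self + msg @ w_msg, 0.0)   # ReLU

n = 200
x, a = sample_random_geometric_graph(n)
w_self = rng.normal(scale=0.5, size=(2, 16))
w_msg = rng.normal(scale=0.5, size=(2, 16))
h = message_passing_layer(x, a, w_self, w_msg)         # node features -> hidden
graph_embedding = h.mean(axis=0)                       # readout for graph-level tasks
print(graph_embedding.shape)                           # (16,)

Roughly, as n grows, mean aggregation over sampled neighbors concentrates around an integral over the underlying geometric model, which is the intuition behind the uniform convergence result mentioned in the abstract.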
