Paper Title
Counting Substructures with Higher-Order Graph Neural Networks: Possibility and Impossibility Results
Paper Authors
Paper Abstract
While message passing Graph Neural Networks (GNNs) have become increasingly popular architectures for learning with graphs, recent works have revealed important shortcomings in their expressive power. In response, several higher-order GNNs have been proposed that substantially increase the expressive power, albeit at a large computational cost. Motivated by this gap, we explore alternative strategies and lower bounds. In particular, we analyze a new recursive pooling technique of local neighborhoods that allows different tradeoffs of computational cost and expressive power. First, we prove that this model can count subgraphs of size $k$, and thereby overcomes a known limitation of low-order GNNs. Second, we show how recursive pooling can exploit sparsity to reduce the computational complexity compared to the existing higher-order GNNs. More generally, we provide a (near) matching information-theoretic lower bound for counting subgraphs with graph representations that pool over representations of derived (sub-)graphs. We also discuss lower bounds on time complexity.
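The recursive pooling idea can be illustrated at a purely combinatorial level: the count for a graph is assembled from counts computed on neighborhood-induced subgraphs, recursively. The sketch below is a hypothetical toy example (not the paper's GNN model) that counts size-$k$ cliques this way; the function name `count_cliques` and the adjacency-dict representation are assumptions for illustration only.

```python
def count_cliques(adj, k):
    """Count k-cliques in a graph given as {node: set_of_neighbors}.

    Toy illustration of recursive pooling over local neighborhoods:
    the answer for the whole graph is pooled (summed) from answers
    computed on neighborhood-restricted candidate sets.
    """
    # Fix an arbitrary total order on nodes so each clique is counted once,
    # from its smallest vertex.
    order = {v: i for i, v in enumerate(adj)}

    def rec(cands, k):
        if k == 0:
            return 1  # the empty clique: one way to extend by nothing
        total = 0
        for v in cands:
            # Recurse into v's neighborhood, restricted to later vertices:
            # every (k-1)-clique found there extends v to a k-clique.
            later_nbrs = [u for u in cands if u in adj[v] and order[u] > order[v]]
            total += rec(later_nbrs, k - 1)
        return total

    return rec(list(adj), k)
```

Because each recursive call only touches a node's neighborhood, sparse graphs keep the candidate sets small, which mirrors the abstract's point that recursive pooling can exploit sparsity to reduce computational cost relative to global higher-order constructions.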