Paper Title
Learning Graph Structure from Convolutional Mixtures
Paper Authors
Paper Abstract
Machine learning frameworks such as graph neural networks typically rely on a given, fixed graph to exploit relational inductive biases and thus effectively learn from network data. However, when said graphs are (partially) unobserved, noisy, or dynamic, the problem of inferring graph structure from data becomes relevant. In this paper, we postulate a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem. In lieu of eigendecomposition-based spectral methods or iterative optimization solutions, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN). GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive. We corroborate GDNs' superior graph recovery performance and their generalization to larger graphs using synthetic data in supervised settings. Furthermore, we demonstrate the robustness and representation power of GDNs on real-world neuroimaging and social network datasets.
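The abstract's core idea is to unroll and truncate proximal gradient iterations into a layered network. The following is a minimal NumPy sketch of that general unrolling pattern, not the paper's actual GDN layer: the function name `gdn_forward`, the per-layer parameters `thetas` and `taus`, and the specific update rule are all illustrative assumptions, shown only to convey how one truncated proximal-gradient step becomes one "layer" with learnable parameters.

```python
import numpy as np

def relu(x):
    # ReLU doubles as the proximal operator of an L1 penalty on
    # nonnegative edge weights (soft-thresholding after a shift)
    return np.maximum(x, 0.0)

def gdn_forward(A_obs, thetas, taus):
    """Illustrative unrolled proximal-gradient forward pass.

    A_obs  : observed (convolved/noisy) adjacency matrix
    thetas : hypothetical per-layer step-size parameters
    taus   : hypothetical per-layer threshold parameters

    Each loop iteration plays the role of one network layer:
    a gradient-like linear step toward the observation, then a
    sparsity-promoting threshold, then simple graph constraints.
    """
    A = relu(A_obs)  # initial estimate of the latent graph
    for theta, tau in zip(thetas, taus):
        # linear (gradient-like) step mixing estimate and observation
        grad_step = A - theta * (A - A_obs)
        # proximal step: shift by tau, clip negatives (soft-threshold)
        A = relu(grad_step - tau)
        # enforce symmetry and no self-loops on the edge estimate
        A = 0.5 * (A + A.T)
        np.fill_diagonal(A, 0.0)
    return A
```

In a trained GDN the per-layer parameters would be fit by backpropagation through these unrolled steps, with the loss chosen per task (e.g., a classification loss for link prediction or a regression loss for edge weights), as the abstract describes.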