Paper Title
Learning Deep Graph Representations via Convolutional Neural Networks
Paper Authors
Abstract
Graph-structured data arise in many scenarios. A fundamental problem is to quantify the similarities of graphs for tasks such as classification. R-convolution graph kernels are positive-semidefinite functions that decompose graphs into substructures and compare them. One problem in the effective implementation of this idea is that the substructures are not independent, which leads to a high-dimensional feature space. In addition, graph kernels cannot capture the complex high-order interactions between vertices. To mitigate these two problems, we propose a framework called DeepMap to learn deep representations for graph feature maps. The learned deep representation for a graph is a dense and low-dimensional vector that captures complex high-order interactions in a vertex neighborhood. DeepMap extends Convolutional Neural Networks (CNNs) to arbitrary graphs by generating aligned vertex sequences and building the receptive field for each vertex. We empirically validate DeepMap on various graph classification benchmarks and demonstrate that it achieves state-of-the-art performance.
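
To make the described pipeline concrete, below is a minimal, self-contained sketch of the kind of procedure the abstract outlines: vertices are ordered into an aligned sequence, a fixed-size receptive field is built for each vertex, and a 1D convolution over the resulting sequence yields a dense, low-dimensional graph representation. The degree-based ordering, the receptive-field size k, the mean pooling, and the toy graph are illustrative assumptions for the sketch, not the authors' exact DeepMap algorithm.

```python
import torch
import torch.nn as nn

def receptive_field(adj, v, k):
    """Collect v and up to k-1 of its neighbors (sorted by degree, an
    illustrative alignment heuristic), padding with -1 if the field is short."""
    neigh = sorted(adj[v], key=lambda u: -len(adj[u]))[: k - 1]
    field = [v] + neigh
    return field + [-1] * (k - len(field))

def graph_to_tensor(adj, feats, k):
    """Order vertices by degree to form an aligned vertex sequence, then stack
    each vertex's receptive-field features into a (1, feat_dim * k, num_vertices)
    tensor suitable for Conv1d."""
    order = sorted(adj, key=lambda v: -len(adj[v]))  # aligned vertex sequence
    cols = []
    for v in order:
        field = receptive_field(adj, v, k)
        col = [feats[u] if u >= 0 else [0.0] * len(feats[0]) for u in field]
        cols.append([x for row in col for x in row])  # flatten k x feat_dim
    return torch.tensor(cols, dtype=torch.float32).t().unsqueeze(0)

# Toy graph: adjacency lists and one-hot vertex labels (hypothetical data).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]

k, feat_dim = 3, 2
x = graph_to_tensor(adj, feats, k)                             # shape (1, 6, 4)
conv = nn.Conv1d(in_channels=feat_dim * k, out_channels=8, kernel_size=2)
graph_repr = conv(x).relu().mean(dim=2)                        # dense graph vector
print(graph_repr.shape)                                        # torch.Size([1, 8])
```

In this sketch the mean over the convolved sequence stands in for whatever readout the paper uses; the key point is that aligning vertices and fixing the receptive-field size is what lets an ordinary 1D CNN operate on an arbitrary graph.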