Paper Title
Graph Auto-Encoder Via Neighborhood Wasserstein Reconstruction
Paper Authors
Paper Abstract
Graph neural networks (GNNs) have drawn significant research attention recently, mostly under the setting of semi-supervised learning. When task-agnostic representations are preferred or supervision is simply unavailable, the auto-encoder framework comes in handy with a natural graph reconstruction objective for unsupervised GNN training. However, existing graph auto-encoders are designed to reconstruct the direct links, so GNNs trained in this way are only optimized towards proximity-oriented graph mining tasks, and will fall short when the topological structures matter. In this work, we revisit the graph encoding process of GNNs, which essentially learns to encode the neighborhood information of each node into an embedding vector, and propose a novel graph decoder to reconstruct the entire neighborhood information regarding both proximity and structure via Neighborhood Wasserstein Reconstruction (NWR). Specifically, from the GNN embedding of each node, NWR jointly predicts its node degree and neighbor feature distribution, where the distribution prediction adopts an optimal-transport loss based on the Wasserstein distance. Extensive experiments on both synthetic and real-world network datasets show that the unsupervised node representations learned with NWR are much more advantageous in structure-oriented graph mining tasks, while also achieving competitive performance in proximity-oriented ones.
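To make the abstract's decoding objective concrete, below is a minimal, self-contained PyTorch sketch of the general idea: from a single node embedding, jointly predict the node's degree and a small set of neighbor feature vectors, and compare the predicted neighbor features against the true ones with an entropy-regularized (Sinkhorn) optimal-transport loss. The module and function names (`NWRDecoder`, `sinkhorn_wasserstein`), the dimensions, the number of predicted neighbors `k`, and the squared-error degree term are all illustrative assumptions, not the authors' actual architecture or loss.

```python
import torch
import torch.nn as nn


def sinkhorn_wasserstein(x, y, eps=0.1, n_iters=50):
    """Entropy-regularized optimal-transport cost between two point clouds
    x (n, d) and y (m, d), both with uniform weights (log-domain Sinkhorn)."""
    cost = torch.cdist(x, y, p=2) ** 2                      # pairwise squared distances
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n, device=x.device)
    nu = torch.full((m,), 1.0 / m, device=x.device)
    u = torch.zeros(n, device=x.device)
    v = torch.zeros(m, device=x.device)
    for _ in range(n_iters):                                 # alternating dual updates
        u = eps * (torch.log(mu) - torch.logsumexp((v[None, :] - cost) / eps, dim=1))
        v = eps * (torch.log(nu) - torch.logsumexp((u[:, None] - cost) / eps, dim=0))
    pi = torch.exp((u[:, None] + v[None, :] - cost) / eps)   # approximate transport plan
    return (pi * cost).sum()


class NWRDecoder(nn.Module):
    """Hypothetical decoder head: maps one node embedding to a predicted degree
    and k predicted neighbor-feature vectors."""

    def __init__(self, emb_dim, feat_dim, k=8):
        super().__init__()
        self.k = k
        self.feat_dim = feat_dim
        self.degree_head = nn.Linear(emb_dim, 1)
        self.neighbor_head = nn.Linear(emb_dim, k * feat_dim)

    def forward(self, z):
        deg = self.degree_head(z).squeeze(-1)                      # scalar degree estimate
        nbrs = self.neighbor_head(z).view(self.k, self.feat_dim)   # k predicted neighbor features
        return deg, nbrs


# Toy usage: one node with a 16-dim embedding and three true neighbors with 5-dim features.
torch.manual_seed(0)
decoder = NWRDecoder(emb_dim=16, feat_dim=5)
z = torch.randn(16)                       # embedding produced by some GNN encoder
true_neighbor_feats = torch.randn(3, 5)   # features of the node's actual neighbors
true_degree = torch.tensor(3.0)

pred_deg, pred_nbrs = decoder(z)
loss = (pred_deg - true_degree) ** 2 \
       + sinkhorn_wasserstein(pred_nbrs, true_neighbor_feats)
loss.backward()
print(float(loss))
```

In this sketch the Wasserstein term is what distinguishes neighborhood reconstruction from link reconstruction: it scores the predicted set of neighbor features as a distribution rather than requiring a one-to-one match against specific edges, which is why it can capture structural information beyond direct proximity.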