Paper Title
Sparse-Dyn: Sparse Dynamic Graph Multi-representation Learning via Event-based Sparse Temporal Attention Network
Paper Authors
Paper Abstract
Dynamic graph neural networks have been widely used for modeling and representation learning on graph-structured data. Current dynamic representation learning focuses on either discrete learning, which loses temporal information, or continuous learning, which involves heavy computation. In this work, we propose Sparse-Dyn, a novel dynamic graph neural network. It adaptively encodes temporal information into a sequence of patches, each containing an equal amount of temporal-topological structure. It therefore avoids the information loss caused by snapshots while achieving a time granularity close to what continuous networks can provide. In addition, we design a lightweight module, the Sparse Temporal Transformer, which computes node representations from both structural neighborhoods and temporal dynamics. Because the fully-connected attention conjunction is simplified, its computation cost is far lower than that of current state-of-the-art methods. Link prediction experiments are conducted on both continuous and discrete graph datasets. Comparisons with several state-of-the-art graph embedding baselines show that Sparse-Dyn achieves faster inference while maintaining competitive performance.
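The abstract gives no implementation details, but the core patching idea can be illustrated concretely: instead of slicing the event stream into fixed-width time snapshots, split it into patches that each hold the same number of events, so bursty periods get finer time resolution. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' code; the function name `split_into_patches` and the event layout `(src, dst, timestamp)` are assumptions.

```python
import numpy as np

def split_into_patches(events: np.ndarray, num_patches: int) -> list:
    """Split a time-ordered stream of edge events into patches that each
    contain (approximately) the same number of events. Unlike fixed-width
    snapshots, patch boundaries adapt to activity: dense bursts yield short
    time spans, quiet periods yield long ones.

    events: array of shape (E, 3), rows (src, dst, timestamp), sorted by time.
    Returns a list of sub-arrays whose sizes differ by at most one event.
    """
    return np.array_split(events, num_patches)

# Toy event stream: a burst of activity followed by a sparse tail.
events = np.array([
    (0, 1, 0.10), (1, 2, 0.12), (2, 0, 0.15), (0, 3, 0.18),  # burst
    (3, 1, 5.00), (1, 0, 9.50),                               # sparse tail
])
for i, patch in enumerate(split_into_patches(events, 3)):
    t0, t1 = patch[0, 2], patch[-1, 2]
    print(f"patch {i}: {len(patch)} events, time span [{t0:.2f}, {t1:.2f}]")
```

Note how the first patch covers only ~0.05 time units while the last covers several: equal event content, variable time span. This is what lets the model avoid snapshot-induced information loss while staying cheaper than fully continuous processing.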