Paper Title


DyTed: Disentangled Representation Learning for Discrete-time Dynamic Graph

Paper Authors

Kaike Zhang, Qi Cao, Gaolin Fang, Bingbing Xu, Hongjian Zou, Huawei Shen, Xueqi Cheng

Abstract


Unsupervised representation learning for dynamic graphs has attracted a lot of research attention in recent years. Compared with static graphs, a dynamic graph is a comprehensive embodiment of both the intrinsic stable characteristics of nodes and their time-related dynamic preferences. However, existing methods generally mix these two types of information into a single representation space, which may lead to poor explainability, less robustness, and limited ability when applied to different downstream tasks. To solve the above problems, in this paper we propose a novel disenTangled representation learning framework for discrete-time Dynamic graphs, namely DyTed. We specially design a temporal-clips contrastive learning task together with structure contrastive learning to effectively identify the time-invariant and time-varying representations respectively. To further enhance the disentanglement of these two types of representation, we propose a disentanglement-aware discriminator under an adversarial learning framework from the perspective of information theory. Extensive experiments on Tencent and five commonly used public datasets demonstrate that DyTed, as a general framework that can be applied to existing methods, achieves state-of-the-art performance on various downstream tasks and is more robust against noise.
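To make the temporal-clips idea concrete: the time-invariant part of a node's representation should stay consistent across different temporal clips of the same graph sequence, so the same node in two clips forms a positive pair and other nodes serve as negatives under an InfoNCE-style objective. The sketch below is a minimal numpy illustration under that reading of the abstract; the function names (`infonce`, `temporal_clip_loss`), the pooling of clips into one vector per node, and the loss details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def infonce(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style loss for one anchor (hypothetical helper, not DyTed's exact loss).

    anchor, positive: (d,) embeddings that should agree.
    negatives: (k, d) embeddings that should differ from the anchor.
    """
    def sim(a, b):  # cosine similarity
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = np.exp(sim(anchor, positive) / tau)
    neg = sum(np.exp(sim(anchor, n) / tau) for n in negatives)
    return -np.log(pos / (pos + neg))

def temporal_clip_loss(clip_a, clip_b, tau=0.5):
    """Temporal-clips contrastive objective (sketch).

    clip_a, clip_b: (n_nodes, d) time-invariant embeddings of the SAME nodes,
    each pooled over a different temporal clip of the snapshot sequence.
    For node i, clip_a[i] is the anchor, clip_b[i] the positive, and the
    remaining rows of clip_b the negatives.
    """
    n = clip_a.shape[0]
    losses = []
    for i in range(n):
        negatives = np.delete(clip_b, i, axis=0)
        losses.append(infonce(clip_a[i], clip_b[i], negatives, tau))
    return float(np.mean(losses))
```

If the time-invariant encoder works, embeddings of the same node from two clips nearly coincide and the loss is low; pairing a node with a different node's embedding (as the adversarial discriminator would penalize) drives the loss up. The time-varying part would instead be trained with structure contrastive learning per snapshot, which this sketch omits.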
