Paper Title

A Simple Hypergraph Kernel Convolution based on Discounted Markov Diffusion Process

Paper Authors

Fuyang Li, Jiying Zhang, Xi Xiao, Bin Zhang, Dijun Luo

Paper Abstract

Kernels on discrete structures evaluate pairwise similarities between objects that capture semantic and inherent topological information. Existing kernels on discrete structures are built from topology information alone (such as the adjacency matrix of a graph), without considering the original attributes of objects. This paper proposes a two-phase paradigm to aggregate comprehensive information on discrete structures, leading to a Discounted Markov Diffusion Learnable Kernel (DMDLK). Specifically, based on the underlying projection of the DMDLK, we design a Simple Hypergraph Kernel Convolution (SHKC) for the hidden representations of vertices. SHKC can adjust the number of diffusion steps rather than stacking convolution layers to aggregate information from long-range neighborhoods, which prevents the over-smoothing issue of existing hypergraph convolutions. Moreover, we utilize the uniform stability bound theorem in transductive learning to analyze, from a theoretical perspective, the critical factors behind the effectiveness and generalization ability of SHKC. Experimental results on several benchmark datasets for node classification tasks verify the superior performance of SHKC over state-of-the-art methods.
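The core mechanism the abstract describes, aggregating vertex attributes through a discounted random-walk diffusion on a hypergraph instead of stacking convolution layers, can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact formulation: the transition operator P = D_v⁻¹ H W D_e⁻¹ Hᵀ is the standard hypergraph random walk, and `gamma` (the discount factor) and `K` (the diffusion depth) are hypothetical hyperparameter names chosen for the sketch.

```python
import numpy as np

def hypergraph_transition(H, w=None):
    """Row-stochastic random-walk transition matrix of a hypergraph.

    H: (n_vertices, n_edges) incidence matrix.
    w: optional hyperedge weights (defaults to all ones).
    A step goes from a vertex to an incident hyperedge (weighted by w),
    then to a uniformly chosen vertex inside that hyperedge:
    P = D_v^{-1} H W D_e^{-1} H^T.
    """
    H = np.asarray(H, dtype=float)
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, dtype=float)
    Dv = H @ w            # weighted vertex degrees
    De = H.sum(axis=0)    # hyperedge sizes
    return (H * w / Dv[:, None]) @ (H.T / De[:, None])

def discounted_diffusion(P, X, K=8, gamma=0.9):
    """Truncated discounted diffusion of features X along P.

    Returns a normalized geometric mixture
    (sum_{k=0}^{K} gamma^k P^k X) / (sum_{k=0}^{K} gamma^k),
    so increasing K widens the receptive field without adding layers.
    """
    Z = np.asarray(X, dtype=float)
    out = np.zeros_like(Z)
    total = 0.0
    for k in range(K + 1):
        out += gamma**k * Z
        total += gamma**k
        Z = P @ Z          # one more diffusion step
    return out / total
```

In a transductive node-classification pipeline, the diffused features would then be fed to a learnable projection (e.g., a linear classifier), mirroring how SGC-style models decouple propagation from learning; here the diffusion depth `K` plays the role that layer count plays in stacked convolutions.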
