Paper Title

Layer-refined Graph Convolutional Networks for Recommendation

Paper Authors

Xin Zhou, Donghui Lin, Yong Liu, Chunyan Miao

Paper Abstract

Recommendation models utilizing Graph Convolutional Networks (GCNs) have achieved state-of-the-art performance, as they can integrate both the node information and the topological structure of the user-item interaction graph. However, these GCN-based recommendation models not only suffer from over-smoothing when stacking too many layers but also bear performance degeneration resulting from the existence of noise in user-item interactions. In this paper, we first identify a recommendation dilemma of over-smoothing and solution collapsing in current GCN-based models. Specifically, these models usually aggregate all layer embeddings for node updating and achieve their best recommendation performance within a few layers because of over-smoothing. Conversely, if we place learnable weights on layer embeddings for node updating, the weight space will always collapse to a fixed point, at which the weighting of the ego layer almost holds all. We propose a layer-refined GCN model, dubbed LayerGCN, that refines layer representations during information propagation and node updating of GCN. Moreover, previous GCN-based recommendation models aggregate all incoming information from neighbors without distinguishing the noise nodes, which deteriorates the recommendation performance. Our model further prunes the edges of the user-item interaction graph following a degree-sensitive probability instead of the uniform distribution. Experimental results show that the proposed model outperforms the state-of-the-art models significantly on four public datasets with fast training convergence. The implementation code of the proposed method is available at https://github.com/enoche/ImRec.
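The abstract states that LayerGCN prunes edges of the user-item interaction graph according to a degree-sensitive probability rather than a uniform one. The short NumPy sketch below illustrates one plausible reading of that idea, where edges whose endpoints have high degree are more likely to be dropped; the product-of-degrees score, the prune_ratio parameter, and the function name degree_sensitive_prune are illustrative assumptions rather than the paper's exact formulation (the official implementation is in the linked ImRec repository).

```python
import numpy as np

def degree_sensitive_prune(edges, num_users, num_items, prune_ratio=0.2, seed=0):
    """Sketch of degree-sensitive edge pruning on a user-item bipartite graph.

    `edges` is a sequence of (user_id, item_id) pairs. Instead of dropping a
    uniform random subset (plain edge dropout), each edge receives a pruning
    probability that grows with the degrees of its two endpoints, so edges
    attached to highly connected nodes are more likely to be removed.
    """
    rng = np.random.default_rng(seed)
    edges = np.asarray(edges)

    # Endpoint degrees computed from the interaction graph.
    user_deg = np.bincount(edges[:, 0], minlength=num_users).astype(float)
    item_deg = np.bincount(edges[:, 1], minlength=num_items).astype(float)

    # Un-normalized pruning score per edge: product of endpoint degrees.
    # (Illustrative choice; the abstract only says pruning follows a
    # degree-sensitive probability instead of a uniform distribution.)
    score = user_deg[edges[:, 0]] * item_deg[edges[:, 1]]
    prob = score / score.sum()

    # Sample the edges to drop according to the degree-sensitive distribution.
    num_drop = int(prune_ratio * len(edges))
    drop_idx = rng.choice(len(edges), size=num_drop, replace=False, p=prob)

    keep_mask = np.ones(len(edges), dtype=bool)
    keep_mask[drop_idx] = False
    return edges[keep_mask]

if __name__ == "__main__":
    toy_edges = [(0, 0), (0, 1), (0, 2), (1, 0), (2, 3)]
    print(degree_sensitive_prune(toy_edges, num_users=3, num_items=4, prune_ratio=0.4))
```

In this toy run, the high-degree user 0 and item 0 make their incident edges the most likely to be pruned, while the edge (2, 3) between two low-degree nodes is usually kept.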
