Paper Title


CmnRec: Sequential Recommendations with Chunk-accelerated Memory Network

Authors

Shilin Qu, Fajie Yuan, Guibing Guo, Liguang Zhang, Wei Wei

Abstract


Recently, Memory-based Neural Recommenders (MNR) have demonstrated superior predictive accuracy in the task of sequential recommendations, particularly for modeling long-term item dependencies. However, a typical MNR requires complex memory access operations, i.e., both writing and reading via a controller (e.g., an RNN) at every time step. These frequent operations dramatically increase network training time, making MNR difficult to deploy in industrial-scale recommender systems. In this paper, we present a novel, general Chunk framework to significantly accelerate MNR. Specifically, our framework divides proximal information units into chunks and performs memory access only at certain time steps, whereby the number of memory operations can be greatly reduced. We investigate two ways to implement effective chunking, i.e., PEriodic Chunk (PEC) and Time-Sensitive Chunk (TSC), to preserve and recover important recurrent signals in the sequence. Since a chunk-accelerated MNR model takes into account more proximal information units than a single time step provides, it can largely remove the influence of noise in the item sequence and thus improve the stability of MNR. In this way, the proposed chunk mechanism leads not only to faster training and prediction, but even to slightly better results. Experimental results on three real-world datasets (weishi, ml-10M and ml-latest) show that our chunk framework notably reduces the running time of MNR (e.g., up to 7x for training and 10x for inference on ml-latest), while achieving competitive performance.
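The core idea of the chunk mechanism can be illustrated with a minimal sketch: instead of one memory write per time step, a PEriodic Chunk (PEC) buffers a fixed number of consecutive item embeddings and issues a single aggregated write per chunk. This is an illustrative sketch only; the mean-pooling aggregation and the function name `periodic_chunk_write` are assumptions for exposition, not the paper's exact controller or write operator.

```python
import numpy as np

def periodic_chunk_write(item_embeddings, chunk_size):
    """Sketch of PEriodic Chunk (PEC): buffer `chunk_size` consecutive
    item embeddings and issue one aggregated memory write per chunk,
    cutting the number of memory operations by roughly `chunk_size`x.
    Mean pooling is an assumed aggregation, not the paper's operator."""
    writes = []   # one entry per memory access actually performed
    buffer = []   # proximal information units awaiting a chunked write
    for emb in item_embeddings:
        buffer.append(emb)
        if len(buffer) == chunk_size:
            # Single memory access for the whole chunk of proximal items.
            writes.append(np.mean(buffer, axis=0))
            buffer = []
    if buffer:  # flush a trailing partial chunk at sequence end
        writes.append(np.mean(buffer, axis=0))
    return writes

# A sequence of 10 item embeddings with chunk size 4 triggers only
# 3 memory writes instead of 10.
seq = [np.ones(8) * t for t in range(10)]
chunked = periodic_chunk_write(seq, chunk_size=4)
print(len(chunked))  # 3
```

A time-sensitive variant (TSC) would replace the fixed `chunk_size` boundary with a data-dependent trigger (e.g., gaps between interaction timestamps), so that chunk boundaries align with natural breaks in the user's session.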
