Paper Title
Fast Adaptively Weighted Matrix Factorization for Recommendation with Implicit Feedback
Paper Authors
Abstract
Recommendation from implicit feedback is a highly challenging task due to the lack of reliable observed negative data. A popular and effective approach for implicit recommendation is to treat unobserved data as negative but downweight its confidence. Naturally, how to assign confidence weights and how to handle the large volume of unobserved data are two key problems for implicit recommendation models. However, existing methods either pursue fast learning by manually assigning simple confidence weights, which lacks flexibility and may introduce empirical bias when evaluating users' preferences, or adaptively infer personalized confidence weights but suffer from low efficiency. To achieve both adaptive weight assignment and efficient model learning, we propose a fast adaptively weighted matrix factorization (FAWMF) based on a variational auto-encoder. Personalized data confidence weights are adaptively assigned by a parameterized neural network (function), and the network can be inferred from the data. Further, to support fast and stable learning of FAWMF, a new batch-based learning algorithm, fBGD, has been developed; it trains on all feedback data, yet its complexity is linear in the number of observed data. Extensive experiments on real-world datasets demonstrate the superiority of the proposed FAWMF and its learning algorithm fBGD.
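To make the baseline the abstract builds on concrete, here is a minimal sketch of classic weighted matrix factorization for implicit feedback, where observed interactions receive full confidence and unobserved ones a small fixed weight `c0`. This is an illustrative assumption, not the paper's FAWMF or fBGD: FAWMF replaces the fixed `c0` with personalized weights produced by a learned neural network, and fBGD replaces the plain gradient descent used here.

```python
# Sketch only: fixed-confidence WMF, the baseline FAWMF improves upon.
# All names (wmf, c0, etc.) are illustrative, not from the paper.
import numpy as np

def wmf(R, k=2, c0=0.1, lr=0.05, reg=0.01, epochs=200, seed=0):
    """Factorize an implicit-feedback matrix R (1 = observed) into U @ V.T."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    # Confidence weights: full weight on observed cells, small c0 elsewhere.
    W = np.where(R > 0, 1.0, c0)
    for _ in range(epochs):
        E = W * (R - U @ V.T)        # confidence-weighted residual
        U += lr * (E @ V - reg * U)  # gradient step on the weighted loss
        V += lr * (E.T @ U - reg * V)
    return U, V

# Toy implicit feedback: rows are users, columns are items.
R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 0, 1, 1]], dtype=float)
U, V = wmf(R)
scores = U @ V.T  # predicted preferences; rank unobserved items by score
```

Note that the gradient here touches every user-item cell, which is exactly the quadratic cost the paper's fBGD avoids: fBGD trains on all feedback data while keeping complexity linear in the number of observed interactions.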