Paper Title
Causal Disentanglement for Semantics-Aware Intent Learning in Recommendation
Paper Authors
Paper Abstract
Traditional recommendation models trained on observational interaction data have had a large impact across a wide range of applications, yet they face bias problems that obscure users' true intents and thus deteriorate recommendation effectiveness. Existing methods treat this problem as one of eliminating bias for robust recommendation, e.g., by re-weighting training samples or learning disentangled representations. Disentangled representation methods, as the state of the art, eliminate bias by revealing the cause-effect relations underlying bias generation. However, how to design semantics-aware and unbiased representations of users' true intents remains largely unexplored. To bridge this gap, we are the first to propose an unbiased and semantics-aware disentanglement learning method, called CaDSI (Causal Disentanglement for Semantics-Aware Intent Learning), from a causal perspective. In particular, CaDSI explicitly models the causal relations underlying the recommendation task, and thus produces semantics-aware representations by disentangling users' true intents with awareness of specific item contexts. Moreover, a causal intervention mechanism is designed to eliminate the confounding bias stemming from context information, which further aligns the semantics-aware representations with users' true intents. Extensive experiments and case studies validate both the robustness and the interpretability of our proposed model.
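The causal intervention mentioned in the abstract is commonly instantiated, in causal-inference terms, as a backdoor adjustment that blocks the confounding path from context to interactions. A generic sketch of that formulation (the symbols $U$, $Y$, and $C$ are illustrative assumptions, not notation taken from the abstract):

```latex
% Backdoor adjustment: a standard way to realize an intervention do(U=u)
% when a confounder C (here, context information) affects both the
% user-intent representation U and the observed interaction Y.
P\bigl(Y \mid do(U = u)\bigr) \;=\; \sum_{c} P\bigl(Y \mid U = u,\, C = c\bigr)\, P(c)
```

Intuitively, instead of conditioning on $U = u$ as observed (which lets the context $C$ bias the estimate), the intervention averages the conditional outcome over the confounder's marginal distribution $P(c)$, severing the $C \rightarrow U$ dependence.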