Paper Title
Pessimistic Off-Policy Optimization for Learning to Rank
Paper Authors
Paper Abstract
Off-policy learning is a framework for optimizing policies without deploying them, using data collected by another policy. In recommender systems, this is especially challenging due to the imbalance in logged data: some items are recommended and thus logged more frequently than others. This is further perpetuated when recommending a list of items, as the action space is combinatorial. To address this challenge, we study pessimistic off-policy optimization for learning to rank. The key idea is to compute lower confidence bounds on parameters of click models and then return the list with the highest pessimistic estimate of its value. This approach is computationally efficient, and we analyze it. We study its Bayesian and frequentist variants and overcome the limitation of unknown prior by incorporating empirical Bayes. To show the empirical effectiveness of our approach, we compare it to off-policy optimizers that use inverse propensity scores or neglect uncertainty. Our approach outperforms all baselines and is both robust and general.
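The abstract sketches the core idea: estimate click-model parameters from logged data, take a lower confidence bound (LCB) on each, and recommend the list with the highest pessimistic value. Below is a minimal, hedged sketch of that idea under a position-based click model with known examination probabilities; the function name `pessimistic_ranking`, the Beta-posterior LCB, and all numbers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import beta

def pessimistic_ranking(clicks, impressions, exam_prob, k, alpha=0.05,
                        prior_a=1.0, prior_b=1.0):
    """Illustrative sketch: rank items by a lower confidence bound (LCB)
    on their attraction probability under a position-based click model.

    clicks[i], impressions[i] -- logged click/examination counts for item i.
    exam_prob[p]              -- assumed known examination probability of slot p.
    k                         -- length of the recommended list.
    Returns item indices assigned to slots 0..k-1.
    """
    # Bayesian variant: Beta(prior + clicks, prior + non-clicks) posterior;
    # the pessimistic estimate is the alpha-quantile of that posterior.
    a = prior_a + clicks
    b = prior_b + (impressions - clicks)
    lcb = beta.ppf(alpha, a, b)

    # Under a position-based model the list value is
    # sum_p exam_prob[p] * attraction(item at p), so the pessimistic-optimal
    # list places the items with the highest LCBs in the most examined slots.
    top_items = np.argsort(-lcb)[:k]                     # best LCB first
    slots_by_exam = np.argsort(-np.asarray(exam_prob[:k]))
    ranked = np.empty(k, dtype=int)
    ranked[slots_by_exam] = top_items                    # best item -> most examined slot
    return ranked

# Usage with synthetic, imbalanced logged data (hypothetical numbers).
rng = np.random.default_rng(0)
impressions = rng.integers(10, 1000, size=20)            # some items logged far more often
clicks = rng.binomial(impressions, 0.1)
print(pessimistic_ranking(clicks, impressions, exam_prob=[1.0, 0.6, 0.4], k=3))
```

A frequentist variant would swap the Beta quantile for, e.g., a Hoeffding-style bound `mean - sqrt(log(1/alpha) / (2 * impressions))`; the empirical-Bayes variant mentioned in the abstract would instead fit `prior_a` and `prior_b` from the logged data rather than fixing them.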