Paper Title
Online SuBmodular + SuPermodular (BP) Maximization with Bandit Feedback
Paper Authors
Paper Abstract
In the context of online interactive machine learning with combinatorial objectives, we extend purely submodular prior work to more general non-submodular objectives. These include: (1) objectives that are additively decomposable into a sum of two terms (a monotone submodular term and a monotone supermodular term, known as a BP decomposition); and (2) objectives that are only weakly submodular. In both cases, this allows representing not only competitive (submodular) but also complementary (supermodular) relationships between objects, extending this setting to a broader range of applications (e.g., movie recommendation, medical treatment) where such relationships are beneficial. Moreover, in the two-term case, we study not only the more typical monolithic feedback approach but also a novel framework in which feedback is available separately for each term. With real-world practicality and scalability in mind, we integrate Nyström sketching techniques to significantly reduce the computational cost, including in the purely submodular case. In the Gaussian process contextual bandit setting, we show sublinear theoretical regret bounds in all cases. We also empirically demonstrate good applicability to recommendation systems and data subset selection.
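As a minimal sketch of the standard definitions the abstract relies on (stated in the usual submodular-optimization conventions; the notation below is illustrative and not necessarily the paper's own): a set function $f : 2^V \to \mathbb{R}$ is submodular if for all $A \subseteq B \subseteq V$ and $v \notin B$,
$$ f(A \cup \{v\}) - f(A) \;\ge\; f(B \cup \{v\}) - f(B), $$
and supermodular if the reverse inequality holds (equivalently, $-f$ is submodular). A BP objective combines the two,
$$ f(S) = g(S) + h(S), \qquad g \text{ monotone non-decreasing submodular}, \quad h \text{ monotone non-decreasing supermodular}, $$
so that $g$ captures diminishing-returns (competitive) interactions and $h$ captures increasing-returns (complementary) interactions. A monotone function $f$ is weakly submodular with submodularity ratio $\gamma \in (0, 1]$ if for all disjoint $A, B \subseteq V$,
$$ \sum_{v \in B} \big( f(A \cup \{v\}) - f(A) \big) \;\ge\; \gamma \, \big( f(A \cup B) - f(A) \big), $$
with $\gamma = 1$ recovering exact submodularity.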