Paper Title


Combinatorial Pure Exploration with Full-Bandit or Partial Linear Feedback

Authors

Yihan Du, Yuko Kuroki, Wei Chen

Abstract


In this paper, we first study the problem of combinatorial pure exploration with full-bandit feedback (CPE-BL), where a learner is given a combinatorial action space $\mathcal{X} \subseteq \{0,1\}^d$, and in each round the learner pulls an action $x \in \mathcal{X}$ and receives a random reward with expectation $x^{\top} \theta$, with $\theta \in \mathbb{R}^d$ a latent and unknown environment vector. The objective is to identify the optimal action with the highest expected reward, using as few samples as possible. For CPE-BL, we design the first {\em polynomial-time adaptive} algorithm, whose sample complexity matches the lower bound (within a logarithmic factor) for a family of instances and has a light dependence on $\Delta_{\min}$ (the smallest gap between the optimal action and sub-optimal actions). Furthermore, we propose a novel generalization of CPE-BL with flexible feedback structures, called combinatorial pure exploration with partial linear feedback (CPE-PL), which encompasses several families of sub-problems including full-bandit feedback, semi-bandit feedback, partial feedback, and nonlinear reward functions. In CPE-PL, each pull of action $x$ reports a random feedback vector with expectation $M_{x} \theta$, where $M_x \in \mathbb{R}^{m_x \times d}$ is a transformation matrix for $x$, and gains a random (possibly nonlinear) reward related to $x$. For CPE-PL, we develop the first {\em polynomial-time} algorithm, which simultaneously addresses limited feedback, general reward functions, and combinatorial action spaces, and we provide its sample complexity analysis. Our empirical evaluation demonstrates that our algorithms run orders of magnitude faster than the existing ones; our CPE-BL algorithm is robust across different $\Delta_{\min}$ settings, while our CPE-PL algorithm is the only one returning correct answers for nonlinear reward functions.
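As an illustration of the CPE-BL feedback model described in the abstract (not the paper's algorithm), the sketch below simulates full-bandit feedback: each pull of an action $x \in \{0,1\}^d$ returns a single noisy scalar with expectation $x^{\top}\theta$. The environment vector, action set, sample budget, and the naive uniform-sampling baseline are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
theta = np.array([0.9, 0.1, 0.5, 0.3])   # latent environment vector (assumed for illustration)
actions = np.array([[1, 0, 1, 0],        # a small combinatorial action set X subset of {0,1}^d
                    [0, 1, 0, 1],
                    [1, 1, 0, 0]])

def pull(x):
    """Full-bandit feedback: one noisy scalar reward per pull, mean x^T theta."""
    return x @ theta + rng.normal(scale=0.1)

# Naive uniform-sampling baseline (for exposition only; the paper's adaptive
# algorithm allocates samples far more efficiently).
samples = 200
means = np.array([np.mean([pull(x) for _ in range(samples)]) for x in actions])
best = actions[np.argmax(means)]
print(best)  # the action with the highest expected reward x^T theta
```

Here the true means are $1.4$, $0.4$, and $1.0$, so the minimum gap $\Delta_{\min} = 0.4$ and uniform sampling easily separates the optimal action; the paper's contribution concerns doing this adaptively with near-optimal sample complexity.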
