Paper Title

Learning Near Optimal Policies with Low Inherent Bellman Error

Paper Authors

Andrea Zanette, Alessandro Lazaric, Mykel Kochenderfer, Emma Brunskill

Paper Abstract

We study the exploration problem with approximate linear action-value functions in episodic reinforcement learning under the notion of low inherent Bellman error, a condition normally employed to show convergence of approximate value iteration. First we relate this condition to other common frameworks and show that it is strictly more general than the low rank (or linear) MDP assumption of prior work. Second we provide an algorithm with a high probability regret bound $\widetilde O(\sum_{t=1}^H d_t \sqrt{K} + \sum_{t=1}^H \sqrt{d_t} \IBE K)$ where $H$ is the horizon, $K$ is the number of episodes, $\IBE$ is the value of the inherent Bellman error and $d_t$ is the feature dimension at timestep $t$. In addition, we show that the result is unimprovable beyond constants and logs by showing a matching lower bound. This has two important consequences: 1) it shows that exploration is possible using only \emph{batch assumptions} with an algorithm that achieves the optimal statistical rate for the setting we consider, which is more general than prior work on low-rank MDPs; 2) the lack of closedness (measured by the inherent Bellman error) is only amplified by $\sqrt{d_t}$ despite working in the online setting. Finally, the algorithm reduces to the celebrated \textsc{LinUCB} when $H=1$ but with a different choice of the exploration parameter that allows handling misspecified contextual linear bandits. While computational tractability questions remain open for the MDP setting, this enriches the class of MDPs with a linear representation for the action-value function where statistically efficient reinforcement learning is possible.
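
For readers unfamiliar with the central quantity, the display below sketches how the inherent Bellman error is typically defined for linearly parameterized action-value functions with feature maps $\phi_t$ and parameter sets $\mathcal{B}_t$; the constraint sets and norm choices here are illustrative assumptions, and the paper's precise definition may differ in those details.

\[
  \mathcal{I}_t \;=\; \sup_{\theta_{t+1} \in \mathcal{B}_{t+1}} \; \inf_{\theta_t \in \mathcal{B}_t} \; \sup_{(s,a)} \Big| \phi_t(s,a)^\top \theta_t - \big(\mathcal{T}_t\, Q_{t+1,\theta_{t+1}}\big)(s,a) \Big|, \qquad Q_{t+1,\theta_{t+1}}(s,a) = \phi_{t+1}(s,a)^\top \theta_{t+1},
\]

where $\mathcal{T}_t$ denotes the Bellman operator at timestep $t$; the error vanishes exactly when the linear function class is closed under the Bellman backup, and $\IBE$ in the regret bound can be read as a bound on the per-timestep errors $\mathcal{I}_t$.

The connection to \textsc{LinUCB} at $H=1$ can likewise be read through the usual optimistic action rule. The inflated radius below, which adds a term scaling the misspecification level with the number of rounds, is an assumed illustration of how a modified exploration parameter might absorb model error; it is not the paper's exact choice of constants.

\[
  a_k = \operatorname*{arg\,max}_a \; \phi(x_k,a)^\top \hat\theta_k + \beta_k \, \|\phi(x_k,a)\|_{\Sigma_k^{-1}}, \qquad \beta_k \;\approx\; \beta_k^{\mathrm{lin}} + \IBE\,\sqrt{k},
\]

with $\beta_k^{\mathrm{lin}}$ the standard \textsc{LinUCB} confidence radius, $\hat\theta_k$ the ridge estimate of the reward parameter, and $\Sigma_k$ the regularized design matrix after $k$ rounds.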
