Paper Title

The Role of Coverage in Online Reinforcement Learning

Paper Authors

Tengyang Xie, Dylan J. Foster, Yu Bai, Nan Jiang, Sham M. Kakade

Paper Abstract

Coverage conditions -- which assert that the data logging distribution adequately covers the state space -- play a fundamental role in determining the sample complexity of offline reinforcement learning. While such conditions might seem irrelevant to online reinforcement learning at first glance, we establish a new connection by showing -- somewhat surprisingly -- that the mere existence of a data distribution with good coverage can enable sample-efficient online RL. Concretely, we show that coverability -- that is, existence of a data distribution that satisfies a ubiquitous coverage condition called concentrability -- can be viewed as a structural property of the underlying MDP, and can be exploited by standard algorithms for sample-efficient exploration, even when the agent does not know said distribution. We complement this result by proving that several weaker notions of coverage, despite being sufficient for offline RL, are insufficient for online RL. We also show that existing complexity measures for online RL, including Bellman rank and Bellman-Eluder dimension, fail to optimally capture coverability, and propose a new complexity measure, the sequential extrapolation coefficient, to provide a unification.
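
To make the terminology concrete, the LaTeX snippet below sketches one standard way concentrability and coverability are written; the layer index h, the occupancy measures d^{\pi}_h, and the policy class \Pi are notational assumptions and may differ from the paper's exact formulation.

% Concentrability of a fixed data distribution \mu (the offline coverage condition):
% the worst-case ratio between any policy's state-action occupancy and \mu.
\[
  C_{\mathrm{conc}}(\mu) \;=\; \sup_{\pi \in \Pi}\, \max_{h}\, \sup_{s,a}\,
  \frac{d^{\pi}_h(s,a)}{\mu_h(s,a)}.
\]
% Coverability: the best concentrability achievable by any data distribution.
% No samples from \mu are ever observed; only its existence is required,
% which is what makes this a structural property of the MDP.
\[
  C_{\mathrm{cov}} \;=\; \inf_{\mu}\, \sup_{\pi \in \Pi}\, \max_{h}\, \sup_{s,a}\,
  \frac{d^{\pi}_h(s,a)}{\mu_h(s,a)}.
\]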
