Paper Title
Unified Algorithms for RL with Decision-Estimation Coefficients: PAC, Reward-Free, Preference-Based Learning, and Beyond
Paper Authors
Paper Abstract
Modern Reinforcement Learning (RL) is more than just learning the optimal policy; alternative learning goals such as exploring the environment, estimating the underlying model, and learning from preference feedback are all of practical importance. While provably sample-efficient algorithms have been proposed for each specific goal, these algorithms often depend strongly on the particular learning goal and thus admit correspondingly different structures. It is a pressing open question whether these learning goals can instead be tackled by a single unified algorithm. We make progress on this question by developing a unified algorithmic framework for a large class of learning goals, building on the Decision-Estimation Coefficient (DEC) framework. Our framework handles many learning goals, such as no-regret RL, PAC RL, reward-free learning, model estimation, and preference-based learning, simply by instantiating the same generic complexity measure, called the "Generalized DEC", along with a corresponding generic algorithm. The Generalized DEC also yields a sample complexity lower bound for each specific learning goal. As applications, we propose the "decouplable representation" as a natural sufficient condition for bounding Generalized DECs, and use it to obtain, as direct corollaries, many new sample-efficient results (and recover existing results) for a wide range of learning goals and problem classes. Finally, as a connection, we re-analyze two existing optimistic model-based algorithms, based on Posterior Sampling and Maximum Likelihood Estimation, showing that they enjoy sample complexity bounds under structural conditions similar to the DEC.
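For context, the standard (no-regret) Decision-Estimation Coefficient that the abstract's "Generalized DEC" extends takes roughly the following form; this is a sketch of the definition from the prior DEC literature, not a formula given in this abstract, and the symbols ($f^M$, $\pi_M$, the Hellinger distance $D_{\mathrm{H}}$) follow that literature's conventions:

```latex
% Sketch of the standard DEC (prior literature, not this paper's text).
% \mathcal{M}: model class; \bar{M}: reference model; \Pi: decision space;
% f^{M}(\pi): value of decision \pi under model M; \pi_M: optimal decision for M;
% D_{\mathrm{H}}^2: squared Hellinger distance between observation distributions.
\mathrm{dec}_{\gamma}(\mathcal{M}, \bar{M})
  = \inf_{p \in \Delta(\Pi)} \; \sup_{M \in \mathcal{M}} \;
    \mathbb{E}_{\pi \sim p}\!\left[
      f^{M}(\pi_M) - f^{M}(\pi)
      - \gamma \, D_{\mathrm{H}}^2\!\big(M(\pi), \bar{M}(\pi)\big)
    \right]
```

The min-max structure trades off the suboptimality of the learner's decision distribution $p$ against the information (model-estimation error) it acquires; the abstract's unified framework replaces the regret term with a goal-specific objective for each learning goal.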