Paper Title
Learning Bellman Complete Representations for Offline Policy Evaluation
Paper Authors
Paper Abstract
We study representation learning for Offline Reinforcement Learning (RL), focusing on the important task of Offline Policy Evaluation (OPE). Recent work shows that, in contrast to supervised learning, realizability of the Q-function is not enough for learning it. Two sufficient conditions for sample-efficient OPE are Bellman completeness and coverage. Prior work often assumes that representations satisfying these conditions are given, with results being mostly theoretical in nature. In this work, we propose BCRL, which directly learns from data an approximately linear Bellman complete representation with good coverage. With this learned representation, we perform OPE using Least-Squares Policy Evaluation (LSPE) with linear functions in our learned representation. We present an end-to-end theoretical analysis, showing that our two-stage algorithm enjoys polynomial sample complexity provided some representation in the rich class considered is linear Bellman complete. Empirically, we extensively evaluate our algorithm on challenging, image-based continuous control tasks from the DeepMind Control Suite. We show our representation enables better OPE compared to previous representation learning methods developed for off-policy RL (e.g., CURL, SPR). BCRL achieves competitive OPE error with the state-of-the-art method Fitted Q-Evaluation (FQE), and beats FQE when evaluating beyond the initial state distribution. Our ablations show that both the linear Bellman completeness and coverage components of our method are crucial.
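To make the second stage concrete, the following is a minimal sketch of Least-Squares Policy Evaluation with linear features, under stated assumptions: synthetic random features stand in for the learned BCRL encoder, `phi` denotes hypothetical features of dataset pairs (s, a), and `phi_next` denotes features of (s', pi(s')) for the evaluated policy. LSPE repeatedly solves the projected Bellman regression w ← argmin ‖Φw − (r + γΦ'w_old)‖².

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a learned representation (not the paper's encoder):
# n transitions, d-dimensional features.
n, d, gamma = 500, 8, 0.9
phi = rng.normal(size=(n, d))               # features of (s, a)
phi_next = 0.5 * rng.normal(size=(n, d))    # features of (s', pi(s'))
rewards = rng.normal(size=n)

# Precompute the least-squares projector (Phi^T Phi)^{-1} Phi^T.
proj = np.linalg.pinv(phi.T @ phi) @ phi.T

# LSPE iteration: regress Bellman targets onto the features until convergence.
w = np.zeros(d)
for _ in range(200):
    targets = rewards + gamma * (phi_next @ w)   # r + gamma * Q_hat(s', pi(s'))
    w = proj @ targets

q_hat = phi @ w  # estimated Q^pi values on the dataset
```

With well-conditioned features (good coverage) the iteration is a contraction and `w` converges to the fixed point of the projected Bellman operator; poor coverage makes `phi.T @ phi` ill-conditioned, which is one reason the coverage term matters.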