Paper Title
Contrastive Value Learning: Implicit Models for Simple Offline RL
Paper Authors
Paper Abstract
Model-based reinforcement learning (RL) methods are appealing in the offline setting because they allow an agent to reason about the consequences of actions without interacting with the environment. Prior methods learn a 1-step dynamics model, which predicts the next state given the current state and action. These models do not immediately tell the agent which actions to take, but must be integrated into a larger RL framework. Can we model the environment dynamics in a different way, such that the learned model does directly indicate the value of each action? In this paper, we propose Contrastive Value Learning (CVL), which learns an implicit, multi-step model of the environment dynamics. This model can be learned without access to reward functions, but nonetheless can be used to directly estimate the value of each action, without requiring any TD learning. Because this model represents the multi-step transitions implicitly, it avoids having to predict high-dimensional observations and thus scales to high-dimensional tasks. Our experiments demonstrate that CVL outperforms prior offline RL methods on complex continuous control benchmarks.
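The abstract does not give the training objective, but its description (an implicit multi-step model learned contrastively, without ever predicting raw observations) is consistent with an InfoNCE-style loss between state-action embeddings and embeddings of states sampled from the discounted future. The sketch below illustrates that reading only; the encoder names, network sizes, and in-batch negative sampling are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SAEncoder(nn.Module):
    # Embeds a (state, action) pair into a fixed-size vector phi(s, a).
    def __init__(self, state_dim, action_dim, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

class FutureEncoder(nn.Module):
    # Embeds a candidate future state into a vector psi(s_future).
    def __init__(self, state_dim, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, embed_dim),
        )

    def forward(self, state):
        return self.net(state)

def contrastive_model_loss(phi, psi, states, actions, future_states):
    # InfoNCE over a batch: each (s, a) is paired with a state sampled from its
    # own discounted future as the positive; the future states of other rows in
    # the batch serve as negatives. No next-state reconstruction is required.
    z_sa = phi(states, actions)                       # [B, D]
    z_f = psi(future_states)                          # [B, D]
    logits = z_sa @ z_f.t()                           # [B, B] pairwise scores
    labels = torch.arange(states.shape[0], device=states.device)
    return F.cross_entropy(logits, labels)            # positives on the diagonal

Under this reading, the score given by the inner product of phi(s, a) and psi(s_f) ranks how likely s_f is to appear in the discounted future of (s, a); reweighting observed rewards by such scores would yield an action-value estimate without TD backups, which is one plausible way to realize the abstract's claim that the model directly estimates the value of each action.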