Title
Designing Rewards for Fast Learning
Authors
Abstract
To convey desired behavior to a Reinforcement Learning (RL) agent, a designer must choose a reward function for the environment — arguably the most important knob designers have in interacting with RL agents. Although many reward functions induce the same optimal behavior (Ng et al., 1999), in practice some of them result in faster learning than others. In this paper, we look at how reward-design choices impact learning speed and seek to identify principles of good reward design that quickly induce target behavior. This reward-identification problem is framed as an optimization problem: first, we advocate choosing state-based rewards that maximize the action gap, making optimal actions easy to distinguish from suboptimal ones. Second, we propose minimizing a measure of the horizon over which rewards need to be optimized, which we call the "subjective discount", to encourage agents to make optimal decisions with less lookahead. To solve this optimization problem, we propose a linear-programming-based algorithm that efficiently finds a reward function maximizing the action gap and minimizing the subjective discount. We test the rewards generated by the algorithm in tabular environments with Q-Learning, and empirically show that they lead to faster learning. Although we focus on Q-Learning because it is perhaps the simplest and best-understood RL algorithm, preliminary results with R-max (Brafman and Tennenholtz, 2000) suggest our results are much more general. Our experiments support three principles of reward design: 1) consistent with existing results, penalizing each step taken induces faster learning than rewarding the goal; 2) when rewarding subgoals along the target trajectory, rewards should gradually increase as the goal gets closer; 3) dense rewards that are nonzero in every state help only when designed carefully.
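The action-gap idea in the abstract can be sketched with a toy linear program. The following is a minimal illustration, not the paper's actual algorithm: it assumes a hypothetical 4-state deterministic chain (state 3 is the terminal goal), a fixed target policy "always move right", state-based rewards bounded in [-1, 1], a fixed discount gamma (the subjective-discount minimization from the paper is omitted), and SciPy's `linprog` as the solver. The LP maximizes the minimum action gap of the target policy; all variable names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

# Toy sketch (assumptions, not the paper's algorithm): a 4-state chain
# 0 -> 1 -> 2 -> 3 (goal), actions right (s+1) and left (max(s-1, 0)).
# Find state-based rewards r(s) in [-1, 1] maximizing the minimum
# action gap g of the target policy "always go right".
gamma = 0.9
n = 4                        # states 0..3; state 3 is terminal with V = 0
# Decision vector x = [r0..r3, V0..V3, g]
R, V, G = 0, n, 2 * n        # index offsets into x

A_eq, b_eq = [], []
# Policy evaluation for "right": V(s) = r(s+1) + gamma * V(s+1)
for s in range(n - 1):
    row = np.zeros(2 * n + 1)
    row[V + s] = 1.0
    row[R + s + 1] = -1.0
    row[V + s + 1] = -gamma
    A_eq.append(row); b_eq.append(0.0)
# Terminal state: V(3) = 0
row = np.zeros(2 * n + 1); row[V + n - 1] = 1.0
A_eq.append(row); b_eq.append(0.0)

A_ub, b_ub = [], []
# Action-gap constraints Q(s, right) - Q(s, left) >= g, rewritten as
# -Q(s, right) + Q(s, left) + g <= 0, with Q(s, a) = r(s') + gamma * V(s').
for s in range(n - 1):
    left = max(s - 1, 0)
    row = np.zeros(2 * n + 1)
    row[R + s + 1] -= 1.0; row[V + s + 1] -= gamma   # -Q(s, right)
    row[R + left] += 1.0;  row[V + left] += gamma    # +Q(s, left)
    row[G] = 1.0
    A_ub.append(row); b_ub.append(0.0)

bounds = [(-1, 1)] * n + [(None, None)] * n + [(0, None)]
res = linprog(c=[0.0] * (2 * n) + [-1.0],            # maximize g
              A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
rewards, gap = res.x[:n], res.x[G]
print("rewards:", np.round(rewards, 3), "| min action gap:", round(gap, 3))
```

Because the target policy is fixed, policy evaluation makes the values V linear in the rewards r, so the whole search is a linear program; the solver tends to push rewards to increase along the goal trajectory, consistent with the paper's second design principle.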