Paper Title
Contrastive Retrospection: honing in on critical steps for rapid learning and generalization in RL
Paper Authors
Paper Abstract
In real life, success is often contingent upon multiple critical steps that are distant in time from each other and from the final reward. These critical steps are challenging to identify with traditional reinforcement learning (RL) methods that rely on the Bellman equation for credit assignment. Here, we present a new RL algorithm that uses offline contrastive learning to hone in on these critical steps. This algorithm, which we call Contrastive Retrospection (ConSpec), can be added to any existing RL algorithm. ConSpec learns a set of prototypes for the critical steps in a task via a novel contrastive loss and delivers an intrinsic reward when the current state matches one of the prototypes. The prototypes in ConSpec provide two key benefits for credit assignment: (i) they enable rapid identification of all the critical steps; (ii) they do so in a readily interpretable manner, enabling out-of-distribution generalization when sensory features are altered. Distinct from other contemporary RL approaches to credit assignment, ConSpec takes advantage of the fact that it is easier to retrospectively identify the small set of steps that success is contingent upon (while ignoring other states) than it is to prospectively predict reward at every step taken. ConSpec greatly improves learning in a diverse set of RL tasks. The code is available at: https://github.com/sunchipsster1/ConSpec
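The prototype-matching mechanism described above can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's actual implementation: the embedding dimensions, the use of cosine similarity, and the `threshold` hyperparameter are all assumptions made for clarity (see the linked repository for the real code).

```python
import numpy as np

def cosine_sims(state_embedding, prototypes):
    # Cosine similarity between one state embedding (d,) and each
    # prototype row in a (k, d) matrix of learned critical-step prototypes.
    norms = np.linalg.norm(prototypes, axis=1) * np.linalg.norm(state_embedding)
    return (prototypes @ state_embedding) / (norms + 1e-8)

def intrinsic_reward(state_embedding, prototypes, threshold=0.6):
    """Deliver an intrinsic reward when the current state matches a prototype.

    Hypothetical sketch: returns the best prototype similarity if it clears
    the threshold, and 0 otherwise (no match -> no intrinsic reward).
    """
    best = cosine_sims(state_embedding, prototypes).max()
    return float(best) if best >= threshold else 0.0

# Toy usage: a state close to the first prototype earns an intrinsic reward,
# while an unrelated state earns none.
prototypes = np.array([[1.0, 0.0],
                       [0.0, 1.0]])
matched = intrinsic_reward(np.array([0.9, 0.1]), prototypes)    # > 0
unmatched = intrinsic_reward(np.array([-1.0, -1.0]), prototypes)  # 0.0
```

In the full algorithm these prototypes are learned offline with a contrastive loss that separates successful from unsuccessful trajectories; this sketch only shows how, once learned, a prototype match is converted into an intrinsic reward signal.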