Paper Title
Physically Embedded Planning Problems: New Challenges for Reinforcement Learning
Paper Authors
Paper Abstract
Recent work in deep reinforcement learning (RL) has produced algorithms capable of mastering challenging games such as Go, chess, or shogi. In these works, the RL agent directly observes the natural state of the game and controls that state directly with its actions. However, when humans play such games, they do not just reason about the moves but also interact with their physical environment. They understand the state of the game by looking at the physical board in front of them and modify it by manipulating pieces using touch and fine-grained motor control. Mastering complicated physical systems with abstract goals is a central challenge for artificial intelligence, but it remains out of reach for existing RL algorithms. To encourage progress towards this goal, we introduce a set of physically embedded planning problems and make them publicly available. We embed challenging symbolic tasks (Sokoban, tic-tac-toe, and Go) in a physics engine to produce a set of tasks that require perception, reasoning, and motor control over long time horizons. Although existing RL algorithms can tackle the symbolic versions of these tasks, we find that they struggle to master even the simplest of their physically embedded counterparts. As a first step towards characterizing the space of solutions to these tasks, we introduce a strong baseline that uses a pre-trained expert game player to provide hints in the abstract space to an RL agent's policy while training it on the full sensorimotor control task. The resulting agent solves many of the tasks, underlining the need for methods that bridge the gap between abstract planning and embodied control. See the illustrative video at https://youtu.be/RwHiHlym_1k.
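
To make the baseline concrete, the following is a minimal Python sketch of how expert hints might be wired into a training loop. It assumes a dm_env-style environment interface; the names SymbolicExpert, run_episode, and the board_state observation key are hypothetical placeholders for illustration, not the authors' released API.

class SymbolicExpert:
    """Pre-trained abstract-game player that suggests a move for a board state."""

    def suggest_move(self, board_state):
        # Hypothetical: e.g. a tree search or pre-trained policy over the
        # symbolic game, independent of the physical scene.
        raise NotImplementedError


def run_episode(env, expert, agent):
    """One episode in which the sensorimotor policy also receives expert hints."""
    timestep = env.reset()
    while not timestep.last():
        # Assumed: the embedded task exposes the abstract board state
        # alongside pixels and proprioception.
        hint = expert.suggest_move(timestep.observation["board_state"])
        # The hint conditions the policy; the agent still outputs the full
        # low-level motor action that drives the simulated body.
        action = agent.step(timestep.observation, hint)
        timestep = env.step(action)

The key design point the abstract describes is that the expert acts only in the abstract game space: its suggestion is an extra input to the policy, while the agent alone produces the low-level motor commands.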