Paper Title
World Value Functions: Knowledge Representation for Learning and Planning
Paper Authors
Paper Abstract
We propose world value functions (WVFs), a type of goal-oriented general value function that represents how to solve not just a given task, but any other goal-reaching task in an agent's environment. This is achieved by equipping an agent with an internal goal space defined as all the world states where it experiences a terminal transition. The agent can then modify the standard task rewards to define its own reward function, which provably drives it to learn how to achieve all reachable internal goals, and the value of doing so in the current task. We demonstrate two key benefits of WVFs in the context of learning and planning. In particular, given a learned WVF, an agent can compute the optimal policy in a new task by simply estimating the task's reward function. Furthermore, we show that WVFs also implicitly encode the transition dynamics of the environment, and so can be used to perform planning. Experimental results show that WVFs can be learned faster than regular value functions, while their ability to infer the environment's dynamics can be used to integrate learning and planning methods to further improve sample efficiency.
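To make the construction above concrete, the following is a minimal sketch in standard goal-conditioned notation. The symbols used here, $\bar{Q}$ for the WVF, $\mathcal{G}$ for the internal goal space of terminal-transition states, and $r_{\text{MIN}}$ for a large negative penalty, are illustrative assumptions, not the paper's verbatim definitions:

\[
\bar{r}(s, g, a, s') =
\begin{cases}
r_{\text{MIN}} & \text{if } s' \text{ is terminal and } s' \neq g,\\[2pt]
r(s, a, s') & \text{otherwise,}
\end{cases}
\]
\[
\bar{Q}^*(s, g, a) = \mathbb{E}\!\left[\,\bar{r}(s, g, a, s') + \gamma \max_{a'} \bar{Q}^*(s', g, a')\,\right],
\qquad
\pi^*(s) \in \arg\max_{a}\, \max_{g \in \mathcal{G}} \bar{Q}^*(s, g, a).
\]

Under this sketch, each fixed-$g$ slice of $\bar{Q}^*$ stores how to reach that internal goal, while maximizing over goals recovers the optimal behaviour for the current task, which is how a single learned WVF can represent both the given task and every other goal-reaching task in the environment.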