Paper Title

Relative Policy-Transition Optimization for Fast Policy Transfer

Authors

Jiawei Xu, Cheng Zhou, Yizheng Zhang, Baoxiang Wang, Lei Han

Abstract

We consider the problem of policy transfer between two Markov Decision Processes (MDPs). We introduce a lemma, based on existing theoretical results in reinforcement learning, to measure the relativity gap between two arbitrary MDPs, that is, the difference between any two cumulative expected returns defined on different policies and environment dynamics. Based on this lemma, we propose two new algorithms, referred to as Relative Policy Optimization (RPO) and Relative Transition Optimization (RTO), which offer fast policy transfer and dynamics modelling, respectively. RPO transfers the policy evaluated in one environment to maximize the return in another, while RTO updates the parameterized dynamics model to reduce the gap between the dynamics of the two environments. Integrating the two algorithms yields the complete Relative Policy-Transition Optimization (RPTO) algorithm, in which the policy interacts with the two environments simultaneously, so that data collection from both environments, policy updates, and transition updates are completed in one closed loop, forming a principled learning framework for policy transfer. We demonstrate the effectiveness of RPTO on a set of MuJoCo continuous control tasks by creating policy transfer problems through variations in the dynamics.
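
To make the relativity gap concrete, here is a minimal sketch in standard reinforcement-learning notation; the symbols η, π, T, γ, and r below are our own shorthand for illustration and are not taken from the paper.

```latex
% Illustrative notation only (assumed, not the paper's): \eta_{T}(\pi) denotes
% the expected discounted return of policy \pi under transition dynamics T.
\[
  \eta_{T}(\pi) = \mathbb{E}_{\tau \sim (\pi,\, T)}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\right],
  \qquad
  \Delta(\pi_1, T_1;\, \pi_2, T_2) = \eta_{T_2}(\pi_2) - \eta_{T_1}(\pi_1).
\]
```

Under this reading, the lemma measures Δ for arbitrary pairs (π₁, T₁) and (π₂, T₂), and RPO and RTO can loosely be viewed as acting on the policy side and the dynamics side of this difference, respectively.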

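The abstract also describes RPTO as a closed loop in which the policy interacts with both environments while the policy and the dynamics model are updated together. Below is a hypothetical control-flow sketch of that loop; all names (rpto_loop, collect_rollout, rpo_update, rto_update) and the no-op stubs are illustrative placeholders, not the authors' implementation.

```python
def rpto_loop(env_source, env_target, policy, dynamics_model,
              collect_rollout, rpo_update, rto_update, n_iters=10):
    """Closed loop described in the abstract: simultaneous data collection
    in both environments, followed by the RPO and RTO updates."""
    for _ in range(n_iters):
        # The policy interacts with the two environments simultaneously.
        data_src = collect_rollout(env_source, policy)
        data_tgt = collect_rollout(env_target, policy)

        # RPO step: transfer the policy evaluated in one environment so as
        # to maximize the return in the other.
        policy = rpo_update(policy, data_src, data_tgt)

        # RTO step: update the parameterized dynamics model to reduce the
        # gap between the dynamics of the two environments.
        dynamics_model = rto_update(dynamics_model, data_src, data_tgt)

    return policy, dynamics_model


if __name__ == "__main__":
    # No-op stand-ins so the skeleton executes end to end.
    policy, model = rpto_loop(
        env_source=None, env_target=None,
        policy="policy", dynamics_model="dynamics model",
        collect_rollout=lambda env, pi: [],        # placeholder rollout collector
        rpo_update=lambda pi, d_src, d_tgt: pi,    # placeholder RPO update
        rto_update=lambda m, d_src, d_tgt: m,      # placeholder RTO update
    )
    print(policy, model)
```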