Paper Title

Stable and Efficient Policy Evaluation

Paper Authors

Daoming Lyu, Bo Liu, Matthieu Geist, Wen Dong, Saad Biaz, Qi Wang

Paper Abstract

Policy evaluation algorithms are essential to reinforcement learning due to their ability to predict the performance of a policy. However, there are two long-standing issues lying in this prediction problem that need to be tackled: off-policy stability and on-policy efficiency. The conventional temporal difference (TD) algorithm is known to perform very well in the on-policy setting, yet is not off-policy stable. On the other hand, the gradient TD and emphatic TD algorithms are off-policy stable, but are not on-policy efficient. This paper introduces novel algorithms that are both off-policy stable and on-policy efficient by using the oblique projection method. Empirical results on various domains validate the effectiveness of the proposed approach.
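As background on the oblique projection method named in the abstract, the following is a minimal sketch of the standard oblique-projection characterization of the linear TD fixed point from the policy-evaluation literature. The notation (feature matrix Φ, state-weighting matrix Ξ, transition matrix P^π, and so on) is assumed here for illustration and is not taken from the paper; the paper's specific construction may differ.

% Standard linear policy-evaluation notation (assumed, not from the paper):
% \Phi: feature matrix; \Xi: diagonal matrix of state weights;
% P^\pi, r^\pi: transition matrix and expected rewards under policy \pi;
% \gamma: discount factor; v^\pi = (I - \gamma P^\pi)^{-1} r^\pi: true value function.
% Linear TD solves \Phi^\top \Xi (r^\pi + \gamma P^\pi \Phi\theta - \Phi\theta) = 0;
% substituting r^\pi = (I - \gamma P^\pi) v^\pi shows the fixed point is an
% oblique projection of the true value function v^\pi:
\[
  \Phi\theta_{\mathrm{TD}}
  = \Phi \,(X^\top \Phi)^{-1} X^\top v^\pi
  = \Pi_{\Phi,X}\, v^\pi,
  \qquad X = (I - \gamma P^\pi)^\top \Xi\, \Phi .
\]
% \Pi_{\Phi,X} is idempotent and projects onto span(\Phi) along the orthogonal
% complement of span(X); varying X yields a family of TD-like solutions, which
% is the degree of freedom an oblique-projection method can exploit to trade
% off off-policy stability against on-policy efficiency.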
