Paper Title

Opportunistic Episodic Reinforcement Learning

Authors

Xiaoxiao Wang, Nader Bouacida, Xueying Guo, Xin Liu

Abstract

In this paper, we propose and study opportunistic reinforcement learning - a new variant of reinforcement learning problems where the regret of selecting a suboptimal action varies under an external environmental condition known as the variation factor. When the variation factor is low, so is the regret of selecting a suboptimal action, and vice versa. Our intuition is to exploit more when the variation factor is high, and explore more when the variation factor is low. We demonstrate the benefit of this novel framework for finite-horizon episodic MDPs by designing and evaluating the OppUCRL2 and OppPSRL algorithms. Our algorithms dynamically balance the exploration-exploitation trade-off for reinforcement learning by introducing variation-factor-dependent optimism to guide exploration. We establish an $\tilde{O}(HS \sqrt{AT})$ regret bound for the OppUCRL2 algorithm and show through simulations that both the OppUCRL2 and OppPSRL algorithms outperform their original corresponding algorithms.
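To make the core idea concrete, the snippet below is a minimal sketch of variation-factor-dependent optimism in a UCRL2-style setting: the standard confidence bonus is scaled down when the variation factor is high (favoring exploitation, since suboptimal actions are costly) and left large when it is low (favoring exploration). The function name, the normalization of the variation factor to [0, 1], and the linear scaling rule are illustrative assumptions, not the paper's exact construction.

```python
import math


def opportunistic_bonus(n_visits, t, variation_factor, delta=0.05):
    """Sketch of a variation-factor-scaled optimism bonus.

    n_visits: visit count of the (state, action) pair so far
    t: current time step (t >= 1)
    variation_factor: external condition, assumed normalized to [0, 1]
    delta: confidence parameter of the UCRL2-style bonus

    Returns a confidence bonus that shrinks as variation_factor grows,
    so the agent explores when mistakes are cheap and exploits when
    they are expensive. The linear (1 - variation_factor) scaling is
    an assumption for illustration only.
    """
    # Standard optimism-style confidence width, shrinking with visits.
    base = math.sqrt(math.log(max(t, 2) / delta) / max(n_visits, 1))
    # Opportunistic scaling: high variation factor => less optimism.
    scale = 1.0 - min(max(variation_factor, 0.0), 1.0)
    return base * scale
```

Used inside value iteration, this bonus would be added to the empirical reward or transition estimates, exactly where UCRL2 adds its fixed confidence width, so the degree of optimism tracks the external condition.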
