Paper Title

Addressing Maximization Bias in Reinforcement Learning with Two-Sample Testing

Paper Authors

Martin Waltz, Ostap Okhrin

Paper Abstract

Value-based reinforcement-learning algorithms have shown strong results in games, robotics, and other real-world applications. Overestimation bias is a known threat to these algorithms and can sometimes lead to dramatic performance decreases or even complete algorithmic failure. We frame the bias problem statistically and consider it an instance of estimating the maximum expected value (MEV) of a set of random variables. We propose the $T$-Estimator (TE), based on two-sample testing for the mean, which flexibly interpolates between over- and underestimation by adjusting the significance level of the underlying hypothesis tests. We also introduce a generalization, termed the $K$-Estimator (KE), that obeys the same bias and variance bounds as the TE and relies on a nearly arbitrary kernel function. We introduce modifications of $Q$-Learning and the Bootstrapped Deep $Q$-Network (BDQN) using the TE and the KE, and prove convergence in the tabular setting. Furthermore, we propose an adaptive variant of the TE-based BDQN that dynamically adjusts the significance level to minimize the absolute estimation bias. All proposed estimators and algorithms are thoroughly tested and validated on diverse tasks and environments, illustrating the bias control and performance potential of the TE and the KE.
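
The core statistical idea in the abstract, replacing the plain maximum of sample means with a selection based on two-sample tests, can be illustrated with a short sketch. The code below is an assumption-laden illustration rather than the paper's exact TE definition: it assumes each random variable comes with its own batch of i.i.d. samples, uses Welch's two-sample $t$-test against the empirical maximizer, and averages the sample means of all variables that are not significantly worse at level $\alpha$; the sample-splitting scheme, test direction, and aggregation rule are all assumptions made for the example.

```python
# Illustrative sketch only (not the paper's exact T-Estimator): estimate the
# maximum expected value (MEV) of several random variables by keeping every
# variable whose mean is not significantly below the empirical best one.
import numpy as np
from scipy.stats import ttest_ind


def mev_estimate_two_sample(samples, alpha=0.05):
    """samples: list of 1-D arrays, one batch per random variable/action."""
    means = np.array([s.mean() for s in samples])
    best = int(np.argmax(means))          # empirical maximizer
    keep = [best]
    for i, s in enumerate(samples):
        if i == best:
            continue
        # Welch's t-test, H1: mean_i < mean_best (assumed test direction)
        res = ttest_ind(s, samples[best], equal_var=False, alternative="less")
        if res.pvalue > alpha:            # cannot call i significantly worse
            keep.append(i)
    return means[keep].mean()


# Toy check: 10 variables with true mean 0; the plain max of sample means
# overestimates the true MEV of 0, the test-based estimate is less biased.
rng = np.random.default_rng(0)
data = [rng.normal(0.0, 1.0, size=30) for _ in range(10)]
print("max of sample means:      ", max(s.mean() for s in data))
print("two-sample-test estimate: ", mev_estimate_two_sample(data, alpha=0.05))
```

In this sketch, $\alpha$ plays the interpolation role described in the abstract: as $\alpha \to 1$ almost no variable survives the test and the estimate approaches the overestimating maximum of sample means, while as $\alpha \to 0$ all variables survive and the estimate approaches the (underestimating) average of all sample means.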
