Paper Title
Maximum Entropy Model Rollouts: Fast Model Based Policy Optimization without Compounding Errors
Paper Authors
Paper Abstract
Model usage is the central challenge of model-based reinforcement learning. Although dynamics models based on deep neural networks provide good generalization for single-step prediction, this ability is over-exploited when the model is used to predict long-horizon trajectories, where errors compound. In this work, we propose a Dyna-style model-based reinforcement learning algorithm, which we call Maximum Entropy Model Rollouts (MEMR). To eliminate compounding errors, we use the model only to generate single-step rollouts. Furthermore, we propose to generate \emph{diverse} model rollouts by non-uniform sampling of the environment states such that the entropy of the model rollouts is maximized. We mathematically derive the maximum entropy sampling criterion for a single data case under a Gaussian prior. To satisfy this criterion, we propose to utilize prioritized experience replay. Our preliminary experiments on challenging locomotion benchmarks show that our approach matches the sample efficiency of the best model-based algorithms and the asymptotic performance of the best model-free algorithms, while significantly reducing the computational requirements of other model-based methods.
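As a rough illustration of the rollout scheme the abstract describes, the sketch below generates single-step model rollouts from replay-buffer states sampled non-uniformly, using the model's predictive uncertainty as the sampling priority. The connection to entropy is that a Gaussian prediction N(mu, Sigma) has differential entropy (1/2) log det(2*pi*e*Sigma), so biasing sampling toward high-variance states is one natural way to raise rollout entropy. All names here (`policy`, `model`, the log-variance proxy) are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_states(states, priorities, batch_size):
    """Draw environment states non-uniformly, proportional to priority.

    Using the model's predictive log-variance as the priority biases
    rollouts toward high-entropy (diverse) transitions, in the spirit of
    prioritized experience replay.
    """
    probs = priorities / priorities.sum()
    idx = rng.choice(len(states), size=batch_size, p=probs)
    return states[idx]

def single_step_rollouts(states, policy, model):
    """Generate one-step model rollouts only, so model error never compounds."""
    actions = policy(states)
    next_states, rewards = model(states, actions)
    return states, actions, rewards, next_states

# --- toy stand-ins so the sketch runs end to end (hypothetical) ---
def policy(s):                      # placeholder deterministic policy
    return -0.1 * s

def model(s, a):                    # placeholder learned dynamics model
    return s + a, -np.sum(s**2, axis=-1)

buffer_states = rng.normal(size=(1000, 3))     # replay-buffer states
log_var = rng.uniform(0.1, 1.0, size=1000)     # predictive-variance proxy
batch = sample_states(buffer_states, log_var, batch_size=64)
rollout = single_step_rollouts(batch, policy, model)
```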