Paper Title
Understanding Deep Neural Function Approximation in Reinforcement Learning via $ε$-Greedy Exploration
Paper Authors
Paper Abstract
This paper provides a theoretical study of deep neural function approximation in reinforcement learning (RL) with $ε$-greedy exploration under the online setting. This problem setting is motivated by the successful deep Q-networks (DQN) framework, which falls in this regime. In this work, we provide an initial attempt at theoretically understanding deep RL from the perspectives of the function class and the neural network architecture (e.g., width and depth), beyond the ``linear'' regime. To be specific, we focus on value-based algorithms with $ε$-greedy exploration via deep (and two-layer) neural networks endowed with Besov (and Barron) function spaces, respectively, which aim at approximating an $α$-smooth Q-function in a $d$-dimensional feature space. We prove that, with $T$ episodes, scaling the width $m = \widetilde{\mathcal{O}}(T^{\frac{d}{2α+d}})$ and the depth $L=\mathcal{O}(\log T)$ of the neural network for deep RL is sufficient for learning with sublinear regret in Besov spaces. Moreover, for a two-layer neural network endowed with the Barron space, scaling the width $Ω(\sqrt{T})$ is sufficient. To achieve this, the key issue in our analysis is how to estimate the temporal difference error under deep neural function approximation, since $ε$-greedy exploration is not enough to ensure ``optimism''. Our analysis reformulates the temporal difference error in an $L^2(\mathrm{d}μ)$-integrable space over a certain averaged measure $μ$, and transforms it into a generalization problem under the non-iid setting. This might be of independent interest in RL theory for better understanding $ε$-greedy exploration in deep RL.
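The abstract considers a value-based agent that approximates the Q-function with a neural network of width $m$ and depth $L$ and explores with an $ε$-greedy policy. The snippet below is a minimal sketch of that setting, not the paper's implementation: it assumes a discrete action space and PyTorch, and the names `QNetwork` and `epsilon_greedy_action` as well as the concrete values of `T`, `d`, and `alpha` are illustrative choices meant only to echo the stated width/depth scaling.

```python
# Minimal sketch (illustrative, not the paper's code) of a value-based agent with
# a deep Q-network of width m and depth L, explored via an epsilon-greedy policy.
import math
import random

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Feed-forward Q-network with L hidden layers of width m (hypothetical architecture)."""

    def __init__(self, d: int, num_actions: int, m: int, L: int):
        super().__init__()
        layers = [nn.Linear(d, m), nn.ReLU()]
        for _ in range(L - 1):
            layers += [nn.Linear(m, m), nn.ReLU()]
        layers.append(nn.Linear(m, num_actions))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d) feature vectors -> (batch, num_actions) Q-value estimates
        return self.net(x)


def epsilon_greedy_action(q_net: QNetwork, state: torch.Tensor,
                          num_actions: int, epsilon: float) -> int:
    """With probability epsilon take a uniformly random action, otherwise the greedy one."""
    if random.random() < epsilon:
        return random.randrange(num_actions)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(dim=1).item())


if __name__ == "__main__":
    T = 1_000            # number of episodes (illustrative)
    d, alpha = 8, 1.0    # feature dimension and assumed smoothness of the Q-function
    num_actions = 4
    # Width/depth chosen to echo the abstract's scaling m = O~(T^{d/(2*alpha+d)}), L = O(log T).
    m = int(T ** (d / (2 * alpha + d)))
    L = max(2, int(math.log(T)))
    q_net = QNetwork(d, num_actions, m, L)
    state = torch.randn(d)
    print(epsilon_greedy_action(q_net, state, num_actions, epsilon=0.1))
```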