Paper Title
An Experimental Comparison Between Temporal Difference and Residual Gradient with Neural Network Approximation
Paper Authors
Paper Abstract
Gradient descent or its variants are popular for training neural networks. However, in deep Q-learning with neural network approximation, a type of reinforcement learning, gradient descent (known in this setting as Residual Gradient (RG)) is rarely used to solve the Bellman residual minimization problem. Instead, Temporal Difference (TD), an incomplete gradient descent method, prevails. In this work, we perform extensive experiments to show that TD outperforms RG: when training leads to a small Bellman residual error, the solution found by TD yields a better policy and is more robust to perturbations of the neural network parameters. We further use experiments to reveal a key difference between reinforcement learning and supervised learning: a small Bellman residual error can correspond to a bad policy in reinforcement learning, whereas in supervised learning the test loss is a standard index of performance. We also show empirically that the term missing from TD is a key reason why RG performs poorly. Our work shows that the performance of a deep Q-learning solution is closely related to the training dynamics, and how an incomplete gradient descent method can find a good policy is an interesting question for future study.
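To make the TD/RG distinction concrete, below is a minimal sketch (not taken from the paper) of the two updates in PyTorch. Both minimize the same squared Bellman residual, but TD detaches the bootstrap target so that no gradient flows through Q(s', ·), while RG keeps that term and follows the full gradient. The network architecture, state/action dimensions, and the toy batch are hypothetical, chosen only for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical Q-network: maps a 4-dimensional state to Q-values for 2 discrete actions.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(q_net.parameters(), lr=1e-3)
gamma = 0.99

def bellman_loss(batch, use_td_semi_gradient):
    """Squared Bellman residual on a batch of transitions (s, a, r, s', done).

    use_td_semi_gradient=True  -> TD: the bootstrap target is detached, so no
                                  gradient flows through Q(s', .) (semi-gradient).
    use_td_semi_gradient=False -> RG: the full gradient of the squared residual,
                                  including the term through Q(s', .).
    """
    s, a, r, s_next, done = batch
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    q_next = q_net(s_next).max(dim=1).values
    target = r + gamma * (1.0 - done) * q_next
    if use_td_semi_gradient:
        target = target.detach()  # TD drops the gradient through the target
    return ((target - q_sa) ** 2).mean()

# Toy batch of 8 transitions, purely for illustration.
batch = (
    torch.randn(8, 4),            # states
    torch.randint(0, 2, (8,)),    # actions
    torch.randn(8),               # rewards
    torch.randn(8, 4),            # next states
    torch.zeros(8),               # done flags
)

loss = bellman_loss(batch, use_td_semi_gradient=True)  # TD update; False gives RG
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In this sketch the only difference between the two methods is the single `.detach()` call; the gradient term it removes is exactly the "missing term" in TD that the abstract refers to.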