Paper Title
Visual Radial Basis Q-Network
Paper Authors
Paper Abstract
While reinforcement learning (RL) from raw images has been largely investigated in the last decade, existing approaches still suffer from a number of constraints. The high input dimension is often handled using either expert knowledge to extract handcrafted features or environment encoding through convolutional networks. Both solutions require numerous parameters to be optimized. In contrast, we propose a generic method to extract sparse features from raw images with few trainable parameters. We achieve this by applying a Radial Basis Function Network (RBFN) directly to raw images. We evaluate the performance of the proposed visual extraction approach on Q-learning tasks in the Vizdoom environment. We then compare our results with two Deep Q-Networks, one trained directly on images and another trained on features extracted by a pretrained auto-encoder. We show that the proposed approach achieves similar or, in some cases, even better performance with fewer trainable parameters while being conceptually simpler.
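As a rough illustration of the idea described in the abstract, the Python sketch below applies fixed Gaussian radial basis units directly to a raw grayscale frame to obtain a sparse feature vector, with only a small Q-function trained on top. It is a minimal sketch, not the paper's exact architecture: the image resolution, number of units, Gaussian widths, and the linear Q-function are illustrative assumptions.

# Minimal sketch (assumptions noted above): fixed Gaussian RBF units over a raw
# image produce sparse features; only the Q-weights on top would be trained.
import numpy as np

rng = np.random.default_rng(0)

H, W = 60, 80          # assumed input resolution
N_UNITS = 200          # number of RBF units (assumption)
N_ACTIONS = 3          # assumed action-space size

# Each unit has a fixed spatial center (row, col), an intensity center, and widths.
centers_rc = rng.uniform(0, 1, size=(N_UNITS, 2)) * np.array([H, W])
centers_val = rng.uniform(0, 1, size=N_UNITS)
sigma_space = 8.0      # spatial width in pixels (assumption)
sigma_val = 0.3        # intensity width (assumption)

rows, cols = np.mgrid[0:H, 0:W]

def rbf_features(img):
    """Gaussian RBF activations of a raw image in [0, 1]; most are near zero (sparse)."""
    feats = np.empty(N_UNITS)
    for k in range(N_UNITS):
        # Spatial Gaussian mask around the unit's center pixel.
        d2 = (rows - centers_rc[k, 0]) ** 2 + (cols - centers_rc[k, 1]) ** 2
        spatial = np.exp(-d2 / (2 * sigma_space ** 2))
        # Intensity Gaussian: how close pixel values are to the unit's preferred value.
        intensity = np.exp(-(img - centers_val[k]) ** 2 / (2 * sigma_val ** 2))
        feats[k] = np.sum(spatial * intensity) / np.sum(spatial)
    return feats

# Only these Q-weights are trained (e.g. with standard Q-learning updates).
q_weights = np.zeros((N_UNITS, N_ACTIONS))

def q_values(img):
    return rbf_features(img) @ q_weights

if __name__ == "__main__":
    frame = rng.uniform(0, 1, size=(H, W))   # stand-in for a Vizdoom frame
    print(q_values(frame))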