Paper Title
Discrete Control in Real-World Driving Environments using Deep Reinforcement Learning
Paper Authors
Paper Abstract
Training self-driving cars is often challenging since they require a vast amount of labeled data across multiple real-world contexts, which is computationally and memory intensive. Researchers often resort to driving simulators to train the agent and then transfer the knowledge to a real-world setting. Since simulators lack realistic behavior, these methods are quite inefficient. To address this issue, we introduce a framework (perception, planning, and control) in a real-world driving environment that transfers real-world environments into gaming environments by setting up a reliable Markov Decision Process (MDP). We propose variations of existing Reinforcement Learning (RL) algorithms in a multi-agent setting to learn and execute discrete control in real-world environments. Experiments show that the multi-agent setting outperforms the single-agent setting in all scenarios. We also propose reliable initialization, data augmentation, and training techniques that enable the agents to learn, generalize, and navigate in a real-world environment with minimal input video data and minimal training. Additionally, to show the efficacy of our proposed algorithm, we deploy our method in the virtual driving environment TORCS.
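To make the MDP framing more concrete, the sketch below shows one possible way to expose pre-recorded real-world driving frames as a gym-style environment with a discrete action space. This is purely illustrative and assumes the classic Gym API; the RealWorldDrivingEnv class, the four actions, and the frames/rewards inputs are hypothetical placeholders, not the paper's actual perception, planning, and control implementation.

# Illustrative sketch (not the authors' code): recorded driving video wrapped
# as an MDP with discrete controls, using the classic Gym interface.
import gym
import numpy as np


class RealWorldDrivingEnv(gym.Env):
    """Exposes a sequence of pre-processed video frames as MDP states.

    The action set and reward signal below are assumptions for illustration;
    in the paper these would come from the perception/planning modules.
    """

    ACTIONS = ["straight", "left", "right", "brake"]  # assumed discrete controls

    def __init__(self, frames, rewards):
        super().__init__()
        self.frames = frames      # (T, H, W, C) array of driving frames
        self.rewards = rewards    # (T, num_actions) per-step reward table
        self.action_space = gym.spaces.Discrete(len(self.ACTIONS))
        self.observation_space = gym.spaces.Box(
            low=0, high=255, shape=frames.shape[1:], dtype=np.uint8
        )
        self.t = 0

    def reset(self):
        self.t = 0
        return self.frames[self.t]

    def step(self, action):
        # Reward the chosen discrete control at the current frame, then advance.
        reward = float(self.rewards[self.t, action])
        self.t += 1
        done = self.t >= len(self.frames) - 1
        return self.frames[self.t], reward, done, {}

With such a wrapper, any standard discrete-action RL agent (single- or multi-agent) can be trained on the recorded real-world data exactly as it would be on a gaming environment.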