Paper Title
Applying adversarial networks to increase the data efficiency and reliability of Self-Driving Cars
Paper Authors
Paper Abstract
Convolutional Neural Networks (CNNs) are prone to misclassifying images when small perturbations are present. With the increasing prevalence of CNNs in self-driving cars, it is vital to ensure these algorithms are robust enough to prevent collisions caused by a failure to recognize a situation. In the Adversarial Self-Driving framework, a Generative Adversarial Network (GAN) is implemented to generate realistic perturbations in an image that cause a classifier CNN to misclassify data. This perturbed data is then used to further train the classifier CNN. The Adversarial Self-Driving framework is applied to an image classification algorithm to improve classification accuracy on perturbed images, and is later applied to train a self-driving car to drive in a simulation. A small-scale self-driving car is also built to drive around a track and classify signs. The Adversarial Self-Driving framework produces perturbed images by learning from a dataset, thereby removing the need to train on large amounts of data. Experiments demonstrate that the Adversarial Self-Driving framework identifies situations where CNNs are vulnerable to perturbations and generates new examples of these situations for the CNN to train on. The additional data generated by the Adversarial Self-Driving framework provides sufficient data for the CNN to generalize to its environment. It is therefore a viable tool for increasing the resilience of CNNs to perturbations. In particular, applying the Adversarial Self-Driving framework to the real-world self-driving car resulted in an 18% increase in accuracy, and the simulated self-driving model had no collisions in 30 minutes of driving.
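As a rough illustration of the training loop the abstract describes (a GAN-style generator producing perturbations that fool a classifier CNN, after which the classifier is retrained on the perturbed images), the sketch below shows one training round in PyTorch. The module names, network sizes, and the `epsilon` bound on the perturbation are illustrative assumptions; the paper's actual architectures and training details are not specified in this section.

```python
# Minimal sketch of a GAN-based adversarial training round, under assumed
# architectures and hyperparameters (not the paper's actual implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerturbationGenerator(nn.Module):
    """Generator that outputs a small, bounded perturbation added to the image."""

    def __init__(self, channels=3, epsilon=0.05):
        super().__init__()
        self.epsilon = epsilon  # assumed bound on perturbation magnitude
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Tanh output lies in [-1, 1]; scale it to a small epsilon-ball around x.
        return x + self.epsilon * self.net(x)


def adversarial_training_step(classifier, generator, images, labels,
                              clf_opt, gen_opt):
    """One round: the generator seeks perturbations that fool the classifier,
    then the classifier is trained on those perturbed images."""
    # 1) Update the generator to maximize classifier error on perturbed images.
    perturbed = generator(images)
    gen_loss = -F.cross_entropy(classifier(perturbed), labels)
    gen_opt.zero_grad()
    gen_loss.backward()
    gen_opt.step()

    # 2) Update the classifier on the (detached) perturbed images so it
    #    learns to classify them correctly.
    perturbed = generator(images).detach()
    clf_loss = F.cross_entropy(classifier(perturbed), labels)
    clf_opt.zero_grad()
    clf_loss.backward()
    clf_opt.step()
    return clf_loss.item()
```

In this reading, the generator and classifier are trained in alternation, so the classifier is repeatedly exposed to the perturbations it currently handles worst, which is how the framework can reduce the amount of real training data needed.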