Paper Title
Safety Concerns and Mitigation Approaches Regarding the Use of Deep Learning in Safety-Critical Perception Tasks
Paper Authors
Paper Abstract
Deep learning methods are widely regarded as indispensable when it comes to designing perception pipelines for autonomous agents such as robots, drones, or automated vehicles. The main reason, however, why deep learning is not yet used for autonomous agents at large scale is safety concerns. Deep learning approaches typically exhibit black-box behavior, which makes it hard to evaluate them with respect to safety-critical aspects. While there has been some work on safety in deep learning, most papers focus on high-level safety concerns. In this work, we seek to dive into the safety concerns of deep learning methods and present a concise enumeration on a deeply technical level. Additionally, we present extensive discussions on possible mitigation methods and give an outlook regarding which mitigation methods are still missing in order to facilitate an argumentation for the safety of a deep learning method.