Title

Feature Visualization within an Automated Design Assessment leveraging Explainable Artificial Intelligence Methods

Authors

Raoul Schönhof, Artem Werner, Jannes Elstner, Boldizsar Zopcsak, Ramez Awad, Marco Huber

Abstract

Not only the automation of manufacturing processes but also the automation of automation procedures themselves is becoming increasingly relevant to automation research. In this context, automated capability assessment, mainly leveraged by deep learning systems driven by 3D CAD data, has been presented. Current assessment systems may be able to assess CAD data with regard to abstract features, e.g. the ability to automatically separate components from bulk goods, or the presence of gripping surfaces. Nevertheless, they suffer from being black-box systems, which can learn and generate an assessment easily, but provide no geometrical indicator of the reasons for the system's decision. By utilizing explainable AI (xAI) methods, we attempt to open up this black box. Explainable AI methods have been used to assess whether a neural network has successfully learned a given task, or to analyze which features of an input might lead to an adversarial attack. These methods aim to derive additional insights into a neural network by analyzing patterns in a given input and their impact on the network output. Within the NeuroCAD project, xAI methods are used to identify geometrical features that are associated with a certain abstract feature. In this work, sensitivity analysis (SA), layer-wise relevance propagation (LRP), Gradient-weighted Class Activation Mapping (Grad-CAM), and Local Interpretable Model-Agnostic Explanations (LIME) have been implemented in the NeuroCAD environment, allowing not only the assessment of CAD models but also the identification of the features that were relevant for the network's decision. In the medium run, this might make it possible to identify regions of interest that support product designers in optimizing their models with regard to assembly processes.
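Of the four methods named in the abstract, sensitivity analysis is the simplest to illustrate: it scores each input element by how strongly a small perturbation of that element changes the network output, so high-scoring elements mark the geometry the model relied on. Below is a minimal, self-contained sketch of this idea using a finite-difference gradient over a toy voxel vector. The `network` function here is a hypothetical stand-in (a plain weighted sum), not the NeuroCAD model, and the voxel vector is illustrative only.

```python
def network(voxels):
    # Hypothetical stand-in for a trained model: a weighted sum whose
    # weights concentrate on the last two voxels, mimicking a network
    # that attends to one region of the part (e.g. a gripping surface).
    weights = [0.0, 0.1, 0.0, 2.0, 3.0]
    return sum(w * v for w, v in zip(weights, voxels))

def sensitivity_map(f, x, eps=1e-4):
    """Score each input element by |df/dx_i| via finite differences.

    Sensitivity analysis attributes relevance to the inputs whose
    perturbation changes the output most; in practice the gradient
    comes from backpropagation rather than finite differences.
    """
    baseline = f(x)
    scores = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        scores.append(abs((f(bumped) - baseline) / eps))
    return scores

voxels = [1.0, 1.0, 1.0, 1.0, 1.0]
print(sensitivity_map(network, voxels))
```

For this linear stand-in the scores recover the weight magnitudes, so the last voxel is flagged as most relevant; on a real voxel-based CAD model the same per-voxel scores can be rendered as a heat map over the geometry.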
