Title
A multi-component framework for the analysis and design of explainable artificial intelligence
Authors
Abstract
The rapid growth of research in explainable artificial intelligence (XAI) follows from two substantial developments. First, the enormous application success of modern machine learning methods, especially deep and reinforcement learning, which has created high expectations for industrial, commercial and social value. Second, the emergence of concern for creating trusted AI systems, including the creation of regulatory principles to ensure the transparency and trustworthiness of AI systems. These two threads have created a kind of "perfect storm" of research activity, with many groups eager to create and deliver tools and techniques to address the XAI demand. As some surveys of current XAI suggest, there has yet to appear a principled framework that respects the literature on explainability in the history of science and that provides a basis for the development of transparent XAI. Here we intend to provide a strategic inventory of XAI requirements, demonstrate their connection to the history of XAI ideas, and synthesize those ideas into a simple framework that calibrates five successive levels of XAI.