Title
One-way Explainability Isn't The Message
Authors
Abstract
Recent engineering developments in specialised computational hardware, data-acquisition and storage technology have seen the emergence of Machine Learning (ML) as a powerful form of data analysis with widespread applicability beyond its historical roots in the design of autonomous agents. However -- possibly because of its origins in the development of agents capable of self-discovery -- relatively little attention has been paid to the interaction between people and ML. In this paper we are concerned with the use of ML in automated or semi-automated tools that assist one or more human decision makers. We argue that requirements on both human and machine in this context are significantly different from the use of ML either as part of autonomous agents for self-discovery or as part of statistical data analysis. Our principal position is that the design of such human-machine systems should be driven by repeated, two-way intelligibility of information rather than one-way explainability of the ML system's recommendations. Iterated rounds of intelligible information exchange, we think, will characterise the kinds of collaboration needed to understand complex phenomena for which neither human nor machine has complete answers. We propose operational principles -- we call them Intelligibility Axioms -- to guide the design of a collaborative decision-support system. The principles are concerned with: (a) what it means for information provided by the human to be intelligible to the ML system; and (b) what it means for an explanation provided by an ML system to be intelligible to a human. Using examples from the literature on the use of ML for drug design and in medicine, we demonstrate cases where the conditions of the axioms are met. We describe some additional requirements needed for the design of a truly collaborative decision-support system.