Paper title
Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability
Paper authors
Paper abstract
Machine learning algorithms enable advanced decision making in contemporary intelligent systems. Research indicates that there is a tradeoff between their model performance and explainability: machine learning models with higher performance are often based on more complex algorithms and therefore lack explainability, and vice versa. However, there is little to no empirical evidence of this tradeoff from an end user perspective. We aim to provide such empirical evidence by conducting two user experiments. Using two distinct datasets, we first measure the tradeoff for five common classes of machine learning algorithms. Second, we address end user perceptions of explainable artificial intelligence augmentations aimed at increasing the understanding of the decision logic of high-performing complex models. Our results diverge from the widespread assumption of a tradeoff curve and indicate that the tradeoff between model performance and explainability is much less gradual in the end user's perception. This is in stark contrast to assumed inherent model interpretability. Further, we found the tradeoff to be situational, for example due to data complexity. Results of our second experiment show that while explainable artificial intelligence augmentations can be used to increase explainability, the type of explanation plays an essential role in end user perception.