Title
Adequate and fair explanations
Authors
Abstract
Explaining sophisticated machine-learning based systems is an important issue at the foundations of AI. Recent efforts have produced a variety of methods for providing explanations. These approaches can be broadly divided into two schools: those that provide a local, human-interpretable approximation of a machine learning algorithm, and logical approaches that exactly characterise one aspect of the decision. In this paper we focus on the second school of exact explanations with a rigorous logical foundation. There is an epistemological problem with these exact methods: while they can furnish complete explanations, such explanations may be too complex for humans to understand, or even to write down in human-readable form. Interpretability requires epistemically accessible explanations, explanations humans can grasp. Yet what counts as a sufficiently complete, epistemically accessible explanation still needs clarification. We do this here in terms of counterfactuals, following [Wachter et al., 2017]. With counterfactual explanations, many of the assumptions needed to provide a complete explanation are left implicit. To do so, counterfactual explanations exploit the properties of a particular data point or sample, and as such are local as well as partial explanations. We explore how to move from local partial explanations to what we call complete local explanations and then to global ones. But to preserve accessibility we argue for the need for partiality. This partiality makes it possible to hide explicit biases present in the algorithm that may be injurious or unfair. We investigate how easy it is to uncover these biases in providing complete and fair explanations by exploiting the structure of the set of counterfactuals providing a complete local explanation.
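The idea of a counterfactual explanation described above, and of collecting the set of counterfactuals for a data point as a complete local explanation, can be illustrated with a toy sketch. The loan-approval rule, feature names, and candidate values below are hypothetical illustrations, not the paper's method; the brute-force search is only feasible for tiny feature spaces.

```python
from itertools import combinations, product

# Hypothetical toy decision rule (for illustration only):
# approve a loan iff income >= 50 and debt < 20.
def model(x):
    return x["income"] >= 50 and x["debt"] < 20

def counterfactuals(x, candidates):
    """Return all minimal-size feature changes that flip model(x).

    `candidates` maps each feature to alternative values to try.
    Each returned dict is one counterfactual: the substitutions that
    change the decision. The set of such minimal counterfactuals is a
    sketch of what the abstract calls a complete local explanation.
    """
    original = model(x)
    for size in range(1, len(candidates) + 1):
        found = []
        for feats in combinations(candidates, size):
            for vals in product(*(candidates[f] for f in feats)):
                y = dict(x)
                y.update(zip(feats, vals))
                if model(y) != original:
                    found.append(dict(zip(feats, vals)))
        if found:
            return found  # stop at the smallest size that flips the decision
    return []

# A rejected applicant: neither raising income nor lowering debt alone
# flips the decision here; the minimal counterfactual changes both.
x = {"income": 40, "debt": 25}
print(counterfactuals(x, {"income": [60], "debt": [10]}))
```

Note that a single counterfactual ("had your income been 60 and your debt 10, you would have been approved") is a partial, local explanation; enumerating all minimal counterfactuals for the point, as above, is what makes the local explanation complete, and inspecting which features recur across them is one way biases in the rule become visible.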