Paper Title

On the Relationship Between Explanations, Fairness Perceptions, and Decisions

Authors

Schoeffer, Jakob, De-Arteaga, Maria, Kuehl, Niklas

Abstract

It is known that recommendations of AI-based systems can be incorrect or unfair. Hence, it is often proposed that a human be the final decision-maker. Prior work has argued that explanations are an essential pathway to help human decision-makers enhance decision quality and mitigate bias, i.e., facilitate human-AI complementarity. For these benefits to materialize, explanations should enable humans to appropriately rely on AI recommendations and override the algorithmic recommendation when necessary to increase distributive fairness of decisions. The literature, however, does not provide conclusive empirical evidence as to whether explanations enable such complementarity in practice. In this work, we (a) provide a conceptual framework to articulate the relationships between explanations, fairness perceptions, reliance, and distributive fairness, (b) apply it to understand (seemingly) contradictory research findings at the intersection of explanations and fairness, and (c) derive cohesive implications for the formulation of research questions and the design of experiments.
