Paper Title

Trust and Reliance in XAI -- Distinguishing Between Attitudinal and Behavioral Measures

Authors

Scharowski, Nicolas, Perrig, Sebastian A. C., von Felten, Nick, Brühlmann, Florian

Abstract

Trust is often cited as an essential criterion for the effective use and real-world deployment of AI. Researchers argue that AI should be more transparent to increase trust, making transparency one of the main goals of XAI. Nevertheless, empirical research on this topic is inconclusive regarding the effect of transparency on trust. An explanation for this ambiguity could be that trust is operationalized differently within XAI. In this position paper, we advocate for a clear distinction between behavioral (objective) measures of reliance and attitudinal (subjective) measures of trust. However, researchers sometimes appear to use behavioral measures when intending to capture trust, although attitudinal measures would be more appropriate. Based on past research, we emphasize that there are sound theoretical reasons to keep trust and reliance separate. Properly distinguishing these two concepts provides a more comprehensive understanding of how transparency affects trust and reliance, benefiting future XAI research.