Paper Title
Seamful XAI: Operationalizing Seamful Design in Explainable AI
Paper Authors
Paper Abstract
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps. While black-boxing AI systems can make the user experience seamless, hiding the seams risks disempowering users from mitigating the fallout of AI mistakes. Instead of hiding these AI imperfections, can we leverage them to help the user? While Explainable AI (XAI) has predominantly tackled algorithmic opaqueness, we propose that seamful design can foster AI explainability by revealing and leveraging sociotechnical and infrastructural mismatches. We introduce the concept of Seamful XAI by (1) conceptually transferring "seams" to the AI context and (2) developing a design process that helps stakeholders anticipate and design with seams. We explore this process with 43 AI practitioners and real end-users, using a scenario-based co-design activity informed by real-world use cases. We found that the Seamful XAI design process helped users foresee AI harms, identify underlying reasons (seams), locate them in the AI's lifecycle, and learn how to leverage seamful information to improve XAI and user agency. We share empirical insights, implications, and reflections on how this process can help practitioners anticipate and craft seams in AI, and on how seamfulness can improve explainability, empower end-users, and facilitate Responsible AI.