Paper Title
Trust Considerations for Explainable Robots: A Human Factors Perspective
Paper Authors
Paper Abstract
Recent advances in artificial intelligence (AI) and robotics have drawn attention to the need for AI systems and robots to be understandable to human users. The explainable AI (XAI) and explainable robots literature aims to enhance human understanding and human-robot team performance by providing users with necessary information about AI and robot behavior. Simultaneously, the human factors literature has long addressed important considerations that contribute to human performance, including human trust in autonomous systems. In this paper, drawing from the human factors literature, we discuss three important trust-related considerations for the design of explainable robot systems: the bases of trust, trust calibration, and trust specificity. We further detail existing and potential metrics for assessing trust in robotic systems based on explanations provided by explainable robots.