Paper Title
Trust in Human-AI Interaction: Scoping Out Models, Measures, and Methods
Paper Authors
Paper Abstract
Trust has emerged as a key factor in people's interactions with AI-infused systems. Yet, little is known about what models of trust have been used and for what systems: robots, virtual characters, smart vehicles, decision aids, or others. Moreover, there is yet no known standard approach to measuring trust in AI. This scoping review maps out the state of affairs on trust in human-AI interaction (HAII) from the perspectives of models, measures, and methods. Findings suggest that trust is an important and multi-faceted topic of study within HAII contexts. However, most work is under-theorized and under-reported, generally not using established trust models and missing details about methods, especially Wizard of Oz. We offer several targets for systematic review work as well as a research agenda for combining the strengths and addressing the weaknesses of the current literature.