Paper Title
User Trust on an Explainable AI-based Medical Diagnosis Support System
Paper Authors
Paper Abstract
Recent research suggests that system explainability improves user trust in, and willingness to use, medical AI for diagnostic support. In this paper, we use chest disease diagnosis based on X-ray images as a case study to investigate user trust and reliance. Building on explainability, we propose a support system in which users (radiologists) can view causal explanations for the model's final decisions. After observing these causal explanations, users gave their opinions of the model's predictions and could correct the explanations if they disagreed. We measured user trust as the agreement between the model's and the radiologists' diagnoses, together with the radiologists' feedback on the model's explanations; participants also self-reported their trust in the system. We tested our model on the CXR-Eye dataset, where it achieved an overall accuracy of 74.1%. However, the experts in our user study agreed with the model in only 46.4% of the cases, indicating the need to improve trust. The mean self-reported trust score was 3.2 on a scale of 1.0 to 5.0, suggesting that users tended to trust the model but that trust still needs to be strengthened.
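For concreteness, the minimal Python sketch below shows how the two trust measures described in the abstract might be computed. The data, variable names, and values are illustrative assumptions, not the study's actual measurements.

```python
# A minimal sketch (hypothetical data, not the paper's) of the two trust
# measures described in the abstract.

# Per-case diagnoses: model prediction vs. the radiologist's own diagnosis.
model_preds = ["pneumonia", "normal", "effusion", "normal"]
radiologist_dx = ["pneumonia", "effusion", "effusion", "normal"]

# Behavioral trust proxy: fraction of cases where the radiologist's
# diagnosis agreed with the model's prediction.
agreement_rate = sum(
    m == r for m, r in zip(model_preds, radiologist_dx)
) / len(model_preds)

# Self-reported trust: mean of per-participant Likert ratings on a 1.0-5.0 scale.
likert_ratings = [3, 4, 3, 2, 4]
self_reported_trust = sum(likert_ratings) / len(likert_ratings)

print(f"Agreement rate: {agreement_rate:.1%}")                 # 75.0% here
print(f"Mean self-reported trust: {self_reported_trust:.1f}")  # 3.2 here
```

Under this framing, the paper's headline numbers are directly comparable: standalone model accuracy (74.1%) can be high while the behavioral agreement rate (46.4%) and the mean self-reported trust score (3.2) reveal a trust gap.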