Paper Title
Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Paper Authors
Paper Abstract
Model explanations such as saliency maps can improve user trust in AI by highlighting important features for a prediction. However, these become distorted and misleading when explaining predictions of images that are subject to systematic error (bias). Furthermore, the distortions persist despite model fine-tuning on images biased by different factors (blur, color temperature, day/night). We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions. In simulation studies, the approach not only enhanced prediction accuracy, but also generated highly faithful explanations about these predictions as if the images were unbiased. In user studies, debiased explanations improved user task performance, perceived truthfulness and perceived helpfulness. Debiased training can provide a versatile platform for robust performance and explanation faithfulness for a wide range of applications with data biases.
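The training setup the abstract describes, a multi-input, multi-task model with auxiliary tasks for explanation and bias-level prediction, can be illustrated with a short sketch: one CNN backbone feeds a primary classification head, an auxiliary bias-level regression head, and a CAM whose alignment with the CAM of the corresponding unbiased image is penalized during training. This is a minimal sketch under assumptions, not the authors' implementation; `DebiasedCAMNet`, `debiased_loss`, `_norm`, and the lambda weights are hypothetical names, and the target CAM is assumed to come from a reference model run on the unbiased version of each image.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DebiasedCAMNet(nn.Module):
    """Toy multi-task CNN: class prediction, bias-level prediction, and CAM."""

    def __init__(self, num_classes: int):
        super().__init__()
        # Toy convolutional backbone; the paper fine-tunes standard CNNs.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64, num_classes)  # primary task head
        self.bias_head = nn.Linear(64, 1)             # auxiliary bias-level head

    def forward(self, x):
        fmap = self.features(x)                 # (B, 64, H, W)
        pooled = fmap.mean(dim=(2, 3))          # global average pooling
        logits = self.classifier(pooled)
        bias_level = self.bias_head(pooled).squeeze(1)
        # CAM (Zhou et al., 2016): weight the feature maps by the classifier
        # weights of the predicted class.
        w = self.classifier.weight[logits.argmax(dim=1)]  # (B, 64)
        cam = torch.einsum("bc,bchw->bhw", w, fmap)       # (B, H, W)
        return logits, bias_level, cam


def _norm(cam):
    # Min-max normalize each CAM so the alignment loss is scale-invariant.
    flat = cam.flatten(1)
    lo = flat.min(dim=1, keepdim=True).values
    hi = flat.max(dim=1, keepdim=True).values
    return ((flat - lo) / (hi - lo + 1e-8)).view_as(cam)


def debiased_loss(logits, bias_pred, cam, labels, bias_true, target_cam,
                  lambda_bias=0.1, lambda_cam=1.0):
    # Multi-task objective: classification + bias regression + CAM alignment.
    # target_cam is assumed to come from a reference model applied to the
    # unbiased version of the same image (detached, so no gradient flows).
    loss_cls = F.cross_entropy(logits, labels)
    loss_bias = F.mse_loss(bias_pred, bias_true)
    loss_cam = F.mse_loss(_norm(cam), _norm(target_cam))
    return loss_cls + lambda_bias * loss_bias + lambda_cam * loss_cam


# Smoke test on random data (hypothetical shapes and bias labels).
model = DebiasedCAMNet(num_classes=10)
x = torch.randn(4, 3, 32, 32)  # batch of biased images
logits, bias_pred, cam = model(x)
loss = debiased_loss(logits, bias_pred, cam,
                     labels=torch.randint(0, 10, (4,)),
                     bias_true=torch.rand(4),
                     target_cam=torch.rand_like(cam))
loss.backward()
```

In this sketch, the CAM alignment term is what pushes the model toward explanations "as if the images were unbiased", while the bias-level head gives the network an explicit auxiliary signal about how corrupted its input is.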