Paper Title

Do Users Benefit From Interpretable Vision? A User Study, Baseline, And Dataset

Authors

Leon Sixt, Martin Schuessler, Oana-Iuliana Popescu, Philipp Weiß, Tim Landgraf

Abstract

A variety of methods exist to explain image classification models. However, whether they provide any benefit to users over simply comparing various inputs and the model's respective predictions remains unclear. We conducted a user study (N=240) to test how such a baseline explanation technique performs against concept-based and counterfactual explanations. To this end, we contribute a synthetic dataset generator capable of biasing individual attributes and quantifying their relevance to the model. In the study, we assessed whether participants could identify the relevant set of attributes compared to the ground truth. Our results show that the baseline outperformed concept-based explanations. Counterfactual explanations from an invertible neural network performed similarly to the baseline; still, they allowed users to identify some attributes more accurately. Our results highlight the importance of measuring how well users can reason about a model's biases, rather than relying solely on technical evaluations or proxy tasks. We open-source our study and dataset so they can serve as a blueprint for future studies. Code is available at https://github.com/berleon/do_users_benefit_from_interpretable_vision
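The abstract describes a generator that can bias individual attributes and quantify their relevance to the model. As a rough illustration of the biasing idea only (not the paper's actual implementation, which renders synthetic images and lives in the repository linked above), the sketch below samples a binary attribute that agrees with the label with a configurable probability; all function and variable names here are hypothetical.

```python
# Minimal, illustrative sketch of attribute biasing, assuming binary
# labels and attributes. Not the paper's generator; see the repository
# for the real image-rendering implementation.
import numpy as np

def sample_biased_attributes(n, bias=0.8, seed=0):
    """Sample binary labels plus one biased and one irrelevant attribute.

    `bias` is the probability that the biased attribute agrees with the
    label: 0.5 means no correlation, 1.0 a perfect shortcut feature.
    """
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=n)
    # Biased attribute: copies the label with probability `bias`,
    # otherwise takes the opposite value.
    agree = rng.random(n) < bias
    biased_attr = np.where(agree, labels, 1 - labels)
    # Irrelevant attribute: sampled independently of the label.
    noise_attr = rng.integers(0, 2, size=n)
    return labels, biased_attr, noise_attr

labels, biased_attr, noise_attr = sample_biased_attributes(10_000, bias=0.8)
# One simple way to quantify an attribute's relevance is its empirical
# agreement rate with the label.
print("biased attr agreement:", (biased_attr == labels).mean())  # ~0.80
print("noise attr agreement:", (noise_attr == labels).mean())    # ~0.50
```

A model trained on data generated this way can exploit the biased attribute as a shortcut, which is what lets the study compare each explanation technique against a known ground truth of relevant attributes.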
