Paper Title
Making deep neural networks right for the right scientific reasons by interacting with their explanations
Paper Authors
Paper Abstract
Deep neural networks have shown excellent performance in many real-world applications. Unfortunately, they may exhibit "Clever Hans"-like behavior -- exploiting confounding factors within datasets -- to achieve high performance. In this work, we introduce the novel learning setting of "explanatory interactive learning" (XIL) and illustrate its benefits on a plant phenotyping research task. XIL adds the scientist into the training loop so that she interactively revises the original model by providing feedback on its explanations. Our experimental results demonstrate that XIL can help avoid Clever Hans moments in machine learning and encourage (or discourage, if appropriate) trust in the underlying model.
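One common way to turn such explanation feedback into a training signal is a "right for the right reasons"-style penalty: the scientist marks input regions that are confounders, and the model is penalized when its explanation (e.g., input-gradient attributions) places weight there. The sketch below is a hypothetical illustration under that assumption; the abstract does not specify the paper's exact loss, and the function and variable names here are made up for the example.

```python
import numpy as np

def xil_penalty(input_gradients: np.ndarray,
                confounder_mask: np.ndarray,
                strength: float = 1.0) -> float:
    """Penalize explanation mass on regions the scientist marked as confounders.

    input_gradients: per-feature attribution scores (e.g., input gradients).
    confounder_mask: 1 where the scientist flagged "do not use this region", else 0.
    strength:        weight of the penalty term added to the task loss.

    This is a hedged sketch of an annotation-masked gradient penalty, not the
    paper's actual formulation.
    """
    return strength * float(np.sum((confounder_mask * input_gradients) ** 2))

# Example: attributions that rely on flagged features incur a large penalty,
# steering training away from the confounder.
grads = np.array([0.1, 0.9, 0.05, 0.8])
mask = np.array([0, 1, 0, 1])  # features 1 and 3 are confounders
print(xil_penalty(grads, mask))
```

During training, this term would be added to the usual classification loss, so minimizing the total loss both fits the labels and suppresses reliance on the flagged regions.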