Paper Title
Neural Prototype Trees for Interpretable Fine-grained Image Recognition
Paper Authors
Paper Abstract
Prototype-based methods use interpretable representations to address the black-box nature of deep learning models, in contrast to post-hoc explanation methods that only approximate such models. We propose the Neural Prototype Tree (ProtoTree), an intrinsically interpretable deep learning method for fine-grained image recognition. ProtoTree combines prototype learning with decision trees, and thus results in a globally interpretable model by design. Additionally, ProtoTree can locally explain a single prediction by outlining a decision path through the tree. Each node in our binary tree contains a trainable prototypical part. The presence or absence of this learned prototype in an image determines the routing through a node. Decision making is therefore similar to human reasoning: Does the bird have a red throat? And an elongated beak? Then it's a hummingbird! We tune the accuracy-interpretability trade-off using ensemble methods, pruning and binarizing. We apply pruning without sacrificing accuracy, resulting in a small tree with only 8 learned prototypes along a path to classify a bird from 200 species. An ensemble of 5 ProtoTrees achieves competitive accuracy on the CUB-200-2011 and Stanford Cars data sets. Code is available at https://github.com/M-Nauta/ProtoTree
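For intuition, the sketch below shows how a single split node of such a tree could route an image by testing for the presence of its learned prototype. This is a minimal illustration, not the authors' implementation (see the linked repository for that): it assumes a PyTorch setting, 1x1-patch prototypes in CNN feature space, an exp(-distance) presence score, and soft routing that mixes the class distributions of the two subtrees; the names SplitNode and Leaf are hypothetical.

```python
# Minimal, illustrative sketch of ProtoTree-style soft routing.
# Assumptions (not from the paper's code): 1x1 prototypes, exp(-distance)
# presence scores, soft routing; class names are hypothetical.
import torch
import torch.nn as nn


class SplitNode(nn.Module):
    """Binary tree node holding one trainable prototypical part."""

    def __init__(self, left: nn.Module, right: nn.Module, channels: int):
        super().__init__()
        self.left, self.right = left, right
        # The prototype lives in the CNN feature space: one D-dim patch.
        self.prototype = nn.Parameter(torch.randn(1, channels, 1, 1))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (B, D, H, W) feature map from a CNN backbone.
        # Squared L2 distance between the prototype and every spatial patch.
        dist = ((features - self.prototype) ** 2).sum(dim=1)  # (B, H, W)
        # Presence score in (0, 1]: high if the prototype appears somewhere.
        p = torch.exp(-dist.flatten(1).min(dim=1).values).unsqueeze(1)
        # Soft routing: mix the class distributions of the two subtrees.
        return p * self.right(features) + (1.0 - p) * self.left(features)


class Leaf(nn.Module):
    """Leaf node holding a learned distribution over classes."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_classes))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        dist = torch.softmax(self.logits, dim=0)
        return dist.expand(features.size(0), -1)  # (B, C)


# Hypothetical usage: a depth-2 tree over 200 bird classes, applied to a
# feature map from some CNN backbone.
tree = SplitNode(Leaf(200), SplitNode(Leaf(200), Leaf(200), 512), 512)
probs = tree(torch.randn(4, 512, 7, 7))  # (4, 200); each row sums to 1
```

In the paper, interpretability comes from visualizing each node's prototype as a patch from a training image, so a root-to-leaf path reads as a sequence of visual questions like the red-throat and elongated-beak example above.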