Paper Title

AutoProtoNet: Interpretability for Prototypical Networks

Authors

Pedro Sandoval-Segura, Wallace Lawson

Abstract

In meta-learning approaches, it is difficult for a practitioner to make sense of what kind of representations the model employs. Without this ability, it can be difficult to both understand what the model knows as well as to make meaningful corrections. To address these challenges, we introduce AutoProtoNet, which builds interpretability into Prototypical Networks by training an embedding space suitable for reconstructing inputs, while remaining convenient for few-shot learning. We demonstrate how points in this embedding space can be visualized and used to understand class representations. We also devise a prototype refinement method, which allows a human to debug inadequate classification parameters. We use this debugging technique on a custom classification task and find that it leads to accuracy improvements on a validation set consisting of in-the-wild images. We advocate for interpretability in meta-learning approaches and show that there are interactive ways for a human to enhance meta-learning algorithms.
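The abstract builds on Prototypical Networks, where each class is represented by the mean of its support embeddings and queries are classified by nearest prototype. The following is a minimal NumPy sketch of that core computation only; it is an illustration under simplified assumptions (fixed toy embeddings, Euclidean distance), not the paper's implementation, which additionally trains the embedding space with a decoder so that prototypes can be reconstructed and visualized.

```python
import numpy as np

def compute_prototypes(support_emb, support_labels, n_classes):
    """Class prototype = mean of that class's support embeddings."""
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, prototypes):
    """Assign each query to the nearest prototype (squared Euclidean)."""
    d = ((query_emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 2-way, 3-shot episode with 4-dimensional "embeddings".
rng = np.random.default_rng(0)
support = np.concatenate([rng.normal(0.0, 0.1, (3, 4)),   # class 0 near 0
                          rng.normal(2.0, 0.1, (3, 4))])  # class 1 near 2
labels = np.array([0, 0, 0, 1, 1, 1])
protos = compute_prototypes(support, labels, n_classes=2)

queries = np.array([[0.0] * 4, [2.0] * 4])
print(classify(queries, protos))  # -> [0 1]
```

AutoProtoNet's contribution, per the abstract, is that points in this embedding space (including the prototypes themselves) can be decoded back into images, which is what enables visualization and human-in-the-loop prototype refinement.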
