Paper Title

Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting

Paper Authors

Sayna Ebrahimi, Suzanne Petryk, Akash Gokul, William Gan, Joseph E. Gonzalez, Marcus Rohrbach, Trevor Darrell

Paper Abstract

The goal of continual learning (CL) is to learn a sequence of tasks without suffering from the phenomenon of catastrophic forgetting. Previous work has shown that leveraging memory in the form of a replay buffer can reduce performance degradation on prior tasks. We hypothesize that forgetting can be further reduced when the model is encouraged to remember the evidence for previously made decisions. As a first step towards exploring this hypothesis, we propose a simple novel training paradigm, called Remembering for the Right Reasons (RRR), that additionally stores visual model explanations for each example in the buffer and ensures the model has "the right reasons" for its predictions by encouraging its explanations to remain consistent with those used to make decisions at training time. Without this constraint, explanations drift and forgetting increases as conventional continual learning algorithms learn new tasks. We demonstrate how RRR can be easily added to any memory- or regularization-based approach, reducing forgetting and, more importantly, improving model explanations. We have evaluated our approach in standard and few-shot settings, observed consistent improvements across various CL methods using different architectures and explanation techniques, and demonstrated a promising connection between explainability and continual learning. Our code is available at https://github.com/SaynaEbrahimi/Remembering-for-the-Right-Reasons.
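To make the RRR idea concrete, below is a minimal sketch of how an explanation-consistency term could be added to a replay-based CL loop. This is not the authors' exact implementation: the vanilla-gradient saliency, the L1 drift penalty, and names such as `rrr_loss` and `lambda_rrr` are illustrative assumptions (the paper works with saliency-based visual explanations stored alongside each buffered example).

```python
import torch
import torch.nn.functional as F

def saliency(model, x, y):
    """Vanilla-gradient saliency map: |d score_y / d x|, max over channels (illustrative choice)."""
    x = x.clone().requires_grad_(True)
    score = model(x).gather(1, y.view(-1, 1)).sum()
    grad, = torch.autograd.grad(score, x)
    return grad.abs().amax(dim=1)  # shape (B, H, W)

def rrr_loss(model, buffer_x, buffer_y, buffer_expl):
    """Penalize drift between the current explanation and the one stored in the replay buffer."""
    current_expl = saliency(model, buffer_x, buffer_y)
    return F.l1_loss(current_expl, buffer_expl)

# During training on a new task, one replay step might look like (hypothetical loop):
#   logits = model(x_new)
#   loss = F.cross_entropy(logits, y_new)
#   loss = loss + F.cross_entropy(model(buffer_x), buffer_y)                     # standard replay
#   loss = loss + lambda_rrr * rrr_loss(model, buffer_x, buffer_y, buffer_expl)  # RRR-style term
#   loss.backward(); optimizer.step()
```

The explanation maps are computed and stored when an example is added to the buffer, so the extra term only asks the model to keep giving the same reasons it gave at training time, rather than to match any externally provided explanation.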
