Paper Title

Harnessing the Power of Explanations for Incremental Training: A LIME-Based Approach

Paper Authors

Arnab Neelim Mazumder, Niall Lyons, Ashutosh Pandey, Avik Santra, Tinoosh Mohsenin

Paper Abstract

Explainability of neural network predictions is essential to understand feature importance and gain interpretable insight into neural network performance. However, explanations of neural network outcomes are mostly limited to visualization, and there is little work that seeks to use these explanations as feedback to improve model performance. In this work, model explanations are fed back into feed-forward training to help the model generalize better. To this end, a custom weighted loss is proposed, where the weights are generated from the Euclidean distances between true LIME (Local Interpretable Model-Agnostic Explanations) explanations and model-predicted LIME explanations. Also, in practical training scenarios, where all training data is not available at once, it is imperative to develop a solution that lets the model learn sequentially without losing information about previous data distributions. Thus, the framework combines the custom weighted loss with Elastic Weight Consolidation (EWC) to maintain performance on sequential test sets. The proposed custom training procedure yields a consistent accuracy improvement of 0.5% to 1.5% across all phases of the incremental learning setup, compared to traditional loss-based training, for the keyword spotting task on the Google Speech Commands dataset.
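The abstract does not spell out the exact loss formulation, so the sketch below is only one plausible reading of the idea: per-sample weights grow with the Euclidean distance between the true and model-predicted LIME explanation vectors, and an EWC penalty is added for incremental training. The function names (`explanation_weights`, `weighted_ce_with_ewc`), the `1 + scale * distance` weighting scheme, and the `ewc_lambda` value are assumptions for illustration, not the authors' definitions; computing the LIME explanations and the Fisher information is assumed to happen elsewhere.

```python
import torch
import torch.nn.functional as F

def explanation_weights(true_expl, pred_expl, scale=1.0):
    """Per-sample weights from the Euclidean distance between true and
    model-predicted LIME explanation vectors (hypothetical scheme:
    a larger explanation mismatch gives the sample a larger weight)."""
    dist = torch.norm(true_expl - pred_expl, dim=1)  # Euclidean distance per sample
    return 1.0 + scale * dist

def weighted_ce_with_ewc(logits, targets, true_expl, pred_expl,
                         model, fisher, old_params, ewc_lambda=0.4):
    """Explanation-weighted cross-entropy plus a standard EWC penalty.

    `fisher` and `old_params` are dicts keyed by parameter name, holding the
    (precomputed) Fisher information and the parameters learned on the
    previous task; how they are estimated is not shown here.
    """
    w = explanation_weights(true_expl, pred_expl)
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    loss = (w * per_sample).mean()

    # EWC quadratic penalty: Fisher-weighted distance to the previous-task parameters.
    ewc = 0.0
    for name, p in model.named_parameters():
        if name in fisher:
            ewc = ewc + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return loss + 0.5 * ewc_lambda * ewc
```

In this reading, the explanation-distance weighting emphasizes samples whose predicted explanations disagree with the reference LIME explanations, while the EWC term discourages drift from parameters important to earlier phases of the incremental setup.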
