Paper Title
Few-Shot Class-Incremental Learning via Entropy-Regularized Data-Free Replay
Paper Authors
Paper Abstract
Few-shot class-incremental learning (FSCIL) has been proposed to enable a deep learning system to incrementally learn new classes from limited data. Recently, a pioneering work claimed that the replay-based methods commonly used in class-incremental learning (CIL) are ineffective and therefore not preferred for FSCIL. If true, this claim would have a significant influence on the FSCIL field. In this paper, we show through empirical results that adopting data replay is surprisingly favorable. However, storing and replaying old data can raise privacy concerns. To address this issue, we instead propose data-free replay, which synthesizes data with a generator without accessing any real data. Observing the effectiveness of uncertain data for knowledge distillation, we impose entropy regularization in generator training to encourage more uncertain examples. Moreover, we propose to relabel the generated data with one-hot-like labels. This modification allows the network to learn by minimizing the cross-entropy loss alone, which mitigates the problem of balancing different objectives in the conventional knowledge distillation approach. Finally, we present extensive experimental results and analysis on CIFAR-100, miniImageNet, and CUB-200 to demonstrate the effectiveness of our proposed method.
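To make the two key ideas in the abstract concrete, below is a minimal PyTorch sketch of (i) entropy regularization in generator training, which rewards uncertain synthesized examples, and (ii) relabeling synthesized data with one-hot-like targets so that a single cross-entropy objective suffices. All names (`G`, `old_model`, `z_dim`, `lam`, `eps`) and the exact smoothing rule are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def entropy_regularizer(logits):
    """Mean predictive entropy H(p); maximizing it favors uncertain samples."""
    p = F.softmax(logits, dim=1)
    log_p = F.log_softmax(logits, dim=1)
    return -(p * log_p).sum(dim=1).mean()


def generator_step(G, old_model, z_dim=128, batch_size=64, lam=1.0):
    """One generator update, sketched under assumed module interfaces."""
    z = torch.randn(batch_size, z_dim)
    x_syn = G(z)                 # synthetic stand-ins for old-class data
    logits = old_model(x_syn)    # frozen model trained on old classes
    base_loss = torch.zeros(())  # placeholder for the usual data-free
                                 # synthesis objective (an assumption here)
    # Subtract the entropy so that minimizing the total loss *increases*
    # uncertainty, steering the generator toward boundary-like examples.
    return base_loss - lam * entropy_regularizer(logits)


def relabel_one_hot_like(logits, eps=0.1):
    """Turn the old model's predictions into near-one-hot targets.
    This smoothing rule is an assumption, not the paper's exact formula."""
    n = logits.size(1)
    hard = logits.argmax(dim=1, keepdim=True)
    target = torch.full_like(logits, eps / n)
    return target.scatter_(1, hard, 1.0 - eps + eps / n)


def incremental_loss(new_model, x_new, y_new, x_syn, old_model):
    """Single cross-entropy objective on real new-class data plus relabeled
    synthetic old-class data; no separate distillation term to balance.
    (For simplicity, assume old and new heads share one class dimension.)"""
    ce_new = F.cross_entropy(new_model(x_new), y_new)
    with torch.no_grad():
        targets = relabel_one_hot_like(old_model(x_syn))
    log_p = F.log_softmax(new_model(x_syn), dim=1)
    ce_old = -(targets * log_p).sum(dim=1).mean()  # soft-target cross-entropy
    return ce_new + ce_old
```

Because the relabeled targets are near-one-hot, both loss terms are ordinary cross-entropies on the same scale, which is what lets the method drop the temperature and weighting hyperparameters that conventional knowledge distillation must tune.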