Paper Title
Out-Of-Distribution Detection In Unsupervised Continual Learning
Paper Authors
Paper Abstract
Unsupervised continual learning aims to learn new tasks incrementally without requiring human annotations. However, most existing methods, especially those targeting image classification, work only in a simplified scenario that assumes all new data belong to new tasks, which is unrealistic when class labels are not provided. Therefore, to perform unsupervised continual learning in real-life applications, an out-of-distribution detector is required at the beginning to identify whether each new data sample corresponds to a new task or to an already learned task, a problem that remains under-explored. In this work, we formulate the problem of Out-of-Distribution Detection in Unsupervised Continual Learning (OOD-UCL) together with a corresponding evaluation protocol. In addition, we propose a novel OOD detection method that first corrects the output bias and then enhances the output confidence for in-distribution data based on task discriminativeness; it can be applied directly without modifying the learning procedures or objectives of continual learning. Following the proposed evaluation protocol, we evaluate our method on the CIFAR-100 dataset and show improved performance compared with existing OOD detection methods under the unsupervised continual learning scenario.
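To make the detection setting concrete, below is a minimal, hypothetical sketch of a confidence-based OOD scoring routine that includes a per-class output-bias correction step, loosely following the two-stage idea outlined in the abstract (correct the output bias, then score in-distribution confidence). The bias estimate (mean logit over already learned data), the temperature parameter, and all function names are illustrative assumptions, not the paper's actual OOD-UCL method.

```python
# Illustrative sketch only: generic bias-corrected confidence scoring for OOD
# detection. This is NOT the authors' method; the bias-estimation heuristic,
# temperature scaling, and toy data below are assumptions for demonstration.
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def estimate_output_bias(id_logits):
    """Estimate a per-class output bias as the mean logit over data from already learned tasks."""
    return id_logits.mean(axis=0)

def ood_scores(logits, class_bias, temperature=1.0):
    """Return maximum softmax probability after subtracting the estimated bias.
    Higher score = more likely in-distribution (an already learned task)."""
    corrected = (logits - class_bias) / temperature
    return softmax(corrected, axis=-1).max(axis=-1)

# Toy usage with random logits standing in for a continual learner's outputs.
rng = np.random.default_rng(0)
id_logits = rng.normal(size=(512, 10))
id_logits[np.arange(512), rng.integers(0, 10, 512)] += 4.0  # confident peak per sample
new_logits = rng.normal(size=(512, 10))                     # flat, low-confidence outputs

bias = estimate_output_bias(id_logits)
print("mean score, learned-task data:", ood_scores(id_logits, bias).mean())
print("mean score, candidate new-task data:", ood_scores(new_logits, bias).mean())
```

In such a pipeline, a threshold on the score (chosen on held-out data from learned tasks) would route each incoming sample either to the existing model or to new-task learning; the paper's evaluation protocol on CIFAR-100 measures exactly this routing decision.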