Paper Title
Expanding continual few-shot learning benchmarks to include recognition of specific instances
Paper Authors
Paper Abstract
Continual learning and few-shot learning are important frontiers in progress toward broader Machine Learning (ML) capabilities. Recently, there has been intense interest in combining the two. One of the first examples to do so was the Continual Few-Shot Learning (CFSL) framework of Antoniou et al. (arXiv:2004.11967). In this study, we extend CFSL in two ways to capture a broader range of challenges that are important for intelligent agent behaviour in real-world conditions. First, we increased the number of classes by an order of magnitude, making the results more comparable to standard continual learning experiments. Second, we introduced an 'instance test', which requires recognition of specific instances of classes -- a capability of animal cognition that is usually neglected in ML. For an initial exploration of ML model performance under these conditions, we selected representative baseline models from the original CFSL work and added a model variant with replay. As expected, learning more classes is more difficult than in the original CFSL experiments, and, interestingly, the way in which image instances and classes are presented affects classification performance. Surprisingly, baseline accuracy on the instance test is comparable to that of the other classification tasks, but poor in the presence of significant occlusion and noise. The use of replay for consolidation substantially improves performance on both types of task, particularly the instance test.