Paper Title

Few-Shot Few-Shot Learning and the role of Spatial Attention

论文作者

Lifchitz, Yann, Avrithis, Yannis, Picard, Sylvaine

Paper Abstract

Few-shot learning is often motivated by the ability of humans to learn new tasks from few examples. However, standard few-shot classification benchmarks assume that the representation is learned on a limited amount of base class data, ignoring the amount of prior knowledge that a human may have accumulated before learning new tasks. At the same time, even if a powerful representation is available, it may happen in some domain that base class data are limited or non-existent. This motivates us to study a problem where the representation is obtained from a classifier pre-trained on a large-scale dataset of a different domain, assuming no access to its training process, while the base class data are limited to few examples per class and their role is to adapt the representation to the domain at hand rather than learn from scratch. We adapt the representation in two stages, namely on the few base class data if available and on the even fewer data of new tasks. In doing so, we obtain from the pre-trained classifier a spatial attention map that allows focusing on objects and suppressing background clutter. This is important in the new problem, because when base class data are few, the network cannot learn where to focus implicitly. We also show that a pre-trained network may be easily adapted to novel classes, without meta-learning.
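
The abstract's key mechanism is a spatial attention map obtained from the pre-trained classifier, used to focus on objects and suppress background clutter before pooling. Below is a minimal PyTorch sketch of one plausible way to compute such a map, assuming access to the frozen backbone's feature map and the classifier's fully-connected weight matrix; the function names, the max-over-classes scoring, and the min-max normalisation are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def spatial_attention_map(features: torch.Tensor,
                          classifier_weight: torch.Tensor) -> torch.Tensor:
    """Derive a spatial attention map from a pre-trained classifier.

    features:          (B, C, H, W) feature map from the frozen backbone.
    classifier_weight: (num_classes, C) weight of the pre-trained
                       fully-connected classifier, applied convolutionally.
    Returns an attention map of shape (B, 1, H, W) with values in [0, 1].
    """
    # Apply the classifier at every spatial location (a 1x1 convolution),
    # giving per-location class logits of shape (B, num_classes, H, W).
    logits = F.conv2d(features, classifier_weight[:, :, None, None])

    # A location that strongly activates some class is likely on an object;
    # background clutter tends to activate all classes weakly.
    strength = logits.max(dim=1, keepdim=True).values  # (B, 1, H, W)

    # Min-max normalise per image so the map can act as a soft mask in [0, 1].
    s_min = strength.amin(dim=(2, 3), keepdim=True)
    s_max = strength.amax(dim=(2, 3), keepdim=True)
    return (strength - s_min) / (s_max - s_min).clamp_min(1e-8)

def attended_embedding(features: torch.Tensor,
                       attn: torch.Tensor) -> torch.Tensor:
    """Attention-weighted average pooling: suppress background before pooling."""
    weighted = (features * attn).sum(dim=(2, 3))        # (B, C)
    return weighted / attn.sum(dim=(2, 3)).clamp_min(1e-8)
```

The pooled embedding could then feed a simple nearest-centroid or linear classifier fit on the few novel-class examples, consistent with the abstract's observation that a pre-trained network adapts to novel classes without meta-learning.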
