Paper Title
Laplacian Regularized Few-Shot Learning
Paper Authors
Paper Abstract
We propose a transductive Laplacian-regularized inference for few-shot tasks. Given any feature embedding learned from the base classes, we minimize a quadratic binary-assignment function containing two terms: (1) a unary term assigning query samples to the nearest class prototype, and (2) a pairwise Laplacian term encouraging nearby query samples to have consistent label assignments. Our transductive inference does not re-train the base model, and can be viewed as a graph clustering of the query set, subject to supervision constraints from the support set. We derive a computationally efficient bound optimizer of a relaxation of our function, which computes independent (parallel) updates for each query sample, while guaranteeing convergence. Following a simple cross-entropy training on the base classes, and without complex meta-learning strategies, we conduct comprehensive experiments over five few-shot learning benchmarks. Our LaplacianShot consistently outperforms state-of-the-art methods by significant margins across different models, settings, and datasets. Furthermore, our transductive inference is very fast, with computational times that are close to inductive inference, and can be used for large-scale few-shot tasks.
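The inference described in the abstract — a unary prototype-distance term plus a pairwise Laplacian term, minimized by independent per-query updates — can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function name `laplacianshot_infer`, the parameters `knn` and `lam`, the simple binary k-NN affinities, and the softmax-style relaxed updates are all simplifying assumptions made here for clarity.

```python
import numpy as np

def _softmax(z):
    # Row-wise softmax with the usual max-shift for numerical stability.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def laplacianshot_infer(query_feats, prototypes, knn=3, lam=1.0, n_iters=20):
    """Sketch of Laplacian-regularized transductive few-shot inference.

    query_feats: (N, D) query embeddings from the fixed base model.
    prototypes:  (C, D) class prototypes computed from the support set.
    Returns soft label assignments Y of shape (N, C).
    """
    N = query_feats.shape[0]

    # Unary term: squared distance of each query sample to each prototype.
    d = ((query_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)  # (N, C)

    # Pairwise term: symmetric binary k-NN affinity graph over the query set
    # (an assumed, simple choice of affinities for this sketch).
    pair = ((query_feats[:, None, :] - query_feats[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(pair, np.inf)           # exclude self-affinity
    W = np.zeros((N, N))
    nn = np.argsort(pair, axis=1)[:, :knn]
    W[np.repeat(np.arange(N), knn), nn.ravel()] = 1.0
    W = np.maximum(W, W.T)                   # symmetrize

    # Relaxed bound-optimization loop: each query is updated independently
    # (in parallel) given the previous assignments of its graph neighbors.
    Y = _softmax(-d)                         # initialize from the unary term
    for _ in range(n_iters):
        Y = _softmax(-d + lam * (W @ Y))
    return Y
```

With `lam=0` the pairwise term vanishes and this reduces to nearest-prototype (inductive) classification; increasing `lam` propagates label evidence along the query-set graph, which is what makes the inference transductive.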