Paper Title

Feature Extractor Stacking for Cross-domain Few-shot Learning

Authors

Hongyu Wang, Eibe Frank, Bernhard Pfahringer, Michael Mayo, Geoffrey Holmes

Abstract


Cross-domain few-shot learning (CDFSL) addresses learning problems where knowledge needs to be transferred from one or more source domains into an instance-scarce target domain with an explicitly different distribution. Recently published CDFSL methods generally construct a universal model that combines knowledge of multiple source domains into one feature extractor. This enables efficient inference but necessitates re-computation of the extractor whenever a new source domain is added. Some of these methods are also incompatible with heterogeneous source domain extractor architectures. We propose feature extractor stacking (FES), a new CDFSL method for combining information from a collection of extractors, that can utilise heterogeneous pretrained extractors out of the box and does not maintain a universal model that needs to be re-computed when its extractor collection is updated. We present the basic FES algorithm, which is inspired by the classic stacked generalisation approach, and also introduce two variants: convolutional FES (ConFES) and regularised FES (ReFES). Given a target-domain task, these algorithms fine-tune each extractor independently, use cross-validation to extract training data for stacked generalisation from the support set, and learn a simple linear stacking classifier from this data. We evaluate our FES methods on the well-known Meta-Dataset benchmark, targeting image classification with convolutional neural networks, and show that they can achieve state-of-the-art performance.
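The stacked-generalisation step the abstract describes — per-extractor classifiers whose cross-validated predictions on the support set become training data for a simple linear stacking classifier — can be sketched as follows. This is a minimal illustration, not the paper's implementation: scikit-learn logistic regressions stand in for fine-tuned CNN feature extractors, and the support-set shapes and feature dimensions are assumptions chosen for the example.

```python
# Sketch of stacked generalisation over multiple feature extractors,
# with simple classifiers standing in for the paper's fine-tuned CNNs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Hypothetical 5-way, 5-shot support set: 25 labelled instances.
# Two stand-in "extractors" each provide a 64-dimensional feature view.
y_support = np.repeat(np.arange(5), 5)
views = [rng.normal(loc=y_support[:, None], size=(25, 64)) for _ in range(2)]

# Step 1: for each extractor, cross-validation produces out-of-fold
# class-probability predictions on the support set; concatenated, these
# form the training data for the stacking level.
meta_features = np.hstack([
    cross_val_predict(LogisticRegression(max_iter=1000), X, y_support,
                      cv=5, method="predict_proba")
    for X in views
])

# Step 2: learn a simple linear stacking classifier from that data.
stacker = LogisticRegression(max_iter=1000).fit(meta_features, y_support)
print(meta_features.shape)  # one probability block per extractor
```

Because the stacking classifier sees only class probabilities rather than raw features, new extractors can be added by appending their probability blocks — no universal model has to be recomputed, which is the property the abstract emphasises.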
