Paper Title

Neural Routing in Meta Learning

Paper Authors

Jicang Cai, Saeed Vahidian, Weijia Wang, Mohsen Joneidi, Bill Lin

Paper Abstract

Meta-learning, often referred to as learning-to-learn, is a promising notion raised to mimic human learning: it exploits knowledge of prior tasks while remaining able to adapt quickly to novel tasks. A plethora of models has emerged in this context, improving learning efficiency, robustness, and more. The question that arises here is: can we emulate other aspects of human learning and incorporate them into existing meta-learning algorithms? Inspired by the widely recognized finding in neuroscience that distinct parts of the brain are highly specialized for different types of tasks, we aim to improve the performance of current meta-learning algorithms by selectively using only parts of the model, conditioned on the input task. In this work, we describe an approach that investigates task-dependent dynamic neuron selection in deep convolutional neural networks (CNNs) by leveraging the scaling factors in the batch normalization (BN) layers associated with each convolutional layer. The problem is intriguing because helping different parts of the model learn from different types of tasks may help us train better filters in CNNs and improve the model's generalization performance. We find that the proposed approach, Neural Routing in Meta Learning (NRML), outperforms one of the well-known existing meta-learning baselines on few-shot classification tasks on the most widely used benchmark datasets.
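The abstract does not spell out the exact routing rule, so the following is only a minimal PyTorch sketch of the general idea, not the authors' implementation: rank a convolutional layer's output channels by the magnitude of the BN scaling factor (gamma) and keep only the top fraction, zeroing out the rest. The class name RoutedConvBlock and the keep_ratio parameter are hypothetical, and the per-task conditioning described in the abstract is omitted for brevity.

```python
import torch
import torch.nn as nn

class RoutedConvBlock(nn.Module):
    """Hypothetical conv + BN block that keeps only the channels with the
    largest BN scaling factors (gamma), as a stand-in for neural routing."""

    def __init__(self, in_ch: int, out_ch: int, keep_ratio: float = 0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.keep = max(1, int(out_ch * keep_ratio))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.bn(self.conv(x))
        # Rank channels by |gamma|: a larger scaling factor is read as the
        # channel being more important, so it is "routed through".
        gamma = self.bn.weight.abs()
        top = torch.topk(gamma, self.keep).indices
        mask = torch.zeros_like(gamma)
        mask[top] = 1.0
        # Zero out the non-selected channels before the nonlinearity.
        return torch.relu(h * mask.view(1, -1, 1, 1))

# Usage on a batch of 84x84 images (the standard miniImageNet resolution):
block = RoutedConvBlock(3, 64, keep_ratio=0.5)
out = block(torch.randn(8, 3, 84, 84))  # -> shape (8, 64, 84, 84)
```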
