Paper Title
When does MAML Work the Best? An Empirical Study on Model-Agnostic Meta-Learning in NLP Applications
Paper Authors
Paper Abstract
Model-Agnostic Meta-Learning (MAML) has been successfully employed in NLP applications, including few-shot text classification and multi-domain low-resource language generation. Many factors, including data quantity, similarity among tasks, and the balance between the general language model and task-specific adaptation, can affect the performance of MAML in NLP, but few works have studied them thoroughly. In this paper, we conduct an empirical study to investigate these factors and, based on the experimental results, conclude when MAML works best.
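
To make the method concrete, below is a minimal sketch of the MAML loop the abstract refers to: adapt a copy of the shared parameters on each task's support set, evaluate the adapted copy on the task's query set, and update the shared initialization from the averaged query losses. This is an illustration only, assuming PyTorch >= 2.0 (for torch.func.functional_call); the nn.Linear model, synthetic regression tasks, learning rates, and helper names (adapt_and_eval, meta_step, make_task) are hypothetical placeholders, not the paper's actual setup.

import torch
import torch.nn as nn
from torch.func import functional_call

def adapt_and_eval(model, params, support, query, inner_lr=0.01, inner_steps=1):
    # Inner loop: take gradient steps on the support set using "fast weights".
    # create_graph=True keeps the graph so the outer update can differentiate
    # through the adaptation (drop it for first-order MAML).
    loss_fn = nn.MSELoss()
    x_s, y_s = support
    for _ in range(inner_steps):
        loss = loss_fn(functional_call(model, params, (x_s,)), y_s)
        grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
        params = {name: p - inner_lr * g
                  for (name, p), g in zip(params.items(), grads)}
    # Evaluate the adapted parameters on the held-out query set.
    x_q, y_q = query
    return loss_fn(functional_call(model, params, (x_q,)), y_q)

model = nn.Linear(4, 1)  # tiny stand-in for a language model
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def meta_step(tasks):
    # Outer loop: average the query losses across tasks and update the
    # shared initialization.
    meta_opt.zero_grad()
    params = dict(model.named_parameters())
    meta_loss = sum(adapt_and_eval(model, params, s, q) for s, q in tasks) / len(tasks)
    meta_loss.backward()
    meta_opt.step()

def make_task():
    # Synthetic linear-regression task, split into support and query halves.
    x, w = torch.randn(8, 4), torch.randn(4, 1)
    y = x @ w
    return (x[:4], y[:4]), (x[4:], y[4:])

meta_step([make_task() for _ in range(4)])

Differentiating through the inner-loop adaptation is what separates MAML from plain multi-task training; knobs such as inner_lr and inner_steps govern the trade-off between the shared initialization and task-specific adaptation that the abstract highlights.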