Paper Title

Meta learning to classify intent and slot labels with noisy few shot examples

Authors

Li, Shang-Wen, Krone, Jason, Dong, Shuyan, Zhang, Yi, Al-onaizan, Yaser

Abstract

Recently deep learning has dominated many machine learning areas, including spoken language understanding (SLU). However, deep learning models are notorious for being data-hungry, and the heavily optimized models are usually sensitive to the quality of the training examples provided and the consistency between training and inference conditions. To improve the performance of SLU models on tasks with noisy and low training resources, we propose a new SLU benchmarking task: few-shot robust SLU, where SLU comprises two core problems, intent classification (IC) and slot labeling (SL). We establish the task by defining few-shot splits on three public IC/SL datasets, ATIS, SNIPS, and TOP, and adding two types of natural noises (adaptation example missing/replacing and modality mismatch) to the splits. We further propose a novel noise-robust few-shot SLU model based on prototypical networks. We show the model consistently outperforms the conventional fine-tuning baseline and another popular meta-learning method, Model-Agnostic Meta-Learning (MAML), in terms of achieving better IC accuracy and SL F1, and yielding smaller performance variation when noises are present.
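The abstract's core idea, prototypical networks for few-shot classification, can be illustrated with a minimal sketch (not the paper's implementation): each intent class is represented by the mean embedding of its few support examples, and a query utterance is assigned to the nearest prototype. All embeddings and labels below are toy values for illustration.

```python
# Minimal prototypical-network classification sketch (illustrative only,
# not the paper's model): prototype = mean of support embeddings per class,
# prediction = class with the nearest prototype in Euclidean distance.
import numpy as np

def prototypes(support_embs: np.ndarray, support_labels: np.ndarray) -> dict:
    """Map each label to the mean embedding of its support examples."""
    return {c: support_embs[support_labels == c].mean(axis=0)
            for c in np.unique(support_labels)}

def classify(query_emb: np.ndarray, protos: dict):
    """Return the label whose prototype is closest to the query embedding."""
    return min(protos, key=lambda c: np.linalg.norm(query_emb - protos[c]))

# Toy 2-D embeddings for two hypothetical intents (labels 0 and 1).
support = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels)
print(classify(np.array([0.1, 0.0]), protos))  # nearest to intent 0
```

In the paper's setting the embeddings would come from a learned encoder over utterances; the nearest-prototype decision rule itself is what makes the method robust with only a handful of (possibly noisy) support examples per class.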
