Paper Title
Improving Lexical Embeddings for Robust Question Answering
Paper Authors
Paper Abstract
Recent techniques in Question Answering (QA) have achieved remarkable performance gains, with some QA models even surpassing human performance. However, whether these models truly understand language remains dubious, and they reveal limitations when facing adversarial examples. To strengthen the robustness of QA models and their generalization ability, we propose a representation Enhancement via Semantic and Context constraints (ESC) approach to improve the robustness of lexical embeddings. Specifically, we insert perturbations with semantic constraints and train enhanced contextual representations via a context-constraint loss to better distinguish the context clues for the correct answer. Experimental results show that our approach yields significant robustness improvements on four adversarial test sets.
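The abstract only sketches the two ingredients of ESC: perturbations bounded by a semantic constraint, and a context-constraint loss that separates the correct answer from the perturbed negative. A minimal, hypothetical Python illustration of these ideas follows; every name and number here (`semantic_perturb`, `epsilon`, `min_cos`, `margin`) is an assumption for illustration, not the paper's actual formulation.

```python
import math
import random

def cos(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_perturb(emb, epsilon=0.5, min_cos=0.9, seed=0):
    """Add random noise to a word embedding, then shrink the perturbation
    until the result stays within a cosine-similarity ball around the
    original vector -- a stand-in for the paper's semantic constraint.
    (epsilon/min_cos are illustrative values, not the paper's.)"""
    rng = random.Random(seed)
    pert = [x + epsilon * rng.gauss(0, 1) for x in emb]
    while cos(pert, emb) < min_cos:
        # halve the perturbation; converges to the original embedding
        pert = [x + (p - x) * 0.5 for x, p in zip(emb, pert)]
    return pert

def context_constraint_loss(ctx, ans_pos, ans_neg, margin=0.2):
    """Hinge-style loss encouraging the context representation to be
    closer (in cosine) to the correct answer than to the perturbed
    negative, by at least `margin`."""
    return max(0.0, margin - cos(ctx, ans_pos) + cos(ctx, ans_neg))
```

In a real model the perturbation would be applied to the embedding layer during training and the loss summed over answer candidates; this sketch only makes the constraint and the margin objective concrete.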