Paper Title
XtremeDistil: Multi-stage Distillation for Massive Multilingual Models
Paper Authors
Paper Abstract
Deep and large pre-trained language models are the state of the art for various natural language processing tasks. However, the huge size of these models can be a deterrent to using them in practice. Some recent and concurrent works use knowledge distillation to compress these huge models into shallow ones. In this work, we study knowledge distillation with a focus on multilingual Named Entity Recognition (NER). In particular, we study several distillation strategies and propose a stage-wise optimization scheme that leverages the teacher's internal representations, is agnostic of the teacher architecture, and outperforms the strategies employed in prior work. Additionally, we investigate the role of several factors, such as the amount of unlabeled data, annotation resources, model architecture, and inference latency. We show that our approach leads to massive compression of MBERT-like teacher models by up to 35x in terms of parameters and 51x in terms of latency for batch inference, while retaining 95% of the teacher's F1-score for NER over 41 languages.
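As a rough illustration of the stage-wise scheme described in the abstract, the sketch below pairs a shallow student tagger with two loss stages: stage 1 fits the student's hidden states to a teacher's internal representation through a learned projection (which keeps the scheme agnostic of the teacher architecture), and stage 2 fits the student's per-token logits to the teacher's soft predictions. This is a minimal sketch, not the authors' implementation; the student architecture, hidden sizes, temperature, and placeholder teacher tensors are all illustrative assumptions.

# Minimal sketch (not the authors' code) of stage-wise distillation for a NER student.
# All module names, sizes, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class StudentTagger(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=300, hidden_dim=600,
                 teacher_dim=768, num_labels=9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shallow BiLSTM student in place of the deep multilingual teacher.
        self.encoder = nn.LSTM(emb_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        self.proj = nn.Linear(hidden_dim, teacher_dim)   # map to teacher hidden size
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))
        return self.proj(h), self.classifier(h)          # (projected states, logits)

def stage1_loss(student_proj, teacher_hidden):
    # Stage 1: match an internal teacher representation on (unlabeled) transfer text.
    return nn.functional.mse_loss(student_proj, teacher_hidden)

def stage2_loss(student_logits, teacher_logits, temperature=1.0):
    # Stage 2: match the teacher's soft per-token label distribution.
    t = temperature
    log_p_student = nn.functional.log_softmax(student_logits / t, dim=-1)
    p_teacher = nn.functional.softmax(teacher_logits / t, dim=-1)
    return nn.functional.kl_div(log_p_student, p_teacher,
                                reduction="batchmean") * (t * t)

# Usage with dummy tensors standing in for teacher outputs on a transfer set.
student = StudentTagger()
tokens = torch.randint(0, 30000, (2, 16))
teacher_hidden = torch.randn(2, 16, 768)   # placeholder for MBERT-layer hidden states
teacher_logits = torch.randn(2, 16, 9)     # placeholder for teacher NER logits
proj, logits = student(tokens)
loss = stage1_loss(proj, teacher_hidden)   # stage 1; stage 2 would use stage2_loss

In such a scheme, the stages are typically optimized one after another rather than jointly, so the student first internalizes the teacher's representation space before being trained to reproduce its predictions.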