Paper Title

Multi-Unit Transformers for Neural Machine Translation

Paper Authors

Jianhao Yan, Fandong Meng, Jie Zhou

Paper Abstract

Transformer models achieve remarkable success in Neural Machine Translation. Many efforts have been devoted to deepening the Transformer by stacking several units (i.e., a combination of Multi-Head Attention and FFN) in a cascade, while the investigation of multiple parallel units has drawn little attention. In this paper, we propose the Multi-Unit Transformers (MUTE), which aim to promote the expressiveness of the Transformer by introducing diverse and complementary units. Specifically, we use several parallel units and show that modeling with multiple units improves model performance and introduces diversity. Further, to better leverage the advantage of the multi-unit setting, we design a biased module and sequential dependency that guide and encourage complementariness among different units. Experimental results on three machine translation tasks, the NIST Chinese-to-English, WMT'14 English-to-German and WMT'18 Chinese-to-English, show that the MUTE models significantly outperform the Transformer-Base, by up to +1.52, +1.90 and +1.10 BLEU points, with only a mild drop in inference speed (about 3.1%). In addition, our methods also surpass the Transformer-Big model, with only 54% of its parameters. These results demonstrate the effectiveness of the MUTE, as well as its efficiency in both the inference process and parameter usage.
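
To make the abstract's core idea concrete, below is a minimal PyTorch sketch of a layer built from several parallel "units" (each a multi-head attention plus FFN block). The class names, the hyperparameters, and the simple averaging used to combine unit outputs are illustrative assumptions; the paper's actual biased module and sequential dependency between units are not reproduced here.

```python
# Minimal sketch of the parallel multi-unit idea, under assumptions noted above.
import torch
import torch.nn as nn


class TransformerUnit(nn.Module):
    """One unit: multi-head self-attention followed by a feed-forward network."""

    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        return self.norm2(x + self.ffn(x))


class MultiUnitLayer(nn.Module):
    """Runs several units in parallel on the same input and combines their outputs."""

    def __init__(self, n_units: int, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        self.units = nn.ModuleList(
            [TransformerUnit(d_model, n_heads, d_ff) for _ in range(n_units)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Averaging the unit outputs is an assumption for illustration only;
        # the paper additionally uses a biased module and sequential dependency
        # to encourage complementariness among units.
        return torch.stack([unit(x) for unit in self.units], dim=0).mean(dim=0)


if __name__ == "__main__":
    layer = MultiUnitLayer(n_units=3, d_model=512, n_heads=8, d_ff=2048)
    out = layer(torch.randn(2, 10, 512))  # (batch, seq_len, d_model)
    print(out.shape)
```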
