Paper Title


Reinforcement Learning with Large Action Spaces for Neural Machine Translation

Authors

Asaf Yehudai, Leshem Choshen, Lior Fox, Omri Abend

Abstract


Applying reinforcement learning (RL) after maximum likelihood estimation (MLE) pre-training is a versatile method for enhancing neural machine translation (NMT) performance. However, recent work has argued that the gains produced by RL for NMT are mostly due to promoting tokens that have already received a fairly high probability in pre-training. We hypothesize that the large action space is a main obstacle to RL's effectiveness in MT, and conduct two sets of experiments that lend support to our hypothesis. First, we find that reducing the size of the vocabulary improves RL's effectiveness. Second, we find that effectively reducing the dimension of the action space without changing the vocabulary also yields notable improvement as evaluated by BLEU, semantic similarity, and human evaluation. Indeed, by initializing the network's final fully connected layer (which maps the network's internal dimension to the vocabulary dimension) with a layer that generalizes over similar actions, we obtain a substantial improvement in RL performance: 1.5 BLEU points on average.
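The last idea in the abstract — shrinking the *effective* action space by initializing the final fully connected (vocabulary-projection) layer with a layer that generalizes over similar actions — can be sketched as a low-rank output projection built from token clusters. Everything below is an illustrative stand-in, not the paper's actual procedure: the sizes, the random "pretrained" embeddings, and the crude nearest-centroid clustering are all hypothetical.

```python
import numpy as np

# Hypothetical sizes (not from the paper): model dim d, vocab size V, k clusters.
d, V, k = 16, 100, 10
rng = np.random.default_rng(0)

# Stand-in for pretrained target-side token embeddings.
tok_emb = rng.normal(size=(V, d))

# Group similar tokens: a crude stand-in clustering that assigns each token
# to the nearest of k randomly chosen "centroid" tokens.
centroids = tok_emb[rng.choice(V, size=k, replace=False)]
assign = np.argmax(tok_emb @ centroids.T, axis=1)  # (V,) cluster id per token

# Initialize the final projection so all tokens in a cluster share one output
# row: W = C @ E has rank at most k, instead of the usual full-rank (V, d).
C = np.eye(k)[assign]   # (V, k) one-hot cluster membership
E = centroids           # (k, d) one representative row per cluster
W = C @ E               # (V, d) low-rank vocabulary projection

# Logits for a hidden state h: tokens in the same cluster start with identical
# logits, so a reward signal on one token generalizes to its neighbours.
h = rng.normal(size=(d,))
logits = W @ h
same_cluster = assign == assign[0]
assert np.allclose(logits[same_cluster], logits[same_cluster][0])
```

During RL fine-tuning the layer is free to drift apart again; the point of the initialization is only that early policy-gradient updates move probability mass over groups of similar actions jointly rather than over individual rare tokens.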
