Paper Title

Generative Adversarial Training Can Improve Neural Language Models

Authors

Sajad Movahedi, Azadeh Shakery

Abstract

While deep learning in the form of recurrent neural networks (RNNs) has led to significant improvements in neural language modeling, the fact that such models are extremely prone to overfitting remains a largely unresolved issue. In this paper we propose a regularization method based on generative adversarial networks (GANs) and adversarial training (AT) that can prevent overfitting in neural language models. Unlike common adversarial training methods such as the fast gradient sign method (FGSM), which require a second back-propagation through time and therefore effectively at least double the training time, the overhead of our method does not exceed 20% of the baseline training time.
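
For context, the FGSM-style adversarial training that the abstract contrasts against perturbs the input embeddings in the direction of the sign of the loss gradient and then back-propagates a second time through the perturbed batch, which is where the roughly doubled training cost comes from. The sketch below only illustrates that baseline cost; it is not the authors' GAN-based regularizer, and the toy LSTM language model, the batch of random token ids, and the epsilon value are all placeholder assumptions.

```python
# Minimal sketch of FGSM-style adversarial training on the input embeddings of a
# toy LSTM language model (NOT the paper's GAN-based method). It only illustrates
# the extra forward/backward pass that roughly doubles training time.
# All sizes, the data, and epsilon are placeholder assumptions.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim, epsilon = 1000, 64, 128, 0.01

embedding = nn.Embedding(vocab_size, embed_dim)
rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
head = nn.Linear(hidden_dim, vocab_size)
criterion = nn.CrossEntropyLoss()
params = list(embedding.parameters()) + list(rnn.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=0.1)

def lm_loss(embeds, targets):
    """Next-token cross-entropy given already-embedded inputs."""
    outputs, _ = rnn(embeds)
    logits = head(outputs)
    return criterion(logits.reshape(-1, vocab_size), targets.reshape(-1))

# Dummy batch of token ids; predict token t+1 from tokens up to t.
tokens = torch.randint(0, vocab_size, (8, 20))
inputs, targets = tokens[:, :-1], tokens[:, 1:]

# Pass 1: clean loss, keeping the gradient w.r.t. the embedding activations.
embeds = embedding(inputs)
embeds.retain_grad()
optimizer.zero_grad()
clean_loss = lm_loss(embeds, targets)
clean_loss.backward()

# FGSM perturbation: a step of size epsilon along the sign of the embedding gradient.
perturbation = epsilon * embeds.grad.sign()

# Pass 2: adversarial loss on the perturbed embeddings -- this second
# back-propagation through time is the overhead the abstract refers to.
adv_loss = lm_loss(embedding(inputs) + perturbation, targets)
adv_loss.backward()  # parameter gradients from both passes accumulate

optimizer.step()
print(f"clean loss {clean_loss.item():.3f}, adversarial loss {adv_loss.item():.3f}")
```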
