Paper Title
Urdu-English Machine Transliteration using Neural Networks
Paper Authors
Abstract
Machine translation has gained much attention in recent years. It is a sub-field of computational linguistics that focuses on translating text from one language to another. Among the various translation techniques, neural networks currently lead the field, offering a single large network with attention mechanisms, sequence-to-sequence architectures, and long short-term memory modelling. Despite significant progress in machine translation, the translation of out-of-vocabulary (OOV) words, which include technical terms, named entities, and foreign words, remains a challenge for current state-of-the-art systems, and the situation worsens when translating between low-resource languages or languages with different structures. Owing to the morphological richness of a language, a word may have different meanings in different contexts. In such scenarios, word-level translation alone is not enough to produce a correct, high-quality translation. Transliteration is a way to take the context of a word or sentence into account during translation. For a low-resource language like Urdu, it is very difficult to find a parallel corpus for transliteration that is large enough to train a system. In this work, we present a transliteration technique based on Expectation Maximization (EM) that is unsupervised and language-independent. The system learns patterns and out-of-vocabulary (OOV) words from the parallel corpus, so there is no need to train it explicitly on a transliteration corpus. The approach is tested on three statistical machine translation (SMT) models, namely phrase-based, hierarchical phrase-based, and factored models, and on two neural machine translation models, namely LSTM and Transformer.
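The abstract does not give the details of the EM procedure, but the core idea of learning character-level transliteration patterns from a parallel word list without supervision can be illustrated with an IBM-Model-1-style EM sketch. The word pairs, the function name, and all parameters below are hypothetical, chosen only to show how expected alignment counts (E-step) and re-normalization (M-step) interact; they are not the paper's actual implementation.

```python
from collections import defaultdict

def em_char_align(pairs, iterations=10):
    """Estimate P(target_char | source_char) from word pairs via EM.

    This is a minimal, hypothetical sketch: probabilities start
    uniform and are refined by alternating E- and M-steps, in the
    spirit of IBM Model 1 applied to characters.
    """
    target_chars = {c for _, tgt in pairs for c in tgt}
    # Uniform initialization over the target character vocabulary.
    prob = defaultdict(lambda: 1.0 / len(target_chars))
    for _ in range(iterations):
        count = defaultdict(float)   # expected co-occurrence counts
        total = defaultdict(float)   # normalizer per source char
        for src, tgt in pairs:
            for tc in tgt:
                # E-step: distribute each target char's mass over
                # the source chars in proportion to current probs.
                norm = sum(prob[(sc, tc)] for sc in src)
                for sc in src:
                    c = prob[(sc, tc)] / norm
                    count[(sc, tc)] += c
                    total[sc] += c
        # M-step: turn expected counts back into probabilities.
        for (sc, tc) in count:
            prob[(sc, tc)] = count[(sc, tc)] / total[sc]
    return prob

# Toy romanized word pairs (illustrative, not from the paper).
pairs = [("kitab", "kitaab"), ("kalam", "qalam"), ("kam", "kaam")]
p = em_char_align(pairs)
```

After a few iterations, recurring correspondences such as `k -> k` accumulate more probability mass than accidental co-occurrences such as `k -> q`, which is how the system can pick up transliteration patterns from parallel data alone.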