Paper Title

Cross-lingual Transfer of Abstractive Summarizer to Less-resource Language

Paper Authors

Aleš Žagar, Marko Robnik-Šikonja

Paper Abstract

Automatic text summarization extracts important information from texts and presents it in the form of a summary. Abstractive summarization approaches have progressed significantly with the switch to deep neural networks, but the results are not yet satisfactory, especially for languages without large training sets. In several natural language processing tasks, cross-lingual model transfer has been successfully applied to less-resource languages. For summarization, cross-lingual model transfer had not been attempted, because the decoder side of neural models is not reusable and cannot correct generation in the target language. In our work, we use a pre-trained English summarization model, based on deep neural networks and the sequence-to-sequence architecture, to summarize Slovene news articles. We address the problem of the inadequate decoder by using an additional language model to evaluate the text generated in the target language. We test several cross-lingual summarization models fine-tuned with different amounts of target-language data. We assess the models with automatic evaluation measures and conduct a small-scale human evaluation. Automatic evaluation shows that the summaries of our best cross-lingual model are useful and of quality similar to those of a model trained only in the target language. Human evaluation shows that our best model generates summaries with high accuracy and acceptable readability. However, like other abstractive models, our model is not perfect and may occasionally produce misleading or absurd content.
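The core idea in the abstract is to rescore the English-trained decoder's output with a target-language model. Below is a minimal sketch of how such language-model reranking could look, assuming the Hugging Face transformers library; the checkpoint names SUMMARIZER and TARGET_LM are hypothetical placeholders, and the sketch illustrates the general reranking idea rather than the authors' exact pipeline.

```python
# Minimal sketch: generate several candidate summaries with a seq2seq
# summarizer, then keep the one a target-language (e.g., Slovene) LM
# finds most fluent. Checkpoint names are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer

SUMMARIZER = "summarizer-checkpoint"  # hypothetical cross-lingual seq2seq model
TARGET_LM = "slovene-lm-checkpoint"   # hypothetical Slovene causal LM

sum_tok = AutoTokenizer.from_pretrained(SUMMARIZER)
sum_model = AutoModelForSeq2SeqLM.from_pretrained(SUMMARIZER)
lm_tok = AutoTokenizer.from_pretrained(TARGET_LM)
lm_model = AutoModelForCausalLM.from_pretrained(TARGET_LM)

def lm_neg_log_likelihood(text: str) -> float:
    """Average per-token negative log-likelihood under the target-language LM."""
    ids = lm_tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm_model(ids, labels=ids)  # loss is mean cross-entropy per token
    return out.loss.item()

def summarize_reranked(article: str, num_candidates: int = 5) -> str:
    """Produce n-best summaries via beam search, return the most fluent one."""
    inputs = sum_tok(article, return_tensors="pt", truncation=True)
    cand_ids = sum_model.generate(
        **inputs,
        num_beams=num_candidates,
        num_return_sequences=num_candidates,
        max_new_tokens=128,
    )
    candidates = sum_tok.batch_decode(cand_ids, skip_special_tokens=True)
    # Lower average NLL = more fluent in the target language.
    return min(candidates, key=lm_neg_log_likelihood)
```

Scoring by average per-token negative log-likelihood, rather than total likelihood, keeps candidates of different lengths comparable, so the reranker does not simply favor the shortest beam.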
