Paper Title

Towards Abstractive Timeline Summarisation using Preference-based Reinforcement Learning

Paper Authors

Yuxuan Ye, Edwin Simpson

Abstract

This paper introduces a novel pipeline for summarising timelines of events reported by multiple news sources. Transformer-based models for abstractive summarisation generate coherent and concise summaries of long documents but can fail to outperform established extractive methods on specialised tasks such as timeline summarisation (TLS). While extractive summaries are more faithful to their sources, they may be less readable and contain redundant or unnecessary information. This paper proposes a preference-based reinforcement learning (PBRL) method for adapting pretrained abstractive summarisers to TLS, which can overcome the drawbacks of extractive timeline summaries. We define a compound reward function that learns from keywords of interest and pairwise preference labels, which we use to fine-tune a pretrained abstractive summariser via offline reinforcement learning. We carry out both automated and human evaluation on three datasets, finding that our method outperforms a comparable extractive TLS method on two of the three benchmark datasets, and participants prefer our method's summaries to those of both the extractive TLS method and the pretrained abstractive model. The method does not require expensive reference summaries and needs only a small number of preferences to align the generated summaries with human preferences.
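
The abstract describes a compound reward built from keywords of interest and pairwise preference labels, which is then used for offline reinforcement-learning fine-tuning. The sketch below is purely illustrative and is not the authors' implementation: it combines a keyword-coverage term with a score from a simple Bradley-Terry style ranker trained on preference pairs. The bag-of-words features, the logistic ranker, and the mixing weight `alpha` are all assumptions made for illustration.

```python
# Illustrative sketch only: a compound reward combining keyword coverage with a
# preference-trained score, as described at a high level in the abstract.
# The bag-of-words features, the logistic (Bradley-Terry style) ranker, and the
# mixing weight `alpha` are assumptions, not the paper's actual components.
import math
from collections import Counter
from typing import List, Tuple


def keyword_reward(summary: str, keywords: List[str]) -> float:
    """Fraction of (single-token) keywords of interest found in the summary."""
    tokens = set(summary.lower().split())
    hits = sum(1 for kw in keywords if kw.lower() in tokens)
    return hits / max(len(keywords), 1)


def bow_features(text: str, vocab: List[str]) -> List[float]:
    """Simple bag-of-words counts over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [float(counts[w]) for w in vocab]


def train_preference_scorer(pairs: List[Tuple[str, str]], vocab: List[str],
                            lr: float = 0.1, epochs: int = 50) -> List[float]:
    """Fit linear weights w so that sigmoid(w . (x_preferred - x_other)) -> 1
    for each (preferred, other) pair: a Bradley-Terry style objective."""
    w = [0.0] * len(vocab)
    for _ in range(epochs):
        for preferred, other in pairs:
            diff = [a - b for a, b in zip(bow_features(preferred, vocab),
                                          bow_features(other, vocab))]
            z = sum(wi * di for wi, di in zip(w, diff))
            p = 1.0 / (1.0 + math.exp(-z))
            # Gradient ascent on the log-likelihood of the observed preference.
            w = [wi + lr * (1.0 - p) * di for wi, di in zip(w, diff)]
    return w


def compound_reward(summary: str, keywords: List[str], w: List[float],
                    vocab: List[str], alpha: float = 0.5) -> float:
    """Weighted mix of keyword coverage and the learned preference score."""
    z = sum(wi * xi for wi, xi in zip(w, bow_features(summary, vocab)))
    pref_score = 1.0 / (1.0 + math.exp(-z))  # squash to [0, 1]
    return alpha * keyword_reward(summary, keywords) + (1.0 - alpha) * pref_score


if __name__ == "__main__":
    # Tiny hypothetical example: one preference pair and two keywords of interest.
    vocab = ["earthquake", "rescue", "aid", "weather"]
    keywords = ["earthquake", "rescue"]
    prefs = [("rescue teams reached the earthquake zone",
              "the weather was mild that week")]
    w = train_preference_scorer(prefs, vocab)
    print(compound_reward("earthquake rescue aid arrived", keywords, w, vocab))
```

In the paper's pipeline, a reward of this kind would drive offline reinforcement-learning fine-tuning of the pretrained abstractive summariser; that training loop is outside the scope of this sketch.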
