Title

Post-Training Dialogue Summarization using Pseudo-Paraphrasing

Authors

Qi Jia, Yizhu Liu, Haifeng Tang, Kenny Q. Zhu

Abstract

Previous dialogue summarization techniques adapt large language models pretrained on narrative text by injecting dialogue-specific features into the models. These features either require additional knowledge to recognize or make the resulting models harder to tune. To bridge the format gap between dialogues and narrative summaries in dialogue summarization tasks, we propose to post-train pretrained language models (PLMs) to rephrase dialogues into narratives. After that, the model is fine-tuned for dialogue summarization as usual. Comprehensive experiments show that our approach significantly improves vanilla PLMs on dialogue summarization and outperforms other SOTA models in both summary quality and implementation cost.
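The "format gap" the abstract refers to can be illustrated with a toy rewrite: dialogue data arrives as speaker-tagged turns, while summaries read as third-person narrative. The sketch below is our own minimal illustration of that conversion, not the paper's actual pseudo-paraphrasing procedure; the function name and the reported-speech template are assumptions for demonstration only.

```python
# Toy illustration (NOT the paper's method): converting speaker-turn
# dialogue into third-person narrative text, the kind of rewriting the
# dialogue-to-narrative post-training objective targets.

def dialogue_to_narrative(turns):
    """Rewrite [(speaker, utterance), ...] as simple reported speech."""
    sentences = []
    for speaker, utterance in turns:
        # Drop trailing terminal punctuation, then wrap as a quote.
        utterance = utterance.rstrip(".!?")
        sentences.append(f'{speaker} said, "{utterance}."')
    return " ".join(sentences)

dialogue = [
    ("Amanda", "I baked cookies. Do you want some?"),
    ("Jerry", "Sure!"),
]
print(dialogue_to_narrative(dialogue))
# → Amanda said, "I baked cookies. Do you want some." Jerry said, "Sure."
```

In the paper's setting, such (dialogue, narrative) pairs serve as an intermediate post-training signal before the usual summarization fine-tuning.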
