Paper Title
Domain-Oriented Prefix-Tuning: Towards Efficient and Generalizable Fine-tuning for Zero-Shot Dialogue Summarization
Paper Authors
Paper Abstract
State-of-the-art abstractive dialogue summarizers lack generalization ability on new domains, and existing research on domain adaptation for summarization generally relies on large-scale pre-training. To explore lightweight fine-tuning methods for domain adaptation of dialogue summarization, we propose an efficient and generalizable Domain-Oriented Prefix-tuning model, which uses a prefix module initialized with domain words to alleviate domain entanglement and adopts discrete prompts to guide the model toward the key contents of dialogues and enhance generalization. We conduct zero-shot experiments and build domain adaptation benchmarks on two multi-domain dialogue summarization datasets, TODSum and QMSum. Extensive experiments and qualitative analysis demonstrate the effectiveness of our method.
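The abstract describes a prefix module whose trainable parameters are initialized from domain words and trained while the backbone summarizer stays frozen. The sketch below illustrates that idea in PyTorch, assuming a Transformer encoder-decoder backbone such as BART; the class name, the re-parameterization MLP, and all hyperparameters are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class DomainWordPrefix(nn.Module):
    """Minimal sketch (assumed, not the paper's code) of a prefix module
    initialized from domain-word embeddings, in the spirit of prefix-tuning."""

    def __init__(self, embedding: nn.Embedding, domain_word_ids: torch.LongTensor,
                 num_layers: int, num_heads: int, head_dim: int):
        super().__init__()
        self.prefix_len = domain_word_ids.numel()
        hidden = num_heads * head_dim
        # Start the prefix from domain-word embeddings rather than random noise.
        init = embedding(domain_word_ids).detach()            # (prefix_len, emb_dim)
        self.prefix_embed = nn.Parameter(init.clone())
        # A small MLP re-parameterizes the prefix into per-layer key/value states,
        # following the common prefix-tuning recipe (an assumption here).
        self.proj = nn.Sequential(
            nn.Linear(init.size(-1), hidden),
            nn.Tanh(),
            nn.Linear(hidden, num_layers * 2 * hidden),
        )
        self.num_layers, self.num_heads, self.head_dim = num_layers, num_heads, head_dim

    def forward(self, batch_size: int):
        # Produce past_key_values-style tensors that a frozen backbone can attend
        # to; only the prefix parameters receive gradients during fine-tuning.
        out = self.proj(self.prefix_embed)                    # (prefix_len, L*2*H)
        out = out.view(self.prefix_len, self.num_layers, 2,
                       self.num_heads, self.head_dim)
        out = out.permute(1, 2, 3, 0, 4)                      # (L, 2, heads, len, dim)
        return [
            (layer[0].unsqueeze(0).expand(batch_size, -1, -1, -1),
             layer[1].unsqueeze(0).expand(batch_size, -1, -1, -1))
            for layer in out
        ]

# Hypothetical usage with a BART-base-sized backbone and a handful of domain
# words (e.g. "hotel", "restaurant") already mapped to vocabulary ids:
#   prefix = DomainWordPrefix(model.get_input_embeddings(),
#                             torch.tensor(domain_word_ids),
#                             num_layers=6, num_heads=12, head_dim=64)
#   past_key_values = prefix(batch_size)
```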