Paper Title

TempLM: Distilling Language Models into Template-Based Generators

Paper Authors

Tianyi Zhang, Mina Lee, Lisa Li, Ende Shen, Tatsunori B. Hashimoto

Paper Abstract

While pretrained language models (PLMs) have greatly improved text generation, they have also been known to produce unfaithful or inappropriate content. In contrast, classic template-based systems provide strong guarantees of faithfulness at the cost of fluency. We propose TempLM, which achieves the best of both worlds by distilling a PLM into a template-based generator. On the E2E and SynthBio data-to-text datasets, we show that TempLM is more faithful than the original PLM and is more fluent than prior template systems. Notably, on an out-of-domain evaluation, TempLM reduces a finetuned BART model's unfaithfulness rate from 83% to 0%. In a human study, we find that TempLM's templates substantially improve upon human-written ones in BERTScore.
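The abstract only summarizes the approach, so as a rough illustration of why a template-based generator gives the faithfulness guarantee mentioned above, here is a minimal slot-filling sketch. This is not TempLM's actual template-induction or PLM-distillation procedure; the `fill_template` function and the `[field]` slot syntax are hypothetical choices for this example.

```python
# Minimal sketch of template-based generation (illustrative only;
# TempLM's real algorithm induces templates from a finetuned PLM,
# which is not shown here).
import re

def fill_template(template: str, record: dict) -> str:
    """Replace [field] slots in a template with values from a data record.

    Every output token comes either from the fixed template text or
    verbatim from the input record, so the generator cannot hallucinate
    facts -- the faithfulness guarantee the abstract refers to.
    """
    def lookup(match: re.Match) -> str:
        field = match.group(1)
        if field not in record:
            raise KeyError(f"record is missing required field: {field}")
        return str(record[field])

    return re.sub(r"\[(\w+)\]", lookup, template)

# Example: an E2E-style restaurant description.
template = "[name] is a [food] restaurant near [near] with a [rating] rating."
record = {"name": "The Mill", "food": "French",
          "near": "the riverside", "rating": "5 out of 5"}
print(fill_template(template, record))
# -> The Mill is a French restaurant near the riverside with a 5 out of 5 rating.
```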
