Paper Title
Learning to Write with Coherence From Negative Examples
Paper Authors
Paper Abstract
Coherence is one of the critical factors that determine the quality of writing. We propose a writing relevance (WR) training method for neural encoder-decoder natural language generation (NLG) models that improves the coherence of the generated continuation by leveraging negative examples. The WR loss regresses the vector representation of the context and the generated sentence toward the positive continuation by contrasting it with the negatives. We compare our approach with Unlikelihood (UL) training on a text continuation task over commonsense natural language inference (NLI) corpora to show which method better models coherence by avoiding unlikely continuations. The preference for our approach in human evaluation demonstrates the efficacy of our method in improving coherence.
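
The abstract does not give the exact form of the WR loss; the sketch below is a minimal, assumed contrastive formulation in PyTorch. The names wr_loss, ctx_repr, pos_repr, and neg_reprs are hypothetical stand-ins for the encoder outputs, and the InfoNCE-style objective is only one plausible way to "contrast" the positive continuation with the negatives, not the paper's confirmed implementation.

import torch
import torch.nn.functional as F

def wr_loss(ctx_repr, pos_repr, neg_reprs, temperature=0.1):
    """Contrastive sketch of a writing-relevance (WR) style loss.

    ctx_repr:  (batch, dim)    vector for the context plus generated sentence
    pos_repr:  (batch, dim)    vector for the positive (coherent) continuation
    neg_reprs: (batch, k, dim) vectors for k negative (incoherent) continuations
    """
    # Similarity between the context representation and the positive continuation.
    pos_sim = F.cosine_similarity(ctx_repr, pos_repr, dim=-1)                  # (batch,)

    # Similarity between the context representation and each negative continuation.
    neg_sim = F.cosine_similarity(ctx_repr.unsqueeze(1), neg_reprs, dim=-1)    # (batch, k)

    # InfoNCE-style objective: the positive should score higher than every negative,
    # pulling the context representation toward the coherent continuation.
    logits = torch.cat([pos_sim.unsqueeze(1), neg_sim], dim=1) / temperature   # (batch, 1 + k)
    target = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, target)

# Example with random tensors standing in for encoder outputs.
batch, k, dim = 4, 3, 256
loss = wr_loss(torch.randn(batch, dim), torch.randn(batch, dim), torch.randn(batch, k, dim))

In practice such a term would be added to the usual generation loss, so the model is trained both to produce the reference continuation and to keep its representation closer to coherent continuations than to the sampled negatives.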