Paper Title
Language Detoxification with Attribute-Discriminative Latent Space
Paper Authors
Paper Abstract
Transformer-based Language Models (LMs) have achieved impressive results on natural language understanding tasks, but they can also generate toxic text such as insults, threats, and profanity, limiting their real-world applications. To overcome this issue, a few text generation approaches aim to detoxify toxic texts using additional LMs or perturbations. However, previous methods require excessive memory, computation, and time, which are serious bottlenecks in their real-world application. To address such limitations, we propose an effective yet efficient method for language detoxification using an attribute-discriminative latent space. Specifically, we project the latent space of an original Transformer LM onto a discriminative latent space that well-separates texts by their attributes, using a projection block and an attribute discriminator. This allows the LM to control the text generation to be non-toxic with minimal memory and computation overhead. We validate our model, the Attribute-Discriminative Language Model (ADLM), on detoxified language and dialogue generation tasks, on which our method significantly outperforms baselines in both performance and efficiency.
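To make the architecture described in the abstract concrete, below is a minimal sketch of the core idea: a small projection block maps the backbone LM's hidden states into a latent space, and an attribute discriminator trained on that space pushes it to separate toxic from non-toxic text. This is an illustrative reconstruction, not the authors' implementation; the module names (ProjectionBlock, AttributeDiscriminator), the GPT-2 backbone, and the latent dimension are assumptions.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model, GPT2Tokenizer


class ProjectionBlock(nn.Module):
    """Maps the LM's hidden states into a lower-dimensional latent space
    intended to separate texts by attribute (e.g., toxic vs. non-toxic)."""

    def __init__(self, hidden_dim: int, latent_dim: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(hidden_dim, latent_dim),
            nn.Tanh(),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return self.proj(hidden_states)


class AttributeDiscriminator(nn.Module):
    """Classifies latent representations by attribute; its training loss
    shapes the projected space to be attribute-discriminative."""

    def __init__(self, latent_dim: int, num_attributes: int = 2):
        super().__init__()
        self.classifier = nn.Linear(latent_dim, num_attributes)

    def forward(self, latent: torch.Tensor) -> torch.Tensor:
        # Mean-pool over the sequence dimension, then classify.
        return self.classifier(latent.mean(dim=1))


# Only the projection block and discriminator add new parameters on top of
# the backbone, which is what keeps the memory/compute overhead small
# compared with methods that train or run an entire additional LM.
backbone = GPT2Model.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
projection = ProjectionBlock(backbone.config.hidden_size, latent_dim=128)
discriminator = AttributeDiscriminator(latent_dim=128)

inputs = tokenizer("You are a wonderful person.", return_tensors="pt")
hidden = backbone(**inputs).last_hidden_state   # (1, seq_len, 768)
latent = projection(hidden)                     # (1, seq_len, 128)
attribute_logits = discriminator(latent)        # (1, 2)

# During training, a cross-entropy loss on attribute_logits against
# toxic/non-toxic labels would be added to the usual LM objective; at
# generation time, conditioning on the non-toxic attribute in this latent
# space steers decoding away from toxic continuations.
```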