Paper Title

PCL: Peer-Contrastive Learning with Diverse Augmentations for Unsupervised Sentence Embeddings

Paper Authors

Qiyu Wu, Chongyang Tao, Tao Shen, Can Xu, Xiubo Geng, Daxin Jiang

Paper Abstract

Learning sentence embeddings in an unsupervised manner is fundamental in natural language processing. Recent common practice is to couple pre-trained language models with unsupervised contrastive learning, whose success relies on augmenting a sentence with a semantically-close positive instance to construct contrastive pairs. Nonetheless, existing approaches usually depend on a mono-augmenting strategy, which causes learning shortcuts towards the augmenting biases and thus corrupts the quality of sentence embeddings. A straightforward solution is resorting to more diverse positives from a multi-augmenting strategy, while an open question remains about how to unsupervisedly learn from the diverse positives but with uneven augmenting qualities in the text field. As one answer, we propose a novel Peer-Contrastive Learning (PCL) with diverse augmentations. PCL constructs diverse contrastive positives and negatives at the group level for unsupervised sentence embeddings. PCL performs peer-positive contrast as well as peer-network cooperation, which offers an inherent anti-bias ability and an effective way to learn from diverse augmentations. Experiments on STS benchmarks verify the effectiveness of PCL against its competitors in unsupervised sentence embeddings.
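
Since the abstract only sketches the group-level objective, the following is a minimal sketch of what a peer-positive contrast over diverse augmentations could look like. This is not the authors' released implementation: the function name `peer_contrastive_loss`, the InfoNCE-style denominator over in-batch candidates, the temperature value, and the choice to average the loss over each anchor's K positives are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def peer_contrastive_loss(anchors, positive_groups, temperature=0.05):
    """Group-level contrastive loss over diverse positives (illustrative sketch).

    anchors:         (B, D)    embeddings of the B original sentences
    positive_groups: (B, K, D) K "peer" positives per sentence, each produced
                               by a different augmentation strategy
    All other sentences' positives in the batch act as negatives.
    """
    B, K, D = positive_groups.shape
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positive_groups, dim=-1)

    # Similarity of every anchor to every positive of every sentence: (B, B, K)
    sim = torch.einsum('bd,nkd->bnk', anchors, positives) / temperature

    # Denominator: log-sum-exp over all B*K candidates in the batch
    logits = sim.reshape(B, B * K)
    log_denom = logits.logsumexp(dim=-1, keepdim=True)  # (B, 1)

    # Numerator: each anchor's own K peer positives; averaging the resulting
    # K InfoNCE terms keeps any single augmentation from dominating the loss
    own = sim[torch.arange(B), torch.arange(B)]  # (B, K)
    return (log_denom - own).mean()
```

In this reading, each sentence contributes a group of positives from distinct augmentations (e.g., dropout, token shuffling, back-translation), so the gradient is not dominated by any one augmentation bias, which is the anti-shortcut intuition the abstract describes. The peer-network cooperation component would sit on top of such a loss and is not sketched here.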
