Title


Evaluating Unsupervised Text Classification: Zero-shot and Similarity-based Approaches

Authors

Tim Schopf, Daniel Braun, Florian Matthes

Abstract


Text classification of unseen classes is a challenging Natural Language Processing task and is mainly attempted using two different types of approaches. Similarity-based approaches attempt to classify instances based on similarities between text document representations and class description representations. Zero-shot text classification approaches aim to generalize knowledge gained from a training task by assigning appropriate labels of unknown classes to text documents. Although existing studies have already investigated individual approaches to these categories, the experiments in the literature do not provide a consistent comparison. This paper addresses this gap by conducting a systematic evaluation of different similarity-based and zero-shot approaches for text classification of unseen classes. Different state-of-the-art approaches are benchmarked on four text classification datasets, including a new dataset from the medical domain. Additionally, novel SimCSE- and SBERT-based baselines are proposed, as other baselines used in existing work yield weak classification results and are easily outperformed. Finally, the novel similarity-based Lbl2TransformerVec approach is presented, which outperforms previous state-of-the-art approaches in unsupervised text classification. Our experiments show that similarity-based approaches significantly outperform zero-shot approaches in most cases. Additionally, using SimCSE or SBERT embeddings instead of simpler text representations increases similarity-based classification results even further.
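The similarity-based approach described in the abstract can be sketched as follows. A real implementation would embed documents and class descriptions with SBERT or SimCSE sentence encoders; here a toy bag-of-words vectorizer stands in so the sketch stays self-contained, and the class labels and descriptions are invented for illustration.

```python
import math
from collections import Counter

def embed(text, vocab):
    # Toy bag-of-words embedding; a stand-in for SBERT/SimCSE vectors.
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(document, class_descriptions, vocab):
    # Assign the class whose description embedding is most similar
    # to the document embedding -- no labeled training data needed.
    doc_vec = embed(document, vocab)
    scores = {label: cosine(doc_vec, embed(desc, vocab))
              for label, desc in class_descriptions.items()}
    return max(scores, key=scores.get)

# Hypothetical classes with short textual descriptions.
classes = {
    "medicine": "health disease patient treatment hospital",
    "sports":   "game team player score match season",
}
vocab = sorted({w for d in classes.values() for w in d.split()})

print(classify("the patient received treatment at the hospital", classes, vocab))
# → medicine
```

Swapping the toy `embed` for a sentence-embedding model is the only change needed to approximate the similarity-based baselines the paper evaluates; the decision rule (nearest class description by cosine similarity) stays the same.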
