Paper Title
JARVix at SemEval-2022 Task 2: It Takes One to Know One? Idiomaticity Detection using Zero and One-Shot Learning
Paper Authors
Paper Abstract
Large Language Models have been successful in a wide variety of Natural Language Processing tasks by capturing the compositionality of text representations. In spite of their great success, these vector representations fail to capture the meaning of idiomatic multi-word expressions (MWEs). In this paper, we focus on the detection of idiomatic expressions using binary classification. We use a dataset consisting of literal and idiomatic usages of MWEs in English and Portuguese. Thereafter, we perform the classification in two different settings, zero-shot and one-shot, to determine whether a given sentence contains an idiom. N-shot classification for this task is defined by the number N of idioms shared between the training and test sets. We train multiple Large Language Models in both settings and achieve a macro F1 score of 0.73 in the zero-shot setting and 0.85 in the one-shot setting. An implementation of our work can be found at https://github.com/ashwinpathak20/Idiomaticity_Detection_Using_Few_Shot_Learning.
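As a rough illustration of the binary classification setup the abstract describes, the sketch below fine-tunes a pretrained multilingual transformer to label a sentence as idiomatic (1) or literal (0). The backbone choice (xlm-roberta-base), the Hugging Face Trainer pipeline, the hyperparameters, and the toy sentences are assumptions made for illustration only; they are not taken from the paper, whose actual models and data splits may differ.

```python
# Minimal sketch: binary idiomaticity classification with a multilingual
# transformer. Assumptions: xlm-roberta-base as the backbone (covers English
# and Portuguese) and toy example sentences; not the paper's exact setup.
import torch
from torch.utils.data import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

MODEL_NAME = "xlm-roberta-base"  # assumed multilingual backbone


class IdiomDataset(Dataset):
    """Sentences paired with labels: 1 = idiomatic MWE usage, 0 = literal."""

    def __init__(self, sentences, labels, tokenizer, max_len=128):
        self.enc = tokenizer(sentences, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        item["labels"] = self.labels[idx]
        return item


tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                           num_labels=2)

# Toy training pairs. In the zero-shot setting, the idioms in the training and
# test sets are disjoint; in the one-shot setting, one idiom is shared.
train_ds = IdiomDataset(
    ["He kicked the bucket last night.",          # idiomatic usage
     "She kicked the bucket down the hill."],     # literal usage
    [1, 0],
    tokenizer)

args = TrainingArguments(output_dir="idiom-clf",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)

Trainer(model=model, args=args, train_dataset=train_ds).train()
```

At inference time, the fine-tuned model scores an unseen sentence and the argmax over the two logits gives the idiomatic/literal prediction, which can then be evaluated with a macro F1 score as reported above.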