Paper Title

Xu at SemEval-2022 Task 4: Pre-BERT Neural Network Methods vs Post-BERT RoBERTa Approach for Patronizing and Condescending Language Detection

Paper Author

Xu, Jinghua

Paper Abstract

This paper describes my participation in SemEval-2022 Task 4: Patronizing and Condescending Language Detection. I participate in both subtasks: Patronizing and Condescending Language (PCL) Identification and Patronizing and Condescending Language Categorization, with the main focus on subtask 1. The experiments compare pre-BERT neural network (NN) based systems against the post-BERT pretrained language model RoBERTa. This research finds that the NN-based systems perform worse on the task than the pretrained language models. The top-performing RoBERTa system is ranked 26th out of 78 teams (F1-score: 54.64) in subtask 1, and 23rd out of 49 teams (F1-score: 30.03) in subtask 2.
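
For readers unfamiliar with the setup, the following is a minimal sketch of how a RoBERTa classifier for subtask 1 (binary PCL identification) can be fine-tuned with the Hugging Face transformers library. The checkpoint, hyperparameters, and toy examples are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch: fine-tuning RoBERTa for binary PCL identification
# (subtask 1). Illustrative only; the paper's actual model size,
# hyperparameters, and preprocessing may differ.
import torch
from transformers import RobertaTokenizer, RobertaForSequenceClassification

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2  # PCL vs. non-PCL
)

# Hypothetical examples standing in for the task's training data.
texts = ["They need our help to survive.", "The committee met on Tuesday."]
labels = torch.tensor([1, 0])  # 1 = PCL, 0 = not PCL

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One training step; a real run would loop over batches and epochs.
model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss is computed internally
outputs.loss.backward()
optimizer.step()
```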
