Paper Title

Overlapping Word Removal is All You Need: Revisiting Data Imbalance in Hope Speech Detection

Paper Authors

Hariharan RamakrishnaIyer LekshmiAmmal, Manikandan Ravikiran, Gayathri Nisha, Navyasree Balamuralidhar, Adithya Madhusoodanan, Anand Kumar Madasamy, Bharathi Raja Chakravarthi

Paper Abstract

Hope Speech Detection, a task of recognizing positive expressions, has made significant strides recently. However, much of the current work focuses on model development without considering the issue of inherent imbalance in the data. Our work revisits this issue in hope-speech detection by introducing focal loss, data augmentation, and pre-processing strategies. Accordingly, we find that introducing focal loss as part of Multilingual-BERT's (M-BERT) training process mitigates the effect of class imbalance and improves overall F1-Macro by 0.11. At the same time, contextual and back-translation-based word augmentation with M-BERT improves results by 0.10 over baseline despite imbalance. Finally, we show that pre-processing based on overlapping word removal, though simple, improves F1-Macro by 0.28. In due course, we present detailed studies depicting the behavior of each of these strategies and summarize key findings from our empirical results for those interested in getting the most out of M-BERT for hope speech detection under real-world conditions of data imbalance.
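As a rough illustration of two of the strategies named in the abstract, the sketch below shows a generic multi-class focal loss and a naive overlapping-word-removal pre-processing step in Python/PyTorch. The function names, the gamma=2.0 default, and the class-vocabulary-intersection rule are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: down-weights well-classified examples so
    training focuses on hard, minority-class instances.
    gamma=2.0 and the optional alpha weights are illustrative defaults,
    not necessarily the paper's settings."""
    log_probs = F.log_softmax(logits, dim=-1)                # (batch, classes)
    target_log_p = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    target_p = target_log_p.exp()                            # p_t per example
    weight = (1.0 - target_p) ** gamma                       # focusing term
    if alpha is not None:                                     # optional per-class weights
        weight = weight * alpha.gather(0, targets)
    return -(weight * target_log_p).mean()

def remove_overlapping_words(texts, labels):
    """Drop tokens that appear in the vocabularies of more than one class,
    keeping only class-discriminative words. This intersection rule is a
    simplified reading of the pre-processing idea, not its exact definition."""
    class_vocab = {}
    for text, label in zip(texts, labels):
        class_vocab.setdefault(label, set()).update(text.lower().split())
    overlap = set.intersection(*class_vocab.values()) if len(class_vocab) > 1 else set()
    return [" ".join(w for w in t.split() if w.lower() not in overlap) for t in texts]
```

In a fine-tuning loop, `focal_loss` would simply replace the standard cross-entropy criterion on M-BERT's classification logits, while `remove_overlapping_words` would be applied to the training texts before tokenization; both are sketches under the stated assumptions rather than the authors' released code.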
