Paper Title
g2tmn at Constraint@AAAI2021: Exploiting CT-BERT and Ensembling Learning for COVID-19 Fake News Detection
Paper Authors
Paper Abstract
The COVID-19 pandemic has had a huge impact on various areas of human life. Hence, the coronavirus pandemic and its consequences are being actively discussed on social media. However, not all social media posts are truthful. Many of them spread fake news that causes panic among readers, misinforms people and thus exacerbates the effect of the pandemic. In this paper, we present our results at the Constraint@AAAI2021 Shared Task: COVID-19 Fake News Detection in English. In particular, we propose an approach based on an ensemble of transformer-based COVID-Twitter-BERT (CT-BERT) models. We describe the models used, the text preprocessing steps, and how extra data was added. As a result, our best model achieved a weighted F1-score of 98.69 on the test set (first place on the leaderboard) of this shared task, which attracted 166 submitted teams in total.
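To make the ensembling idea concrete, below is a minimal sketch, not the authors' exact pipeline: it assumes the public HuggingFace checkpoint digitalepidemiologylab/covid-twitter-bert-v2, two ensemble members, and a simple soft-voting scheme; the checkpoint name, member count, and voting strategy are illustrative assumptions, and the paper's fine-tuning, preprocessing, and extra-data details are not reproduced here.

```python
# Minimal sketch: soft-voting ensemble of CT-BERT classifiers for
# binary real/fake tweet classification. Illustrative only; not the
# authors' exact setup. Assumes the public HuggingFace checkpoint below.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

CHECKPOINT = "digitalepidemiologylab/covid-twitter-bert-v2"
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)

# In practice each ensemble member would be fine-tuned on the shared-task
# data (e.g. with different random seeds); here we just load base weights.
models = [
    AutoModelForSequenceClassification.from_pretrained(CHECKPOINT, num_labels=2)
    for _ in range(2)
]

def predict(texts):
    """Average the softmax probabilities of all members (soft voting)."""
    enc = tokenizer(texts, padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(m(**enc).logits, dim=-1) for m in models]
        ).mean(dim=0)
    return probs.argmax(dim=-1)  # one 0/1 label per input text

print(predict(["Drinking hot water cures COVID-19."]))
```

Averaging class probabilities (rather than hard-voting on labels) is one common way to combine fine-tuned transformer classifiers; whether it matches the paper's exact combination rule is an assumption of this sketch.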