Paper Title

CheckThat! at CLEF 2020: Enabling the Automatic Identification and Verification of Claims in Social Media

Paper Authors

Alberto Barrón-Cedeño, Tamer Elsayed, Preslav Nakov, Giovanni Da San Martino, Maram Hasanain, Reem Suwaileh, Fatima Haouari

Paper Abstract

We describe the third edition of the CheckThat! Lab, which is part of the 2020 Cross-Language Evaluation Forum (CLEF). CheckThat! proposes four complementary tasks and a related task from previous lab editions, offered in English, Arabic, and Spanish. Task 1 asks to predict which tweets in a Twitter stream are worth fact-checking. Task 2 asks to determine whether a claim posted in a tweet can be verified using a set of previously fact-checked claims. Task 3 asks to retrieve text snippets from a given set of Web pages that would be useful for verifying a target tweet's claim. Task 4 asks to predict the veracity of a target tweet's claim using a set of Web pages and potentially useful snippets in them. Finally, the lab offers a fifth task that asks to predict the check-worthiness of the claims made in English political debates and speeches. CheckThat! features a full evaluation framework. The evaluation is carried out using mean average precision or precision at rank k for ranking tasks, and F1 for classification tasks.
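The metrics named in the abstract (mean average precision, precision at rank k, and F1) are standard IR and classification measures. As a minimal sketch of how they are typically computed, the Python below implements the textbook definitions; it is an illustration only, not the lab's official scorer, and the tweet IDs and labels in the usage example are made up.

# Sketch of the standard definitions of MAP, P@k, and F1.

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k ranked items that are relevant."""
    return sum(1 for item in ranked[:k] if item in relevant) / k

def average_precision(ranked, relevant):
    """Average of precision@i over the ranks i where a relevant item appears."""
    hits, score = 0, 0.0
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            score += hits / i
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """Mean of per-query average precision; runs = [(ranked, relevant), ...]."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

def f1(gold, predicted):
    """F1 for a binary classification task, given parallel 0/1 label lists."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 1)
    fp = sum(1 for g, p in zip(gold, predicted) if g == 0 and p == 1)
    fn = sum(1 for g, p in zip(gold, predicted) if g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical usage: tweet IDs ranked by predicted check-worthiness.
ranked = ["t3", "t1", "t7", "t2"]
relevant = {"t3", "t2"}
print(precision_at_k(ranked, relevant, 2))  # 0.5
print(average_precision(ranked, relevant))  # (1/1 + 2/4) / 2 = 0.75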
