Paper Title
HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection
Paper Authors
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, Animesh Mukherjee
Paper Abstract
Hate speech is a challenging issue plaguing online social media. While better models for hate speech detection are continuously being developed, there is little research on the bias and interpretability aspects of hate speech. In this paper, we introduce HateXplain, the first benchmark hate speech dataset covering multiple aspects of the issue. Each post in our dataset is annotated from three different perspectives: the basic, commonly used 3-class classification (i.e., hate, offensive, or normal); the target community (i.e., the community that has been the victim of hate speech/offensive speech in the post); and the rationales, i.e., the portions of the post on which the annotators' labelling decision (as hate, offensive, or normal) is based. We utilize existing state-of-the-art models and observe that even models that perform very well on classification do not score high on explainability metrics such as plausibility and faithfulness. We also observe that models that utilize the human rationales for training perform better at reducing unintended bias towards target communities. We have made our code and dataset public at https://github.com/punyajoy/HateXplain
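To make the three annotation perspectives concrete, below is a minimal Python sketch of what one record in such a dataset could look like: a tokenized post, per-annotator class labels and target communities, and per-annotator binary rationale masks over the tokens. The field names echo the layout of the public HateXplain repository, but the example text, values, and helper functions here are illustrative assumptions, not the repository's exact schema or code.

```python
from collections import Counter

# Illustrative record shaped like a HateXplain-style entry
# (field names follow the public repo; text and masks are made up here).
record = {
    "post_id": "example_001",
    "post_tokens": ["this", "post", "is", "an", "illustrative", "example"],
    # One entry per annotator: class label and targeted community.
    "annotators": [
        {"label": "offensive", "annotator_id": 1, "target": ["Women"]},
        {"label": "offensive", "annotator_id": 2, "target": ["Women"]},
        {"label": "normal",    "annotator_id": 3, "target": ["None"]},
    ],
    # One binary mask per rationale-providing annotator, marking the
    # tokens that justify the chosen label.
    "rationales": [
        [0, 0, 0, 1, 1, 0],
        [0, 0, 0, 1, 1, 1],
    ],
}

def majority_label(record):
    """Final label = majority vote over the annotators."""
    labels = Counter(a["label"] for a in record["annotators"])
    return labels.most_common(1)[0][0]

def rationale_tokens(record, threshold=0.5):
    """Tokens marked as rationale by at least `threshold` of the annotators."""
    masks = record["rationales"]
    n = len(masks)
    votes_per_token = map(sum, zip(*masks))
    return [tok for tok, v in zip(record["post_tokens"], votes_per_token)
            if n and v / n >= threshold]

print(majority_label(record))    # -> "offensive"
print(rationale_tokens(record))  # -> ["an", "illustrative", "example"]
```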
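The faithfulness metrics mentioned in the abstract are not defined here; in the ERASER benchmark, which this line of work typically follows, faithfulness is commonly measured via comprehensiveness (how much the predicted class probability drops when the rationale tokens are removed) and sufficiency (how well the rationale tokens alone support the prediction). The sketch below shows those two quantities against a hypothetical stand-in classifier `toy_predict`; it is a minimal illustration, not the paper's evaluation code.

```python
from typing import Callable, List

Predictor = Callable[[List[str]], List[float]]  # tokens -> class probabilities

def comprehensiveness(predict: Predictor, tokens: List[str],
                      mask: List[int], label: int) -> float:
    """p(label | full post) - p(label | post with rationale tokens removed).
    A large drop suggests the model genuinely relied on the rationale."""
    kept = [t for t, m in zip(tokens, mask) if not m]
    return predict(tokens)[label] - predict(kept)[label]

def sufficiency(predict: Predictor, tokens: List[str],
                mask: List[int], label: int) -> float:
    """p(label | full post) - p(label | rationale tokens only).
    A small difference suggests the rationale alone carries the evidence."""
    kept = [t for t, m in zip(tokens, mask) if m]
    return predict(tokens)[label] - predict(kept)[label]

# Hypothetical stand-in model: probability of class 1 depends only on
# whether a single trigger token is present.
def toy_predict(tokens: List[str]) -> List[float]:
    p = 0.9 if "trigger" in tokens else 0.2
    return [1.0 - p, p]

tokens = ["a", "trigger", "word"]
mask = [0, 1, 0]  # human rationale: only "trigger" justifies the label
print(comprehensiveness(toy_predict, tokens, mask, label=1))  # ~0.7
print(sufficiency(toy_predict, tokens, mask, label=1))        # ~0.0
```

Plausibility, by contrast, compares model-produced attention or attribution scores against these human rationale masks (e.g., via token-level F1 or AUPRC), asking whether the model's explanation looks right to people rather than whether it reflects the model's actual computation.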