Paper Title
Hidden behind the obvious: misleading keywords and implicitly abusive language on social media
Paper Authors
Paper Abstract
While social media offers freedom of self-expression, abusive language carries significant negative social impact. Driven by the importance of the issue, research in the automated detection of abusive language has witnessed growth and improvement. However, these detection models display a reliance on strongly indicative keywords, such as slurs and profanity. This means that they can erroneously (1a) miss abuse that lacks such keywords or (1b) flag non-abusive text that contains them, and that (2) they perform poorly on unseen data. Despite the recognition of these problems, gaps and inconsistencies remain in the literature. In this study, we analyse the impact of keywords in detail, from dataset construction to model behaviour, with a focus on how models make mistakes on (1a) and (1b), and how (1a) and (1b) interact with (2). Through this analysis, we provide suggestions for future research to address all three problems.