Paper Title
Identifying and Characterizing Behavioral Classes of Radicalization within the QAnon Conspiracy on Twitter
Paper Authors
Paper Abstract
Social media provide a fertile ground where conspiracy theories and radical ideas can flourish, reach broad audiences, and sometimes lead to hate or violence beyond the online world itself. QAnon represents a notable example of a political conspiracy that started out on social media but turned mainstream, in part due to public endorsement by influential political figures. Nowadays, QAnon conspiracies often appear in the news, are part of political rhetoric, and are espoused by significant swaths of people in the United States. It is therefore crucial to understand how such a conspiracy took root online, and what led so many social media users to adopt its ideas. In this work, we propose a framework that exploits both social interaction and content signals to uncover evidence of user radicalization or support for QAnon. Leveraging a large dataset of 240M tweets collected in the run-up to the 2020 US Presidential election, we define and validate a multivariate metric of radicalization. We use it to separate users into distinct, naturally emerging classes of behavior associated with radicalization processes, from self-declared QAnon supporters to hyper-active conspiracy promoters. We also analyze the impact of Twitter's moderation policies on the interactions among different classes: we discover aspects of moderation that succeed, yielding a substantial reduction in the endorsement received by hyper-active QAnon accounts. But we also uncover where moderation fails, showing how QAnon content amplifiers are not deterred or affected by Twitter's intervention. Our findings refine our understanding of online radicalization processes, reveal effective and ineffective aspects of moderation, and call for further investigation of the role social media play in the spread of conspiracies.
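The abstract describes combining social interaction and content signals into a multivariate measure and then separating users into behavioral classes. The sketch below is an illustrative toy version of that idea only, not the paper's actual metric: the feature names (`qanon_hashtag_ratio`, `qanon_retweet_ratio`, `tweets_per_day`), the normalization, and the use of k-means clustering are all assumptions introduced here for demonstration.

```python
# Illustrative sketch: a toy multivariate signal built from content and
# interaction features, grouped into behavioral classes via clustering.
# Feature definitions and the clustering choice are assumptions for this
# example; they do not reproduce the metric defined in the paper.
from dataclasses import dataclass

import numpy as np
from sklearn.cluster import KMeans


@dataclass
class UserSignals:
    """Hypothetical per-user signals extracted from a tweet dataset."""
    user_id: str
    qanon_hashtag_ratio: float   # content signal: share of tweets with QAnon-related hashtags
    qanon_retweet_ratio: float   # interaction signal: share of retweets amplifying known QAnon accounts
    tweets_per_day: float        # posting intensity


def feature_matrix(users: list[UserSignals]) -> np.ndarray:
    """Stack per-user signals into a multivariate feature matrix."""
    return np.array(
        [[u.qanon_hashtag_ratio, u.qanon_retweet_ratio, u.tweets_per_day] for u in users]
    )


def behavioral_classes(users: list[UserSignals], n_classes: int = 3) -> dict[str, int]:
    """Assign each user to one of n_classes data-driven behavioral groups."""
    X = feature_matrix(users)
    # Standardize each column so no single signal dominates the clustering.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(X)
    return {u.user_id: int(label) for u, label in zip(users, labels)}


if __name__ == "__main__":
    toy_users = [
        UserSignals("ordinary_user", 0.00, 0.01, 3.0),
        UserSignals("casual_sharer", 0.05, 0.10, 8.0),
        UserSignals("hyperactive_promoter", 0.60, 0.75, 120.0),
    ]
    print(behavioral_classes(toy_users))
```

In this toy setup, the "hyper-active promoter" profile separates from ordinary users simply because its content and interaction signals are jointly extreme; the paper's actual classes emerge from its validated metric rather than from this ad-hoc clustering.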