Paper Title
Fake news detection using parallel BERT deep neural networks
Paper Authors
Paper Abstract
Fake news is a growing challenge for social networks and media. The detection of fake news has been a problem for many years, but with the evolution of social networks and the increasing speed of news dissemination, it has attracted renewed attention in recent years. There are several approaches to solving this problem, one of which is to detect fake news from its text style using deep neural networks. In recent years, one of the most widely used forms of deep neural networks for natural language processing has been transfer learning with Transformers. BERT is one of the most promising Transformer models, outperforming other models on many NLP benchmarks. In this article, we introduce MWPBert, which uses two parallel BERT networks to perform veracity detection on full-text news articles. One BERT network encodes the news headline, and the other encodes the news body. Since the input length of a BERT network is limited and fixed, and the news body is usually a long text, we cannot feed the whole news text into BERT. Therefore, using the MaxWorth algorithm, we select the part of the news text that is most valuable for fact-checking and feed it into the BERT network. Finally, we pass the outputs of the two BERT networks to an output network that classifies the news. Experimental results show that the proposed model outperforms previous models in terms of accuracy and other performance measures.
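
To make the parallel architecture concrete, the following is a minimal Python sketch of a two-encoder classifier in the spirit of the abstract, using the Hugging Face transformers library and PyTorch. It assumes pretrained bert-base-uncased weights for both encoders and a simple feed-forward output head; the function select_worthy_span is a hypothetical stand-in for the paper's MaxWorth selection step, whose details are not given in the abstract, so here it merely truncates the body text.

    # Minimal sketch of a parallel-BERT veracity classifier (not the authors' code).
    # Assumes: transformers + torch installed, bert-base-uncased weights,
    # and a placeholder for the MaxWorth body-selection step.
    import torch
    import torch.nn as nn
    from transformers import BertModel, BertTokenizer

    MAX_LEN = 512  # BERT's fixed input limit is why only part of the body is encoded


    def select_worthy_span(body_text: str, max_words: int = 400) -> str:
        """Hypothetical stand-in for MaxWorth: simply truncates the body,
        whereas the paper selects the passage most valuable for fact-checking."""
        return " ".join(body_text.split()[:max_words])


    class ParallelBertClassifier(nn.Module):
        """Two BERT encoders (headline and body) feeding one output network."""

        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.headline_bert = BertModel.from_pretrained("bert-base-uncased")
            self.body_bert = BertModel.from_pretrained("bert-base-uncased")
            hidden = self.headline_bert.config.hidden_size  # 768 for bert-base
            self.classifier = nn.Sequential(
                nn.Linear(2 * hidden, hidden),
                nn.ReLU(),
                nn.Dropout(0.1),
                nn.Linear(hidden, num_classes),
            )

        def forward(self, headline_inputs, body_inputs):
            # Use the pooled [CLS] representation from each encoder,
            # concatenate them, and classify with the output network.
            h = self.headline_bert(**headline_inputs).pooler_output
            b = self.body_bert(**body_inputs).pooler_output
            return self.classifier(torch.cat([h, b], dim=-1))


    if __name__ == "__main__":
        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        headline = "Scientists discover miracle cure"
        body = "According to the article, researchers claim that ..."

        headline_inputs = tokenizer(headline, return_tensors="pt",
                                    truncation=True, max_length=MAX_LEN)
        body_inputs = tokenizer(select_worthy_span(body), return_tensors="pt",
                                truncation=True, max_length=MAX_LEN)

        model = ParallelBertClassifier()
        logits = model(headline_inputs, body_inputs)  # shape: (1, num_classes)
        print(logits)

In this sketch the two encoders do not share weights, mirroring the "two parallel BERT networks" description; whether the original MWPBert ties or fine-tunes them jointly is not stated in the abstract.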