Towards COVID-19 fake news detection using transformer-based models.

Jawaher Alghamdi, Yuqing Lin, Suhuai Luo
Author Information
  1. Jawaher Alghamdi: School of Information and Physical Sciences, The University of Newcastle, Newcastle, Australia.
  2. Yuqing Lin: School of Information and Physical Sciences, The University of Newcastle, Newcastle, Australia.
  3. Suhuai Luo: School of Information and Physical Sciences, The University of Newcastle, Newcastle, Australia.

Abstract

The COVID-19 pandemic has resulted in a surge of fake news, creating public health risks. Detecting such news is challenging, however, especially when published stories mix true and false information, and COVID-19 fake news detection has become a critical task in natural language processing (NLP). This paper explores the effectiveness of several machine learning algorithms and of fine-tuning pre-trained transformer-based models, including Bidirectional Encoder Representations from Transformers (BERT) and COVID-Twitter-BERT (CT-BERT), for COVID-19 fake news detection. We evaluate the performance of different downstream neural network structures, such as CNN and BiGRU layers, added on top of BERT and CT-BERT with frozen or unfrozen parameters. Experiments on a real-world COVID-19 fake news dataset demonstrate that adding a BiGRU layer on top of CT-BERT yields outstanding performance, achieving a state-of-the-art F1 score of 98%. These results have significant implications for mitigating the spread of COVID-19 misinformation and highlight the potential of advanced machine learning models for fake news detection.
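The downstream structure described in the abstract (a BiGRU layer over the encoder's token representations, followed by a classifier) can be sketched as below. This is a minimal illustration, not the authors' exact configuration: the hidden size, dropout rate, and the random tensor standing in for CT-BERT's `last_hidden_state` are all assumptions for demonstration.

```python
import torch
import torch.nn as nn

class BiGRUHead(nn.Module):
    """Downstream BiGRU classification head of the kind the paper places
    on top of a (CT-)BERT encoder. Hyperparameters (hidden size 128,
    dropout 0.1) are illustrative assumptions."""

    def __init__(self, encoder_dim=768, hidden=128, n_classes=2):
        super().__init__()
        self.bigru = nn.GRU(encoder_dim, hidden,
                            batch_first=True, bidirectional=True)
        self.dropout = nn.Dropout(0.1)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, encoder_dim), e.g. the
        # last_hidden_state returned by a BERT/CT-BERT encoder.
        _, h_n = self.bigru(hidden_states)            # h_n: (2, batch, hidden)
        # Concatenate the final forward and backward GRU states.
        pooled = torch.cat([h_n[0], h_n[1]], dim=-1)  # (batch, 2 * hidden)
        return self.fc(self.dropout(pooled))          # (batch, n_classes)

# Stand-in for CT-BERT output: a random tensor shaped like its
# last_hidden_state (batch=4, seq_len=32, dim=768).
head = BiGRUHead()
logits = head(torch.randn(4, 32, 768))
print(logits.shape)  # torch.Size([4, 2])
```

In practice the random tensor would be replaced by the output of a fine-tuned (frozen or unfrozen) CT-BERT encoder, and the logits trained with cross-entropy against real/fake labels.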
