LLM-Enhanced multimodal detection of fake news.

Jingwei Wang, Ziyue Zhu, Chunxiao Liu, Rong Li, Xin Wu
Author Information
  1. Jingwei Wang: School of Humanities and Communication, Zhejiang Gongshang University, Hangzhou, China.
  2. Ziyue Zhu: School of Information and Electronic Engineering, Zhejiang Gongshang University, Hangzhou, China.
  3. Chunxiao Liu: School of Computer Science and Technology, Zhejiang Gongshang University, Hangzhou, China.
  4. Rong Li: School of Humanities and Communication, Zhejiang Gongshang University, Hangzhou, China.
  5. Xin Wu: School of Humanities and Communication, Zhejiang Gongshang University, Hangzhou, China.

Abstract

Fake news detection is growing in importance as a key topic in the information age. However, most current methods rely on pre-trained small language models (SLMs), which face significant limitations when processing news content that requires specialized knowledge, thereby constraining the effectiveness of fake news detection. To address these limitations, we propose the FND-LLM framework, which effectively combines SLMs and LLMs to leverage their complementary strengths and to explore the capabilities of LLMs in multimodal fake news detection. The FND-LLM framework integrates a textual feature branch, a visual semantic branch, a visual tampering branch, a co-attention network, a cross-modal feature branch, and a large language model branch. The textual feature branch and the visual semantic branch extract the textual and visual information of the news content, respectively, while the co-attention network refines the interrelationship between the textual and visual information. The visual tampering branch extracts image-tampering features from news images. The cross-modal feature branch enhances inter-modal complementarity through the CLIP model, while the large language model branch uses the inference capability of LLMs to provide auxiliary explanations for the detection process. Our experimental results indicate that the FND-LLM framework outperforms existing models, achieving improvements of 0.7%, 6.8%, and 1.3% in overall accuracy on Weibo, GossipCop, and PolitiFact, respectively.
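The abstract's co-attention network refines textual features against visual features and vice versa. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of a generic co-attention step, assuming dot-product affinities between text tokens and image regions and mean pooling of the refined streams (all dimensions and names are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(text_feats, img_feats):
    """Fuse text tokens (T, d) and image regions (R, d) via co-attention."""
    # affinity between every text token and every image region
    affinity = text_feats @ img_feats.T                 # (T, R)
    # each text token attends over image regions ...
    img_ctx = softmax(affinity, axis=1) @ img_feats     # (T, d)
    # ... and each image region attends over text tokens
    txt_ctx = softmax(affinity.T, axis=1) @ text_feats  # (R, d)
    # pool each refined stream and concatenate as the fused representation
    return np.concatenate([img_ctx.mean(axis=0), txt_ctx.mean(axis=0)])

rng = np.random.default_rng(0)
text = rng.normal(size=(16, 32))   # e.g. 16 text tokens, 32-dim
image = rng.normal(size=(49, 32))  # e.g. 49 image regions, 32-dim
fused = co_attention(text, image)
print(fused.shape)  # (64,)
```

In the full framework this fused vector would be combined with the tampering, cross-modal (CLIP), and LLM-branch signals before classification; that composition is beyond this sketch.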


MeSH Term

Humans
Deception
Semantics
Algorithms
Social Media

