Safeguarding authenticity for mitigating the harms of generative AI: Issues, research agenda, and policies for detection, fact-checking, and ethical AI.

Ahmed Abdeen Hamed, Malgorzata Zachara-Szymanska, Xindong Wu
Author Information
  1. Ahmed Abdeen Hamed: Network Medicine and AI, Sano Centre for Computational Medicine, Czarnowiejska 36 Building C5, 30-072 Cracow, Poland.
  2. Malgorzata Zachara-Szymanska: Faculty of International and Political Studies, Jagiellonian University, Władysława Reymonta 4, 30-059 Cracow, Poland.
  3. Xindong Wu: Zhejiang Lab, Research Center for Knowledge Engineering, Kechuang Avenue, Hangzhou, Zhejiang Province 311121, P.R. China.

Abstract

As the influence of transformer-based approaches in general and generative artificial intelligence (AI) in particular continues to expand across various domains, concerns regarding authenticity and explainability are on the rise. Here, we share our perspective on the necessity of implementing effective detection, verification, and explainability mechanisms to counteract the potential harms arising from the proliferation of AI-generated inauthentic content and science. We recognize the transformative potential of generative AI, exemplified by ChatGPT, in the scientific landscape. However, we also emphasize the urgency of addressing associated challenges, particularly in light of the risks posed by disinformation, misinformation, and unreproducible science. This perspective serves as a response to the call for concerted efforts to safeguard the authenticity of information in the age of AI. By prioritizing detection, fact-checking, and explainability policies, we aim to foster a climate of trust, uphold ethical standards, and harness the full potential of AI for the betterment of science and society.
