Accuracy of Prospective Assessments of 4 Large Language Model Chatbot Responses to Patient Questions About Emergency Care: Experimental Comparative Study.

Jonathan Yi-Shin Yau, Soheil Saadat, Edmund Hsu, Linda Suk-Ling Murphy, Jennifer S Roh, Jeffrey Suchard, Antonio Tapia, Warren Wiechmann, Mark I Langdorf
Author Information
  1. Jonathan Yi-Shin Yau: College of Natural and Agricultural Sciences, University of California - Riverside, Riverside, CA, United States.
  2. Soheil Saadat: Department of Emergency Medicine, University of California - Irvine, Orange, CA, United States.
  3. Edmund Hsu: Department of Emergency Medicine, University of California - Irvine, Orange, CA, United States.
  4. Linda Suk-Ling Murphy: Reference Department, University of California - Irvine Libraries, Irvine, CA, United States.
  5. Jennifer S Roh: Department of Emergency Medicine, Harbor-UCLA Medical Center, University of California - Los Angeles, Torrance, CA, United States.
  6. Jeffrey Suchard: Department of Emergency Medicine, University of California - Irvine, Orange, CA, United States.
  7. Antonio Tapia: Department of Emergency Medicine, University of California - Irvine, Orange, CA, United States.
  8. Warren Wiechmann: Department of Emergency Medicine, University of California - Irvine, Orange, CA, United States.
  9. Mark I Langdorf: Department of Emergency Medicine, University of California - Irvine, Orange, CA, United States.

Abstract

BACKGROUND: Recent surveys indicate that 48% of consumers actively use generative artificial intelligence (AI) for health-related inquiries. Despite widespread adoption and the potential to improve health care access, scant research has examined the quality of AI chatbot responses to questions about emergency care.
OBJECTIVE: We assessed the quality of AI chatbot responses to common emergency care questions and sought to determine qualitative differences in the responses of 4 free-access AI chatbots to 10 different serious and benign emergency conditions.
METHODS: We created 10 emergency care questions and submitted each to the free-access versions of ChatGPT 3.5 (OpenAI), Google Bard, Bing AI Chat (Microsoft), and Claude AI (Anthropic) on November 26, 2023. Five board-certified emergency medicine (EM) faculty graded each response across 8 domains: percentage accuracy, presence of dangerous information, factual accuracy, clarity, completeness, understandability, source reliability, and source relevancy. The correct, complete response to each of the 10 questions was compiled by an EM resident physician from reputable, scholarly emergency medicine references. To assess readability, we took the Flesch-Kincaid Grade Level of each response from the readability statistics built into Microsoft Word. Differences between chatbots were assessed with the chi-square test.
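A minimal sketch of the two quantitative methods named above, assuming Python with scipy: it reimplements the published Flesch-Kincaid Grade Level formula (the study itself read this value from Microsoft Word's built-in readability statistics, whose syllable counter differs from the crude heuristic here) and runs a chi-square test of independence on a hypothetical contingency table; none of the counts below are the study's data.

import re
from scipy.stats import chi2_contingency

def count_syllables(word):
    # Crude vowel-group heuristic; Microsoft Word uses its own counter.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Published formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / max(1, len(words)) - 15.59

print(round(flesch_kincaid_grade("Call 911 if chest pain lasts more than five minutes."), 1))

# Hypothetical 2x4 table: responses judged to contain dangerous advice (row 1)
# vs not (row 2), one column per chatbot.
observed = [[2, 1, 3, 1],
            [8, 9, 7, 9]]
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, df={dof}, P={p:.3f}")

scipy.stats.chi2_contingency returns the statistic, P value, degrees of freedom, and expected counts in one call, matching the form of the between-chatbot comparison of dangerous-information rates reported in the abstract (P=.24).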
RESULTS: Five EM faculty scored each of the 4 chatbots' responses to the 10 clinical questions across 8 domains, yielding 10 × 8 × 5 = 400 assessments per chatbot. Together, the 4 chatbots performed best in clarity and understandability (both 85%), intermediately in accuracy and completeness (both 50%), and poorly (10%) in source relevance and reliability (sources were mostly unreported). Chatbots included dangerous information in 5% to 35% of responses, with no statistical difference between chatbots on this metric (P=.24). ChatGPT, Google Bard, and Claude AI performed similarly across 6 of 8 domains. Only Bing AI Chat performed better, identifying or citing relevant sources more often (40% vs 0%-10% for the others). The Flesch-Kincaid Grade Level was 7.7-8.9 for all chatbots except ChatGPT (10.8); all were too advanced for the average emergency patient. Responses included both dangerous advice (eg, starting cardiopulmonary resuscitation with no pulse check) and generally inappropriate advice (eg, loosening the collar to improve breathing without evidence of airway compromise).
CONCLUSIONS: AI chatbots, though ubiquitous, have significant deficiencies in EM patient advice, despite relatively consistent performance. Information for when to seek urgent or emergent care is frequently incomplete and inaccurate, and patients may be unaware of misinformation. Sources are not generally provided. Patients who use AI to guide health care decisions assume potential risks. AI chatbots for health should be subject to further research, refinement, and regulation. We strongly recommend proper medical consultation to prevent potential adverse outcomes.

MeSH Terms

Humans
Emergency Medical Services
Prospective Studies
Artificial Intelligence
Language
