- Blake Williams: R. Ken Coit College of Pharmacy, University of Arizona, Tucson, AZ, USA.
- Brian L Erstad: Department of Pharmacy Practice and Science, R. Ken Coit College of Pharmacy, University of Arizona, Tucson, AZ, USA.
PURPOSE: To evaluate the recommendations of 4 publicly available artificial intelligence (AI) programs against recommendations in current clinical practice guidelines (CPGs) focused on critically ill adults.
METHODS: This study evaluated 4 publicly available large language models (LLMs): ChatGPT 4.0, Microsoft Copilot, Google Gemini Version 1.5, and Meta AI. Each AI chatbot was prompted with medication-related questions drawn from 6 CPGs published by the Society of Critical Care Medicine (SCCM) and was asked to provide references supporting its recommendations. Responses were categorized as correct, partially correct, not correct, or "other" (eg, the LLM answered a question that was not asked).
RESULTS: In total, 43 responses were recorded for each AI program, with a significant difference (P = 0.007) in response types across programs. Microsoft Copilot had the highest proportion of correct recommendations, followed by Meta AI, ChatGPT 4.0, and Google Gemini. All 4 LLMs gave some incorrect recommendations, with Google Gemini having the most incorrect responses, followed closely by ChatGPT 4.0. Microsoft Copilot had the most responses in the "other" category (n = 5; 11.63%). On average, ChatGPT 4.0 provided the most references per question (mean, 4.54), followed by Google Gemini (mean, 3.43), Meta AI (mean, 3.06), and Microsoft Copilot (mean, 2.04).
CONCLUSION: Although the evaluated AI programs showed potential for future utility to pharmacists with further development and refinement, they did not consistently provide accurate medication-related recommendations when answering clinical questions such as those pertaining to critical care CPGs.