AI in Radiology: Navigating Medical Responsibility.

Maria Teresa Contaldo, Giovanni Pasceri, Giacomo Vignati, Laura Bracchi, Sonia Triggiani, Gianpaolo Carrafiello
Author Information
  1. Maria Teresa Contaldo: Postgraduation School in Radiodiagnostics, University of Milan, 20122 Milan, Italy.
  2. Giovanni Pasceri: Information Society Law Center, Department "Cesare Beccaria", University of Milan, 20122 Milan, Italy.
  3. Giacomo Vignati: Postgraduation School in Radiodiagnostics, University of Milan, 20122 Milan, Italy.
  4. Laura Bracchi: Cerba Healthcare Italia, 20139 Milan, Italy.
  5. Sonia Triggiani: Postgraduation School in Radiodiagnostics, University of Milan, 20122 Milan, Italy.
  6. Gianpaolo Carrafiello: Postgraduation School in Radiodiagnostics, University of Milan, 20122 Milan, Italy.

Abstract

The application of Artificial Intelligence (AI) facilitates medical activities by automating routine tasks for healthcare professionals. AI augments but does not replace human decision-making, which complicates the attribution of legal responsibility. This study investigates the legal challenges associated with the medical use of AI in radiology, analyzing relevant case law and literature, with a specific focus on the attribution of professional liability. In the event of an error, primary responsibility remains with the physician, with possible shared liability with developers under the framework of medical device liability. If the physician disagrees with the AI's findings, they must not only act on but also justify their choice according to prevailing professional standards. Regulations must balance the autonomy of AI systems with the need for responsible clinical practice. Effective use of AI-generated evaluations requires knowledge of data dynamics and of metrics such as sensitivity and specificity, even without a clear understanding of the underlying algorithms: the opacity of certain systems (the so-called "black box" phenomenon) raises concerns about the interpretation and practical usability of results for both physicians and patients. AI is redefining healthcare, underscoring the need for robust liability frameworks, meticulous system updates, and transparent patient communication regarding AI involvement.
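
As an illustrative aside, the abstract treats sensitivity and specificity as the minimum quantitative literacy a physician needs when weighing AI-generated findings. The short Python sketch below shows how the two metrics are derived from a confusion matrix of AI outputs compared against a reference standard; the function name and the counts are hypothetical and are not taken from the study.

# Minimal sketch: deriving sensitivity and specificity from a confusion matrix.
# The counts are hypothetical and stand in for AI predictions compared against
# a reference standard (e.g., radiologist consensus or histopathology).

def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) for the given confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true positive rate: diseased cases correctly flagged
    specificity = tn / (tn + fp)  # true negative rate: healthy cases correctly cleared
    return sensitivity, specificity

if __name__ == "__main__":
    # Hypothetical counts from an AI triage tool evaluated on 1,000 studies.
    tp, fn, tn, fp = 85, 15, 810, 90
    sens, spec = sensitivity_specificity(tp, fn, tn, fp)
    print(f"Sensitivity: {sens:.2%}")  # 85.00%
    print(f"Specificity: {spec:.2%}")  # 90.00%

In practice, figures of this kind condition how much weight a reading physician can reasonably give an AI flag, which is why the abstract treats them as a prerequisite for responsible use even when the underlying model remains opaque.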

Keywords


