Risk of Injury in Moral Dilemmas With Autonomous Vehicles.

Celso M de Melo, Stacy Marsella, Jonathan Gratch
Author Information
  1. Celso M de Melo: CCDC US Army Research Laboratory, Playa Vista, CA, United States.
  2. Stacy Marsella: College of Computer and Information Science, Northeastern University, Boston, MA, United States.
  3. Jonathan Gratch: Institute for Creative Technologies, University of Southern California, Playa Vista, CA, United States.

Abstract

As autonomous machines, such as automated vehicles (AVs) and robots, become pervasive in society, they will inevitably face moral dilemmas in which they must make decisions that risk injuring humans. However, prior research has framed these dilemmas in starkly simple terms, i.e., framing decisions as life-and-death and neglecting the influence of risk of injury to the involved parties on the outcome. Here, we focus on this gap and present experimental work that systematically studies the effect of risk of injury on the decisions people make in these dilemmas. In four experiments, participants were asked to program their AVs to either save five pedestrians, which we refer to as the utilitarian choice, or save the driver, which we refer to as the nonutilitarian choice. The results indicate that most participants made the utilitarian choice, but that this choice was moderated in important ways by perceived risk to the driver and to the pedestrians. As a second contribution, we demonstrate the value of formulating AV moral dilemmas in a game-theoretic framework that considers the possible influence of others' behavior. In the fourth experiment, we show that participants were more (less) likely to make the utilitarian choice the more utilitarian (nonutilitarian) other drivers behaved; furthermore, unlike the game-theoretic prediction that decision-makers inevitably converge to nonutilitarianism, we found significant evidence of utilitarianism. We discuss theoretical implications for our understanding of human decision-making in moral dilemmas, as well as practical guidelines for the design of autonomous machines that solve these dilemmas while remaining likely to be adopted in practice.
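The game-theoretic prediction mentioned above can be illustrated with a toy model. The payoffs below are purely hypothetical (they are not taken from the paper): we simply assume that programming one's own AV to save the driver yields a higher personal payoff regardless of what others do, while everyone also benefits, as an occasional pedestrian, from the fraction of *other* drivers who choose the utilitarian setting. Under these assumptions the nonutilitarian choice is a dominant strategy, so best-response dynamics converge to universal nonutilitarianism, which is the equilibrium prediction the abstract contrasts with the observed behavior.

```python
# Toy symmetric game sketch (hypothetical payoffs, not from the paper).
# Each driver programs an AV as utilitarian (U: save the pedestrians) or
# nonutilitarian (N: save the driver). A driver's expected payoff depends
# on their own choice and on the fraction p of other drivers choosing U,
# because everyone is sometimes a pedestrian near other people's AVs.

def payoff(own_utilitarian: bool, p_other_utilitarian: float) -> float:
    # Hypothetical numbers: as a driver, N protects you (+2.0 vs -1.0);
    # as a pedestrian, you benefit when OTHERS are utilitarian (+3.0 * p).
    driver_term = -1.0 if own_utilitarian else 2.0
    pedestrian_term = 3.0 * p_other_utilitarian
    return driver_term + pedestrian_term

def best_response_dynamics(p0: float, steps: int = 50) -> float:
    """Iterate: everyone switches to whichever choice pays more given p."""
    p = p0
    for _ in range(steps):
        p = 1.0 if payoff(True, p) > payoff(False, p) else 0.0
    return p

# The pedestrian term is identical for both choices, so N strictly
# dominates U here and the dynamics reach p = 0 (all nonutilitarian)
# from any starting fraction.
print(best_response_dynamics(0.9))  # 0.0
print(best_response_dynamics(0.1))  # 0.0
```

The point of the sketch is structural rather than quantitative: whenever the driver-protection term dominates individually while the benefit of utilitarian AVs accrues only through others' choices, the dilemma has the shape of a social dilemma, and the experiments' finding of persistent utilitarian choices departs from this equilibrium prediction.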

