Evaluating gaze behaviors as pre-touch reactions for virtual agents.

Dario Alfonso Cuello Mejía, Hidenobu Sumioka, Hiroshi Ishiguro, Masahiro Shiomi
Author Information
  1. Dario Alfonso Cuello Mejía: Interaction Science Laboratories, ATR, Kyoto, Japan.
  2. Hidenobu Sumioka: Hiroshi Ishiguro Laboratories, ATR, Kyoto, Japan.
  3. Hiroshi Ishiguro: Intelligent Robotics Laboratory, Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, Suita, Osaka, Japan.
  4. Masahiro Shiomi: Interaction Science Laboratories, ATR, Kyoto, Japan.

Abstract

Background: The reactions of human-looking agents to nonverbal communication cues significantly affect how the agents are perceived and directly shape the interaction. Several studies have evaluated such reactions across different types of interaction, but few have addressed before-touch situations or how the agent's reaction is perceived in them. In particular, little is known about how pre-touch reactions affect an interaction, what role gaze behavior plays in a before-touch context, and how it shapes participants' perceptions and preferences. The present study investigated the factors that define pre-touch reactions in a humanoid avatar in a virtual reality environment and how those reactions influence people's perceptions of the avatar.
Methods: We conducted two experiments to compare approaches from inside and outside the field of view (FoV), implementing four gaze behaviors: face-looking, hand-looking, face-then-hand looking, and hand-then-face looking. We also evaluated participants' preferences in terms of perceived human-likeness, naturalness, and likeability. Experiment 1 examined the number of gaze steps, their order, and gender; Experiment 2 examined the number and order of the gaze steps.
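For illustration only, the sketch below shows one way the four gaze behaviors could be expressed as timed sequences of gaze targets for an avatar controller. This is a minimal, hypothetical Python sketch, not the study's implementation: the target labels, step durations, and function names are assumptions introduced here.

```python
# Hypothetical sketch (not the authors' implementation): the four pre-touch
# gaze behaviors compared in the experiments, expressed as timed gaze-target
# sequences. Labels and durations are illustrative assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class GazeStep:
    target: str       # "face" or "hand" of the approaching person
    duration: float   # seconds to hold this gaze target


# One-step and two-step behaviors: face, hand, face-then-hand, hand-then-face.
BEHAVIORS = {
    "face":           [GazeStep("face", 2.0)],
    "hand":           [GazeStep("hand", 2.0)],
    "face_then_hand": [GazeStep("face", 1.0), GazeStep("hand", 1.0)],
    "hand_then_face": [GazeStep("hand", 1.0), GazeStep("face", 1.0)],
}


def current_gaze_target(steps: List[GazeStep], t: float) -> str:
    """Return the gaze target the avatar should look at t seconds after the
    approaching hand is detected; the last target is held once the sequence
    is exhausted."""
    elapsed = 0.0
    for step in steps:
        elapsed += step.duration
        if t < elapsed:
            return step.target
    return steps[-1].target


if __name__ == "__main__":
    # Example: the two-step hand-then-face behavior sampled over time.
    for t in (0.2, 0.8, 1.5):
        print(t, current_gaze_target(BEHAVIORS["hand_then_face"], t))
```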
Results: A two-step gaze behavior was perceived as more human-like and more natural for approaches from both inside and outside the field of view, and when only a one-step gaze movement was used, a face-first looking behavior was preferred over a hand-first looking behavior for approaches from inside the field of view. Regarding the location from which the approach was performed, our results show that a relatively complex gaze movement that includes a face-looking behavior is fundamental to improving perceptions of agents in before-touch situations.
Discussion: Including gaze behavior as part of a potential touch interaction helps in developing more responsive avatars and provides an additional communication channel that increases immersion and enhances the experience in virtual reality environments, extending the frontiers of haptic interaction and complementing previously studied nonverbal communication cues.
