Evaluation of Unimodal and Multimodal Communication Cues for Attracting Attention in Human–Robot Interaction

Springer Science and Business Media LLC - Volume 7 - Pages 89-96 - 2014
Elena Torta1, Jim van Heumen1, Francesco Piunti2, Luca Romeo2, Raymond Cuijpers1
1Eindhoven University of Technology, Eindhoven, The Netherlands
2Università Politecnica delle Marche, Ancona, Italy

Abstract

One of the most common tasks of a robot companion in the home is communication. To initiate an information exchange with its human partner, the robot first needs to attract the human's attention. This paper presents the results of two user studies (N = 12) evaluating the effectiveness of unimodal and multimodal communication cues for attracting attention. Results showed that unimodal communication cues involving sound produced the fastest reaction times. Contrary to expectations, multimodal communication cues resulted in longer reaction times than the unimodal communication cue that produced the shortest reaction time.
