Effects of anticipatory perceptual simulation on practiced human-robot tasks
Abstract
With the aim of attaining increased fluency and efficiency in human-robot teams, we have developed a cognitive architecture for robotic teammates based on the neuro-psychological principles of anticipation and perceptual simulation through top-down biasing. An instantiation of this architecture was implemented on a non-anthropomorphic robotic lamp performing a repetitive human-robot collaborative task. In a human-subject study in which the robot works on a joint task with untrained subjects, we find our approach to be significantly more efficient and fluent than a comparable system without anticipatory perceptual simulation. We also show that the robot and the human improve their relative contributions at a similar rate, possibly playing a part in the human's "like-me" perception of the robot. In self-report, we find significant differences between the two conditions in the sense of team fluency, the team's improvement over time, the robot's contribution to efficiency and fluency, the robot's intelligence, and the robot's adaptation to the task. We also find differences in verbal attitudes towards the robot: most notably, subjects working with the anticipatory robot attribute more human qualities to it, such as gender and intelligence, as well as credit for success; we also find increased self-blame and self-deprecation in these subjects' responses. We believe that this work lays the foundation for modeling and evaluating artificial practice for robots working in collaboration with humans.