Enhancing Emotional Support: The Effect of a Robotic Object on Human–Human Support Quality
Springer Science and Business Media LLC - Volume 14, Pages 257-276, 2021
Hadas Erel, Denis Trayman, Chen Levy, Adi Manor, Mario Mikulincer, Oren Zuckerman
Emotional support in the context of psychological caregiving is an important aspect of human–human interaction that can significantly increase well-being. In this study, we tested whether non-verbal gestures of a non-humanoid robot can increase emotional support in a human–human interaction. Sixty-four participants were invited in pairs to take turns disclosing a personal problem and responding in a supportive manner. In the experimental condition, the robotic object performed empathic gestures, modeled on the behavior of a trained therapist. In the baseline condition, the robotic object performed up-and-down gestures without directing attention towards the participants. Findings show that the robot’s empathy-related gestures significantly improved the emotional support quality provided by one participant to another, as indicated by both subjective and objective measures. The non-humanoid robot was perceived as peripheral to the natural human–human interaction and influenced participants’ behavior without interfering. We conclude that non-humanoid gestures of a robotic object can enhance the quality of emotional support in intimate human–human interaction.
Projective Anthropomorphism as a Dialogue with Ourselves
Springer Science and Business Media LLC - Volume 14, Pages 2063-2069, 2021
Raya A. Jones
This paper introduces an original concept, projective anthropomorphism, for exploring a psychological dimension that is irreducible to the forms of anthropomorphism investigated in cognitive science and social robotics. Projective anthropomorphism is an unconscious bias towards anticipating humanlike characteristics in robots. An overview of the variety of ways in which projection has been conceptualised in psychology and psychoanalysis is provided before discussing implications for theorising projective anthropomorphism. The proposed concept alludes to the projection of existential anxieties and desires onto myths, legends, linguistic tropes, and science-fiction motifs of humanoid automata. Such motifs and their associated narratives populate contemporary popular culture and feed into social representations of robots. The importance of considering projective anthropomorphism lies in the extent to which its phenomena channel people’s expectations of and attitudes towards technological artefacts, as well as steering technological possibilities.
Reducing Computational Cost During Robot Navigation and Human–Robot Interaction with a Human-Inspired Reinforcement Learning Architecture
Springer Science and Business Media LLC - Volume 15, Pages 1297-1323, 2022
Rémi Dromnelle, Erwan Renaudo, Mohamed Chetouani, Petros Maragos, Raja Chatila, Benoît Girard, Mehdi Khamassi
We present a new neuro-inspired reinforcement learning architecture for robot online learning and decision-making in both social and non-social scenarios. The goal is to take inspiration from the way humans dynamically and autonomously adapt their behavior according to variations in their own performance while minimizing cognitive effort. Following computational neuroscience principles, the architecture combines model-based (MB) and model-free (MF) reinforcement learning (RL). The main novelty is a meta-controller that arbitrates between the two, selecting the current learning strategy according to a trade-off between efficiency and computational cost. The MB strategy, which builds a model of the long-term effects of actions and uses this model to decide through dynamic programming, enables flexible adaptation to task changes at the expense of high computational cost. The MF strategy is less flexible but roughly 1000 times less costly, and learns by observing MB decisions. We test the architecture in three experiments: a navigation task in a real environment with task changes (wall configuration changes, goal location changes); a simulated object manipulation task under human teaching signals; and a simulated human–robot cooperation task to tidy up objects on a table. We show that our human-inspired strategy-coordination method enables the robot to maintain optimal performance in terms of reward and computational cost, compared to an MB expert alone, which achieves the best reward but at the highest computational cost. We also show that the method copes with sudden changes in the environment, goal changes, and changes in the behavior of the human partner during interaction tasks. The robots that performed these experiments, whether real or virtual, all used the same set of parameters, demonstrating the generality of the method.
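The meta-controller's arbitration idea can be sketched as follows. This is a minimal illustration, not the paper's actual criterion or API: the `Expert` class, the `quality` attribute (a confidence proxy for the expert's current policy), and the linear quality-minus-weighted-cost score are all assumptions made for clarity.

```python
class Expert:
    """Stub learning expert exposing a decision quality and a computational cost.

    Hypothetical interface for illustration only: `quality` stands in for some
    confidence measure over the expert's current policy, `cost` for its
    per-decision computational expense.
    """
    def __init__(self, name, cost):
        self.name, self.cost = name, cost
        self.quality = 0.0

def arbitrate(experts, cost_weight=1e-4):
    """Select the expert maximizing decision quality minus weighted compute cost."""
    return max(experts, key=lambda e: e.quality - cost_weight * e.cost)

mb = Expert("MB", cost=1000.0)  # planner: expensive per decision
mf = Expert("MF", cost=1.0)     # habitual controller: ~1000x cheaper (per the abstract)

# Early in learning, or just after a task change: MB is far more reliable,
# so its quality advantage outweighs its cost.
mb.quality, mf.quality = 0.9, 0.2
early_choice = arbitrate([mb, mf]).name  # "MB"

# Once MF has learned by observing MB decisions, qualities converge,
# and the cheap MF strategy wins the trade-off.
mb.quality, mf.quality = 0.9, 0.85
late_choice = arbitrate([mb, mf]).name   # "MF"
```

The point of the sketch is the switching dynamic: an expensive planner dominates when it is clearly better, and control drifts to the cheap habitual learner as the two converge, which mirrors the efficiency/cost trade-off described in the abstract.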
Humanoid Robots – Artificial. Human-like. Credible? Empirical Comparisons of Source Credibility Attributions Between Humans, Humanoid Robots, and Non-human-like Devices
Springer Science and Business Media LLC - Volume 14, Number 6, Pages 1397-1411, 2022
Marcel Finkel, Nicole C. Krämer
Source credibility is known as an important prerequisite for effective communication (Pornpitakpan, 2004). Nowadays not only humans but also technological devices such as humanoid robots can communicate with people and can likewise be rated credible or not, as reported by Fogg and Tseng (1999). While research related to the machine heuristic suggests that machines are rated more credible than humans (Sundar, 2008), an opposite effect in favor of humans’ information is thought to occur when algorithmically produced information is wrong (Dietvorst, Simmons, and Massey, 2015). However, humanoid robots may be judged more in line with humans because of their anthropomorphically embodied exterior, compared to non-human-like technological devices. To examine these differences in credibility attributions, a 3 (source type) x 2 (information correctness) online experiment was conducted in which 338 participants were asked to rate the credibility of a human, a humanoid robot, or a non-human-like device based on either correct or false communicated information. This between-subjects approach revealed that humans were rated more credible than social robots and smart speakers in terms of trustworthiness and goodwill. Additionally, results show that people attributed lower theory of mind abilities to robots and smart speakers than to humans, and that these attributions partly influence the attribution of credibility, alongside people’s reliance on technology, attributed anthropomorphism, and morality. Furthermore, no main or moderation effect of the information’s correctness was found. In sum, these insights point to a human superiority effect and offer relevant insights into the process of attributing credibility to humanoid robots.
Robot Likeability and Reciprocity in Human–Robot Interaction: Using the Ultimatum Game to Determine Reciprocal Likeable Robot Strategies
Springer Science and Business Media LLC - Volume 13, Pages 851-862, 2020
Eduardo Benítez Sandoval, Jürgen Brandstatter, Utku Yalcin, Christoph Bartneck
Among the factors that affect likeability, reciprocal response towards the other party is one of the multiple variables involved in social interaction. However, in HRI, likeability is constrained to robot behavior, since mass-produced robots will have identical physical embodiments. A reciprocal robot response is desirable in order to design robots as likeable agents for humans. In this paper, we discuss how perceived likeability in robots is a crucial multi-factorial phenomenon that strongly influences interactions based on reciprocal robot decisions. Our general research question is: What type of reciprocal robot behavior is perceived as likeable by humans when the robot’s decisions affect them? We designed a between/within $$2 \times 2 \times 2$$ experiment in which participants played our novel Alternated Repeated Ultimatum Game (ARUG) for 20 rounds. The robot used in the experiment was a NAO robot employing four different reciprocal strategies. Our results suggest that participants tend to reciprocate more towards the robot that starts the game using the pure reciprocal strategy than towards robots using the combined strategies (Tit for Tat, Inverse Tit for Tat, Reciprocal Offer, and Inverse Reciprocal Offer). These results confirm that the Norm of Reciprocity applies in HRI when participants play ARUG with social robots. However, the human reciprocal response also depends on the profits gained in the game and on who starts the interaction. Similarly, the likeability score is affected by robot strategies such as the reciprocal (Robot A) and generous (Robot C) ones, and there are some discrepancies in likeability scores between the reciprocal and the generous robot behavior.
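Two of the strategy families named above can be sketched as proposer rules in an alternated ultimatum game. This is an illustrative sketch only: the function names, the 10-unit stake, and the even-split opening offer are assumptions, not the paper's implementation or parameters.

```python
TOTAL = 10  # units split each round (assumed stake; the study's amounts may differ)

def tit_for_tat(partner_last_offer, opening=TOTAL // 2):
    """Tit for Tat proposer: mirror whatever the partner offered us last round."""
    return opening if partner_last_offer is None else partner_last_offer

def inverse_tit_for_tat(partner_last_offer, opening=TOTAL // 2):
    """Inverse Tit for Tat proposer: offer the complement of the partner's last offer."""
    return opening if partner_last_offer is None else TOTAL - partner_last_offer

opening_offer = tit_for_tat(None)      # round 1, no history: even split
mirrored = tit_for_tat(3)              # partner offered 3 of 10 -> offer 3 back
complemented = inverse_tit_for_tat(3)  # partner offered 3 of 10 -> offer 7
```

In an alternated repeated game the roles of proposer and responder swap each round, so each rule conditions only on the partner's most recent offer; a "pure reciprocal" robot would apply the mirroring rule throughout rather than mixing rules.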
Effects of Experience and Workplace Culture in Human-Robot Team Interaction in Robotic Surgery: A Case Study
Springer Science and Business Media LLC - Volume 5, Pages 75-88, 2012
S. Cunningham, A. Chellali, I. Jaffre, J. Classe, C. G. L. Cao
Robots are being used in the operating room to aid in surgery, prompting changes to workflow and adaptive behavior by the users. This case study presents a methodology for examining human-robot team interaction in a complex environment, along with results from its application to a study of the effects of experience and workplace culture on human-robot team interaction in the operating room. The analysis of verbal and non-verbal events in robotic surgery in two different surgical teams (one in the US and one in France) revealed differences in workflow, timeline, roles, and communication patterns as a function of experience and workplace culture. Longer preparation times and more verbal exchanges related to uncertainty in use of the robotic equipment were found for the French team, who also happened to be less experienced. This study offers an effective method for studying human-robot team interaction and has implications for the future design of, and training for, teamwork with robotic systems in other complex work environments.