Three Responses to Anthropomorphism in Social Robotics: Towards a Critical, Relational, and Hermeneutic Approach

Springer Science and Business Media LLC - Volume 14 - Pages 2049-2061 - 2021
Mark Coeckelbergh1
1Universität Wien, Vienna, Austria

Abstract

Both designers and users of social robots tend to anthropomorphize robots. Focusing on the question of how to conceptualize the relation between robots and humans, this paper first outlines two opposite philosophical views regarding this relation, which are connected to various normative responses to anthropomorphism and anthropomorphization. It then argues for a third view: navigating between what it calls “naïve instrumentalism” and “uncritical posthumanism”, it develops a hermeneutic, relational, and critical approach. Paradoxically, by unpacking the human dimension of robotics in its use and development, this view enables a critical discussion of anthropomorphizing robots. At the same time, and again somewhat paradoxically, it avoids a naïve instrumentalist position by taking robots’ role as an instrument in a larger con-technology seriously. As such, the third view questions the dualism assumed in the debate. The paper then explores what this means for the field of social robotics and the education of computer scientists and engineers. It proposes a reform based on a relational understanding of the field itself and offers suggestions for the role of users-citizens.
