Moral zombies: why algorithms are not moral agents

AI & SOCIETY - Volume 36, Issue 2 - Pages 487-497 - 2021
Carissa Véliz1
1Institute for Ethics in AI, Faculty of Philosophy, Hertford College, University of Oxford, Oxford, UK

Abstract

In philosophy of mind, zombies are imaginary creatures that are exact physical duplicates of conscious subjects for whom there is no first-personal experience. Zombies are meant to show that physicalism—the theory that the universe is made up entirely out of physical components—is false. In this paper, I apply the zombie thought experiment to the realm of morality to assess whether moral agency is something independent from sentience. Algorithms, I argue, are a kind of functional moral zombie, such that thinking about the latter can help us better understand and regulate the former. I contend that the main reason why algorithms can be neither autonomous nor accountable is that they lack sentience. Moral zombies and algorithms are incoherent as moral agents because they lack the necessary moral understanding to be morally responsible. To understand what it means to inflict pain on someone, it is necessary to have experiential knowledge of pain. At most, for an algorithm that feels nothing, ‘values’ will be items on a list, possibly prioritised in a certain way according to a number that represents weightiness. But entities that do not feel cannot value, and beings that do not value cannot act for moral reasons.
