There Is No Techno-Responsibility Gap
Abstract
In a landmark essay, Andreas Matthias claimed that current developments in autonomous, artificially intelligent (AI) systems are creating a so-called responsibility gap, which is allegedly ever-widening and stands to undermine both the moral and legal frameworks of our society. But how severe is the threat posed by emerging technologies? In fact, a great number of authors have indicated that the fear has been thoroughly instilled. The most pessimistic are calling for a drastic scaling-back of, or a complete moratorium on, AI systems, while the optimists aim to show that the gap can nonetheless be bridged. Contrary to both camps, I argue against the prevailing assumption that there is a technology-based responsibility gap. I show how moral responsibility is a dynamic and flexible process, one that can effectively encompass emerging technological entities.
References
Archard, D. (2013). Dirty hands and the complicity of the democratic public. Ethical Theory and Moral Practice, 16(4), 777–790.
Asaro, P. (2012). On banning autonomous weapon systems: Human rights, automation, and the dehumanization of lethal decision-making. International Review of the Red Cross, 94(886), 687–709.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford University Press.
Char, D. S., Shah, N. H., & Magnus, D. (2018). Implementing machine learning in healthcare–Addressing ethical challenges. New England Journal of Medicine, 378, 981–983.
Coeckelbergh, M. (2010). Moral appearances: Emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235–241.
Coeckelbergh, M. (2019). Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics, forthcoming. https://doi.org/10.1007/s11948-019-00146-8.
D’Arms, J., & Jacobson, D. (2006). Anthropocentric constraints on human value. In R. Shafer-Landau (Ed.), Oxford studies in metaethics, vol. 1 (pp. 99–126). Oxford University Press.
Danaher, J. (2016a). The threat of algocracy: Reality, resistance and accommodation. Philosophy and Technology, 29(3), 245–268.
Danaher, J. (2016b). Robots, law and the retribution gap. Ethics and Information Technology, 18(4), 299–309.
Danaher, J. (2018). Toward an ethics of AI assistants: An initial framework. Philosophy and Technology, 31(4), 629–653.
Danaher, J. (2019). Automation and utopia: Human flourishing in a world without work. Harvard University Press.
de Jong, R. (2020). The retribution-gap and responsibility-loci related to robots and automated technologies: A reply to Nyholm. Science and Engineering Ethics, 26(2), 727–735.
Dignam, A. (2020). Artificial intelligence, tech corporate governance and the public interest regulatory response. Cambridge Journal of Regions, Economy and Society, 13(1), 37–54.
Doris, J. (2015). Talking to our selves: Reflection, ignorance, and agency. Oxford University Press.
Fischer, J. M., & Ravizza, M. S. J. (1998). Responsibility and control: A theory of moral responsibility. Cambridge University Press.
Frank, L., & Nyholm, S. (2017). Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25(3), 305–323.
Friedman, B. (1997). Human values and the design of computer technology. Cambridge University Press.
Hanson, F. A. (2009). Beyond the skin bag: On the moral responsibility of extended agencies. Ethics and Information Technology, 11(1), 91–99.
Heersmink, R. (2017). Extended mind and cognitive enhancement: Moral aspects of cognitive artifacts. Phenomenology and the Cognitive Sciences, 16, 17–32.
Hellström, T. (2013). On the moral responsibility of military robots. Ethics and Information Technology, 15(2), 99–107.
Jacobson, D. (2013). Regret, agency, and error. In D. Shoemaker (Ed.), Oxford studies in agency and responsibility, vol. 1 (pp. 95–125). Oxford University Press.
Knight, W. (2016). Amazon working on making Alexa recognize your emotions. MIT Technology Review.
Köhler, S., Roughley, N., & Sauer, H. (2017). Technologically blurred accountability? In C. Ulbert et al. (Eds.), Moral agency and the politics of responsibility. Routledge.
Kraaijeveld, S. (2019). Debunking (the) retribution (gap). Science and Engineering Ethics. https://doi.org/10.1007/s11948-019-00148-6.
Marino, D., & Tamburrini, G. (2006). Learning robots and human responsibility. International Review of Information Ethics, 6(12), 46–51.
Mason, E. (2019). Between strict liability and blameworthy quality of will: Taking responsibility. In D. Shoemaker (Ed.), Oxford studies in agency and responsibility, vol. 6 (pp. 241–264). Oxford University Press.
Matthias, A. (2004). The responsibility gap: Ascribing responsibility for actions of learning automata. Ethics and Information Technology, 6(3), 175–183.
Morley, S. (manuscript). Morally significant technology: A case against corporate self-regulation.
Nyholm, S. (2018). Attributing agency to automated systems: Reflections on human-robot collaborations and responsibility-loci. Science and Engineering Ethics, 24(4), 1201–1219.
Nyholm, S. (2020). Humans and robots: Ethics, agency, and anthropomorphism. Rowman & Littlefield.
Oakley, J. (1992). Morality and the emotions. Routledge.
Oshana, M. (2002). The misguided marriage of responsibility and autonomy. The Journal of Ethics, 6(3), 261–280.
Purves, D., Jenkins, R., & Strawser, B. J. (2015). Autonomous machines, moral judgment, and acting for the right reasons. Ethical Theory and Moral Practice, 18(4), 851–872.
Rahwan, I. (2018). Society-in-the-loop: Programming the algorithmic social contract. Ethics and Information Technology, 20(1), 5–14.
Ren, F. (2009). Affective information processing and recognizing human emotion. Electronic Notes in Theoretical Computer Science, 225, 39–50.
Sharkey, N. (2010). Saying “no!” to lethal autonomous targeting. Journal of Military Ethics, 9(4), 369–383.
Shoemaker, D. (2011). Attributability, answerability, and accountability: Toward a wider theory of moral responsibility. Ethics, 121(3), 602–632.
Stout, N. (manuscript). Blame de re and de dicto.
Talbot, B., Jenkins, R., & Purves, D. (2017). When robots should do the wrong thing. In P. Lin, K. Abney, & R. Jenkins (Eds.), Robot ethics 2.0: From autonomous cars to artificial intelligence (pp. 258–273). Oxford University Press.
Tigard, D. (2019b). Taking the blame: Appropriate responses to medical error. Journal of Medical Ethics, 45(2), 101–105.
Tigard, D. (2020). Artificial moral responsibility: How we can and cannot hold machines responsible. Cambridge Quarterly of Healthcare Ethics, forthcoming.
Tigard, D., Conradie, N. H., & Nagel, S. K. (2020). Socially responsive technologies: Toward a co-developmental path. AI & Society, forthcoming. https://doi.org/10.1007/s00146-020-00982-4.
Vallor, S. (2015). Moral deskilling and upskilling in a new machine age: Reflections on the ambiguous future of character. Philosophy and Technology, 28(1), 107–124.
van de Poel, I., Fahlquist, J. N., Doorn, N., Zwart, S., & Royakkers, L. (2012). The problem of many hands: Climate change as an example. Science and Engineering Ethics, 18(1), 49–67.
Vargas, M. (2017). Implicit bias, responsibility, and moral ecology. In D. Shoemaker (Ed.), Oxford studies in agency and responsibility, vol. 4 (pp. 219–247). Oxford University Press.
Verbeek, P. P. (2008). Obstetric ultrasound and the technological mediation of morality. Human Studies, 31(1), 11–26.
Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 2019(2), 494–620.
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford University Press.
Wallach, W., & Allen, C. (2011). Moral machines: Contradiction in terms or abdication of human responsibility? In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 55–68). MIT Press.
Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136.