Diffused responsibility: attributions of responsibility in the use of AI-driven clinical decision support systems
Abstract
Keywords
References
European Commission: Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe. COM(2018) 237 final, Brussels, Apr 2018. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52018DC0237&from=EN. Accessed 10 Mar 2020
McKinney, S.M., et al.: International evaluation of an AI system for breast cancer screening. Nature 577(7788), 89–94 (2020). https://doi.org/10.1038/s41586-019-1799-6
Ozer, M.E., Sarica, P.O., Arga, K.Y.: New machine learning applications to accelerate personalized medicine in breast cancer: rise of the support vector machines. OMICS J Integr Biol 24(5), 241–246 (2020). https://doi.org/10.1089/omi.2020.0001
Bica, I., Alaa, A.M., Lambert, C., van der Schaar, M.: From real-world patient data to individualized treatment effects using machine learning: current and future methods to address underlying challenges. Clin Pharmacol Ther (2020). https://doi.org/10.1002/cpt.1907
Yang, Y., Fasching, P.A., Tresp, V.: Predictive modeling of therapy decisions in metastatic breast cancer with recurrent neural network encoder and multinomial hierarchical regression decoder. IEEE Int Conf Healthcare Inform (ICHI) (2017). https://doi.org/10.1109/ICHI.2017.51
Hao, K.: AI is helping triage coronavirus patients. The tools may be here to stay. MIT Technology Review. https://www.technologyreview.com/2020/04/23/1000410/ai-triage-covid-19-patients-health-care/ (2020). Accessed 20 July 2020
Ting, D.S.W., Carin, L., Dzau, V., Wong, T.Y.: Digital technology and COVID-19. Nat Med 26(4), 459–461 (2020). https://doi.org/10.1038/s41591-020-0824-5
Vaishya, R., Javaid, M., Khan, I.H., Haleem, A.: Artificial intelligence (AI) applications for COVID-19 pandemic. Diabetes Metab Syndr 14(4), 337–339 (2020). https://doi.org/10.1016/j.dsx.2020.04.012
Roberts, M., et al.: Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nat Mach Intell 3(3), 199–217 (2021). https://doi.org/10.1038/s42256-021-00307-0
Wynants, L., et al.: Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal. BMJ 369, m1328 (2020). https://doi.org/10.1136/bmj.m1328
Bierhoff, H.-W., Rohmann, E.: Diffusion von Verantwortung. In: Heidbrink, L., Langbehn, C., Sombetzki, J. (eds.) Handbuch Verantwortung, pp. 1–21. Springer Fachmedien Wiesbaden, Wiesbaden (2016). https://doi.org/10.1007/978-3-658-06175-3_46-1
LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015). https://doi.org/10.1038/nature14539
Dargan, S., Kumar, M., Ayyagari, M.R., Kumar, G.: A survey of deep learning and its applications: a new paradigm to machine learning. Arch Computat Methods Eng 27(4), 1071–1092 (2020). https://doi.org/10.1007/s11831-019-09344-w
Schölkopf, B., et al.: Toward causal representation learning. Proc IEEE 109(5), 612–634 (2021). https://doi.org/10.1109/JPROC.2021.3058954
Richens, J.G., Lee, C.M., Johri, S.: Improving the accuracy of medical diagnosis with causal machine learning. Nat Commun 11(1), 3923 (2020). https://doi.org/10.1038/s41467-020-17419-7
Haenssle, H.A., et al.: Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Ann Oncol 29(8), 1836–1842 (2018). https://doi.org/10.1093/annonc/mdy166
Ting, D.S.W., et al.: Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. JAMA 318(22), 2211–2223 (2017). https://doi.org/10.1001/jama.2017.18152
Budd, J., et al.: Digital technologies in the public-health response to COVID-19. Nat Med 26(8), 1183–1192 (2020). https://doi.org/10.1038/s41591-020-1011-4
Jongsma, K.R., Bekker, M.N., Haitjema, S., Bredenoord, A.L.: How digital health affects the patient-physician relationship: an empirical-ethics study into the perspectives and experiences in obstetric care. Pregnancy Hypertension 25, 81–86 (2021). https://doi.org/10.1016/j.preghy.2021.05.017
Braun, M., Hummel, P., Beck, S., Dabrock, P.: Primer on an ethics of AI-based decision support systems in the clinic. J Med Ethics (2020). https://doi.org/10.1136/medethics-2019-105860
Braun, M., Bleher, H., Hummel, P.: A leap of faith: is there a formula for ‘trustworthy’ AI? Hastings Cent Rep (2021). https://doi.org/10.1002/hast.1207
Santoni de Sio, F., Mecacci, G.: Four responsibility gaps with artificial intelligence: why they matter and how to address them. Philos Technol (2021). https://doi.org/10.1007/s13347-021-00450-x
Matthias, A.: The responsibility gap: ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6, 175–183 (2004). https://doi.org/10.1007/s10676-004-3422-1
Gunkel, D.J.: Mind the gap: responsible robotics and the problem of responsibility. Ethics Inf Technol (2017). https://doi.org/10.1007/s10676-017-9428-2
Nyholm, S.: Attributing agency to automated systems: reflections on human-robot collaborations and responsibility-loci. Sci Eng Ethics 24(4), 1201–1219 (2018). https://doi.org/10.1007/s11948-017-9943-x
Santoni de Sio, F., van den Hoven, J.: Meaningful human control over autonomous systems: a philosophical account. Front Robot AI 5, 15 (2018). https://doi.org/10.3389/frobt.2018.00015
Burton, S., Habli, I., Lawton, T., McDermid, J., Morgan, P., Porter, Z.: Mind the gaps: assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Artif Intell 279, 103201 (2020). https://doi.org/10.1016/j.artint.2019.103201
Nyholm, S.: Humans and robots: ethics, agency, and anthropomorphism. Rowman and Littlefield, London (2020)
Köhler, S., Roughley, N., Sauer, H.: Technologically blurred accountability. In: Ulbert, C., Finkenbusch, P., Debiel, T. (eds.) Moral agency and the politics of responsibility. Routledge, London (2017)
Tigard, D.W.: There is no techno-responsibility gap. Philos Technol (2020). https://doi.org/10.1007/s13347-020-00414-7
Sætra, H.S.: Confounding complexity of machine action: a Hobbesian account of machine responsibility. Int J Technoethics (IJT) 12(1), 87–100 (2021). https://doi.org/10.4018/IJT.20210101.oa1
Horizon 2020 Commission expert group to advise on specific ethical issues raised by driverless mobility (E03659): Ethics of connected and automated vehicles: recommendations on road safety, privacy, fairness, explainability and responsibility. Publications Office of the European Union, Luxembourg (2020)
Latané, B., Darley, J.: The unresponsive bystander: why doesn’t he help? Appleton-Century-Crofts, New York (1970)
Nollkaemper, A.: The duality of shared responsibility. Contemp Politics 24(5), 524–544 (2018). https://doi.org/10.1080/13569775.2018.1452107
Thompson, D.F.: Moral responsibility of public officials: the problem of many hands. Am Polit Sci Rev 74(4), 905–916 (1980). https://doi.org/10.2307/1954312
Braun, M.: Vulnerable life: reflections on the relationship between theological and philosophical ethics. Am J Bioethics 20(12), 21–23 (2020). https://doi.org/10.1080/15265161.2020.1832615
Lévinas, E.: Otherwise than being or beyond essence. Translation of the 2nd edn. (1978) of Autrement qu’être by Alphonso Lingis. Martinus Nijhoff Philosophy Texts 3. Kluwer Academic Publishers, Dordrecht (1991)
Bonhoeffer, D.: Ethik. Gütersloher Verlagshaus, Gütersloh (2015)
Dewey, J.: The quest for certainty: a study of the relation of knowledge and action. George Allen and Unwin Ltd, London (1930)
Coeckelbergh, M.: Artificial intelligence, responsibility attribution, and a relational justification of explainability. Sci Eng Ethics (2019). https://doi.org/10.1007/s11948-019-00146-8
Frede, D.: Aristoteles. Nikomachische Ethik, vol. 6. De Gruyter, Boston (2020)
Coeckelbergh, M.: Responsibility and the moral phenomenology of using self-driving cars. Appl Artif Intell 30(8), 748–757 (2016). https://doi.org/10.1080/08839514.2016.1229759
Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat Mach Intell 1(9), 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2
Arrieta, A.B., et al.: Explainable Artificial Intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fusion 58, 82–115 (2020). https://doi.org/10.1016/j.inffus.2019.12.012
Morley, J., et al.: The debate on the ethics of AI in health care: a reconstruction and critical review. Artif Intell (2019). https://doi.org/10.13140/RG.2.2.27135.76960
Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv J Law technol 31, 841–887 (2018). https://doi.org/10.2139/ssrn.3063289
London, A.: Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent Rep 49, 15–21 (2019). https://doi.org/10.1002/hast.973
Behdadi, D., Munthe, C.: A normative approach to artificial moral agency. Minds Mach (2020). https://doi.org/10.1007/s11023-020-09525-8
Bryson, J.J.: Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics Inf Technol 20(1), 15–26 (2018). https://doi.org/10.1007/s10676-018-9448-6
Floridi, L., Sanders, J.W.: On the morality of artificial agents. Mind Mach 14(3), 349–379 (2004). https://doi.org/10.1023/B:MIND.0000035461.63578.9d
Coeckelbergh, M.: Moral appearances: emotions, robots, and human morality. Ethics Inf Technol 12(3), 235–241 (2010). https://doi.org/10.1007/s10676-010-9221-y
Himma, K.E.: Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent? Ethics Inf Technol 11(1), 19–29 (2009). https://doi.org/10.1007/s10676-008-9167-5
Torrance, S.: Ethics and consciousness in artificial agents. AI Soc 22(4), 495–521 (2008). https://doi.org/10.1007/s00146-007-0091-8
Gerdes, A.: The issue of moral consideration in robot ethics. ACM SIGCAS Comput Soc 45, 274–280 (2015). https://doi.org/10.1145/2874239.2874278
Gunkel, D.J.: The other question: can and should robots have rights? Ethics Inf Technol 20(2), 87–99 (2018). https://doi.org/10.1007/s10676-017-9442-4
Gunkel, D.J.: Perspectives on ethics of AI: philosophy. In: Dubber, M.D., Pasquale, F., Das, S. (eds.) The Oxford handbook of ethics of AI, pp. 539–553. Oxford University Press, New York (2020)
Coeckelbergh, M.: Artificial intelligence, responsibility attribution, and a relational justification of explainability. Sci Eng Ethics 26(4), 2051–2068 (2020). https://doi.org/10.1007/s11948-019-00146-8
Lohsse, S., Schulze, R., Staudenmayer, D. (eds.): Titelei/Inhaltsverzeichnis [front matter/table of contents]. In: Liability for artificial intelligence and the internet of things: Münster colloquia on EU law and the digital economy IV, 1st edn., pp. 1–8. Nomos Verlagsgesellschaft mbH & Co. KG, Baden-Baden (2019). https://doi.org/10.5771/9783845294797-1
Keßler, O.: Intelligente Roboter—neue Technologien im Einsatz. MultiMedia und Recht 18(9), 589–594 (2017)
Schaub, R.: Interaktion von Mensch und Maschine – Haftungs- und immaterialgüterrechtliche Fragen bei eigenständigen Weiterentwicklungen autonomer Systeme. Juristenzeitung 72(7), 342–349 (2017)
Schaub, R.: Verantwortlichkeit für Algorithmen im Internet. Innovations-und Technikrecht (InTeR) 1, 2–7 (2019)
Borges, G.: New liability concepts: the potential of insurance and compensation funds. In: Lohsse, S., Schulze, R., Staudenmayer, D. (eds.) Liability for artificial intelligence and the internet of things: Münster colloquia on EU law and the digital economy IV, pp. 145–164. Nomos Verlagsgesellschaft mbH & Co. KG, Baden-Baden (2019)
European Commission: Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics. Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee, COM/2020/64 final, Brussels. https://eur-lex.europa.eu/legal-content/en/TXT/?qid=1593079180383&uri=CELEX:52020DC0064 (2020). Accessed 23 Sept 2020
Expert Group on Liability and New Technologies—New Technologies Formation: Liability for Artificial Intelligence and other emerging digital technologies. European Commission. https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608 (2019). Accessed 23 Sept 2020
European Commission and Directorate-General for Communications Networks, Content and Technology: On artificial intelligence—a European approach to excellence and trust. European Commission, Brussels, White Paper COM/2020/65 final. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf (2020). Accessed 10 Mar 2020
Yu, K.-H., Beam, A.L., Kohane, I.S.: Artificial intelligence in healthcare. Nat Biomed Eng 2(10), 719–731 (2018). https://doi.org/10.1038/s41551-018-0305-z