Organisational responses to the ethical issues of artificial intelligence

AI & SOCIETY - Volume 37 - Pages 23-37 - 2021
Bernd Carsten Stahl1, Josephina Antoniou2, Mark Ryan3, Kevin Macnish4, Tilimbe Jiya5
1Centre for Computing and Social Responsibility, De Montfort University, Leicester, UK
2School of Sciences, UCLan Cyprus, Larnaca, Cyprus
3Wageningen Economic Research, Wageningen University and Research, Wageningen, The Netherlands
4Department of Philosophy, University of Twente, Enschede, The Netherlands
5Business Systems and Operations, University of Northampton, Northampton, UK

Abstract

The ethics of artificial intelligence (AI) is a widely discussed topic. Numerous initiatives aim to develop principles and guidelines to ensure that the development, deployment and use of AI are ethically acceptable. What remains less clear is how organisations that use AI understand and address these ethical issues in practice. While there is an abundance of conceptual work on AI ethics, empirical insights are rare and often anecdotal. This paper fills that gap in our current understanding of how organisations deal with AI ethics by presenting empirical findings collected through a set of ten case studies and providing a cross-case analysis. The paper reviews the discussion of the ethical issues of AI as well as the mitigation strategies that have been proposed in the literature. Against this background, the cross-case analysis categorises the organisational responses we observed in practice. The discussion shows that organisations are highly aware of the AI ethics debate and engage actively with ethical issues. However, they draw on only a relatively small subset of the mitigation strategies proposed in the literature. These insights are important for organisations deploying or using AI and for the academic debate on AI ethics, but they may be most valuable for policymakers involved in the ongoing discussion of suitable policy developments to address the ethical issues raised by AI.

Keywords

#artificial intelligence #AI ethics #organisations #mitigation strategies #case studies
