A taxonomy of human–machine collaboration: capturing automation and technical autonomy

AI & SOCIETY - Volume 36 - Pages 239-250 - 2020
Monika Simmler1, Ruth Frischknecht2
1Law School, University of St. Gallen, St. Gallen, Switzerland
2Institute for Systemic Management and Public Governance, University of St. Gallen, St. Gallen, Switzerland

Abstract

Due to continuous technological advances, socio-technical collaboration between humans and machines is becoming increasingly widespread. This gives rise to challenges of governance and accountability, as well as issues in a variety of other fields. It is therefore important to familiarize decision-makers and researchers with the core of human–machine collaboration. This study introduces a taxonomy that allows the nature of human–machine interaction to be determined. A literature review revealed that automation and technical autonomy are the key parameters for describing and understanding this interaction. Both aspects require careful assessment, as their increase can have far-reaching consequences. The two concepts therefore constitute the axes of the taxonomy. Five levels of automation and five levels of technical autonomy are introduced, based on the assumption that both automation and autonomy are gradual in nature. The levels of automation were developed from existing approaches; the levels of autonomy were carefully derived from the literature review. The use of the taxonomy is explained, as are its limitations and directions for further research.
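
The abstract describes the taxonomy as a two-dimensional grid: five graduated levels of automation on one axis and five graduated levels of technical autonomy on the other, with any concrete human–machine arrangement falling into one cell. The following Python sketch is a purely illustrative model of that structure, not the authors' implementation; the numeric level labels and the example system are placeholders, since the abstract does not enumerate the concrete level definitions.

```python
# Minimal sketch (assumption: illustrative only, not the authors' framework code).
# Models the taxonomy's two axes as five-step ordinal scales and a classified
# human-machine collaboration as one cell of the resulting 5 x 5 grid.
from dataclasses import dataclass
from enum import IntEnum


class AutomationLevel(IntEnum):
    """Degree to which task execution is delegated to the machine (1 = lowest)."""
    LEVEL_1 = 1
    LEVEL_2 = 2
    LEVEL_3 = 3
    LEVEL_4 = 4
    LEVEL_5 = 5


class AutonomyLevel(IntEnum):
    """Degree of technical autonomy of the machine's decision logic (1 = lowest)."""
    LEVEL_1 = 1
    LEVEL_2 = 2
    LEVEL_3 = 3
    LEVEL_4 = 4
    LEVEL_5 = 5


@dataclass(frozen=True)
class HumanMachineCollaboration:
    """A concrete human-machine arrangement, positioned on both axes."""
    description: str
    automation: AutomationLevel
    autonomy: AutonomyLevel

    def cell(self) -> tuple:
        # Cell of the taxonomy grid: (automation level, technical autonomy level).
        return (int(self.automation), int(self.autonomy))


if __name__ == "__main__":
    # Hypothetical example: a system that takes over part of task execution
    # (mid-level automation) but follows fixed, hand-coded rules (low autonomy).
    assist = HumanMachineCollaboration(
        description="rule-based parking assistant",
        automation=AutomationLevel.LEVEL_3,
        autonomy=AutonomyLevel.LEVEL_2,
    )
    print(assist.cell())  # -> (3, 2)
```

The design choice of two independent ordinal scales mirrors the abstract's point that automation and technical autonomy are distinct, each gradual, and must be assessed separately.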

Keywords

#human–machine collaboration #taxonomy #automation #technical autonomy #governance
