Hard choices in artificial intelligence

Artificial Intelligence - Volume 300 - Page 103555 - 2021
Roel Dobbe1, Thomas Krendl Gilbert2, Yonatan Mintz3
1Faculty of Technology, Policy, and Management, Delft University of Technology, the Netherlands
2Center for Human-Compatible AI, UC Berkeley, United States of America
3University of Wisconsin-Madison, United States of America

References

Hawkins
Henley
Hill
Harmon
Hao
Hao
Ackerman, 2000, The intellectual challenge of CSCW: the gap between social requirements and technical feasibility, Hum.-Comput. Interact., 15, 179, 10.1207/S15327051HCI1523_5
Schiff, 2021, AI ethics in the public, private, and NGO sectors: a review of a global document collection, IEEE Trans. Technol. Soc., 2, 31, 10.1109/TTS.2021.3052127
Andersen, 2018
2020
Mittelstadt, 2019, Principles alone cannot guarantee ethical AI, Nat. Mach. Intell., 1, 501, 10.1038/s42256-019-0114-4
2021
2021
Gebru
Mitchell, 2019, Model cards for model reporting, 220
Raji, 2019, Actionable auditing: investigating the impact of publicly naming biased performance results of commercial AI products, 429
Green, 2020, Algorithmic realism: expanding the boundaries of algorithmic thought, 19
Greenbaum, 1992
Agre, 1997
Dreyfus, 2014
Winner, 1980, 121
McCarthy, 2006, A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955, AI Mag., 27, 12
Milli
Hadfield-Menell, 2017, Inverse reward design, 6765
S. Russell, Provably beneficial artificial intelligence, Exponential Life, the Next Step
Leveson, 2012
Shilton, 2018, Values and ethics in human-computer interaction, Found. Trends Hum.-Comput. Interact., 12, 107
Halloran, 2009, The value of values: resourcing co-design of ubiquitous computing, CoDesign, 5, 245, 10.1080/15710880902920960
Wiener, 1988
Von Foerster, 2007
Pask, 1976
Dewey, 1896, The reflex arc concept in psychology, Psychol. Rev., 3, 357, 10.1037/h0070405
Wallach, 2019, Toward the agile and comprehensive international governance of AI and robotics [point of view], Proc. IEEE, 107, 505, 10.1109/JPROC.2019.2899422
Cihon, 2019, Standards for AI governance: international standards to enable global coordination in AI research & development
Erdélyi, 2018, Regulating artificial intelligence: proposal for a global solution, 95
Klonick, 2019, The Facebook oversight board: creating an independent institution to adjudicate online free expression, Yale Law J., 129, 2418
Voigt, 2017
Smuha, 2021, From a ‘race to AI’ to a ‘race to AI regulation’: regulatory competition for artificial intelligence, Law Innov. Technol., 13, 57, 10.1080/17579961.2021.1898300
Zwetsloot, 2018, Beyond the AI arms race: America, China, and the dangers of zero-sum thinking, Foreign Aff., 16
Yeung, 2017, ‘Hypernudge’: big data as a mode of regulation by design, Inf. Commun. Soc., 20, 118, 10.1080/1369118X.2016.1186713
Seaver, 2019, Knowing algorithms, 412
Gillespie, 2014, The relevance of algorithms, vol. 167, 167
Chang, 1997
Chang, 2002, The possibility of parity, Ethics, 112, 659, 10.1086/339673
Chang, 2017, Hard choices, J. Am. Philos. Assoc., 3, 1, 10.1017/apa.2017.7
de Haan, 2015
van der Voort, 2019, Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision making?, Gov. Inf. Q., 36, 27, 10.1016/j.giq.2018.10.011
Anderson, 2006, The epistemology of democracy, Episteme, 3, 8, 10.1353/epi.0.0000
Glanville, 2004, The purpose of second-order cybernetics, Kybernetes, 33, 1379, 10.1108/03684920410556016
Agre, 1997, Toward a critical technical practice: lessons learned in trying to reform AI
Williamson, 2002
Schiffer, 1999, The epistemic theory of vagueness, Philos. Perspect., 13, 481
Gómez-Torrente, 1997, Two problems for an epistemicist view of vagueness, Philos. Issues, 8, 237, 10.2307/1523008
MacAskill, 2019, Practical ethics given moral uncertainty, Utilitas, 31, 231, 10.1017/S0953820819000013
N. Soares, B. Fallenstein, Aligning superintelligence with human interests: a technical research agenda, Machine Intelligence Research Institute (MIRI) technical report 8
N. Soares, The value learning problem, Machine Intelligence Research Institute, Berkeley
MacAskill, 2016, Normative uncertainty as a voting problem, Mind, 125, 967, 10.1093/mind/fzv169
Von Neumann, 2007
Hildebrandt, 2019, Privacy as protection of the incomputable self: from agnostic to agonistic machine learning, Theor. Inq. Law, 20, 83, 10.1515/til-2019-0004
Hadfield-Menell, 2019, Incomplete contracting and AI alignment, 417
Irving, 2019, AI safety needs social scientists, Distill, 4, e14, 10.23915/distill.00014
Hadfield-Menell, 2016, Cooperative inverse reinforcement learning, 3909
Russell, 2019
E. Barnes, J.R.G. Williams, A theory of metaphysical indeterminacy
MacAskill, 2013, The infectiousness of nihilism, Ethics, 123, 508, 10.1086/669564
O. Keyes, Counting the countless: why data science is a profound threat for queer people, Real Life 2
Mouffe, 1999, Deliberative democracy or agonistic pluralism?, Soc. Res., 745
Crawford, 2016, Can an algorithm be agonistic? Ten scenes from life in calculated publics, Sci. Technol. Hum. Values, 41, 77, 10.1177/0162243915589635
Hoffmann, 2019, Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse, Inf. Commun. Soc., 22, 900, 10.1080/1369118X.2019.1573912
Eubanks, 2018
James, 1896
J. Dewey, The public and its problems
R. Benjamin, Race after technology: abolitionist tools for the New Jim Code, Social Forces
Krais, 1993, Gender and symbolic violence: female oppression in the light of Pierre Bourdieu's theory of social practice, 156
Heidegger, 1962
Stark, 2019, Facial recognition is the plutonium of AI, XRDS: Crossroads, ACM Mag. Stud., 25, 50
Benjamin, 2020, Race after technology: abolitionist tools for the New Jim Code, Soc. Forces, 98, 1, 10.1093/sf/soz162
Garcia, 2020, No: critical refusal as feminist data practice, 199
Wittgenstein, 1953
L. Lessig, Code: And Other Laws of Cyberspace, ReadHowYouWant.com, 2009
Gerla, 2016, Comments on some theories of fuzzy computation, Int. J. Gen. Syst., 45, 372, 10.1080/03081079.2015.1076403
Narayanan
IEEE Algorithmic Bias Working Group, Proceedings of the IEEE algorithmic bias working group
S. Rea, A survey of fair and responsible machine learning and artificial intelligence: implications of consumer financial services, available at SSRN 3527034
Corbett-Davies
Binns, 2018, Fairness in machine learning: lessons from political philosophy, 149
Trist, 1981
Eckersley
Agre, 1994, Surveillance and capture: two models of privacy, Inf. Soc., 10, 101, 10.1080/01972243.1994.9960162
Amrute, 2019, Of techno-ethics and techno-affects, Feminist Rev., 123, 56, 10.1177/0141778919879744
Friedman, 1996, Bias in computer systems, ACM Trans. Inf. Syst., 14, 330, 10.1145/230538.230561
Dobbe
Unger, 1983, The critical legal studies movement, Harvard Law Rev., 561, 10.2307/1341032
Irani, 2010, Postcolonial computing: a lens on design and development, 1311
Achiam
Choudhury, 2019, On the utility of model learning in HRI, 317
Yu, 2019, Meta-inverse reinforcement learning with probabilistic context variables, 11772
Dreyfus, 2011
Baumer, 2011, When the implication is not to design (technology), 2271
Guo
Åström, 2010
Parasuraman, 1997, Humans and automation: use, misuse, disuse, abuse, Hum. Factors, 39, 230, 10.1518/001872097778543886
Barocas, 2016, Big data's disparate impact, Calif. Law Rev., 104, 671
West, 2019, 1
Fisac, 2018, A general safety framework for learning-based control in uncertain robotic systems, IEEE Trans. Autom. Control, 64, 2737, 10.1109/TAC.2018.2876389
Flew, 2009, The citizen's voice: Albert Hirschman's exit, voice and loyalty and its contribution to media citizenship debates, Media Cult. Soc., 31, 977, 10.1177/0163443709344160
Hirschman, 1970
Crawford, 2019
Li, 2007
Kadir
Green, 2019, The principles and limits of algorithm-in-the-loop decision making, 50:1
von Krogh, 2018, Artificial intelligence in organizations: new opportunities for phenomenon-based theorizing, Acad. Manag. Discov., 4, 404, 10.5465/amd.2018.0084
Gasser, 2020, The role of professional norms in the governance of artificial intelligence, 141
Carlini
de Bruijn, 2009, System and actor perspectives on sociotechnical systems, IEEE Trans. Syst. Man Cybern., Part A, Syst. Hum., 39, 981, 10.1109/TSMCA.2009.2025452
Selbst, 2019
Benjamin, 2019
Bender, 2021, On the dangers of stochastic parrots: can language models be too big? 🦜, 610
Dobbe, 2019
Börzel, 1998, Organizing Babylon - on the different conceptions of policy networks, Public Adm., 76, 253, 10.1111/1467-9299.00100
Rittel, 1973, Dilemmas in a general theory of planning, Policy Sci., 4, 155, 10.1007/BF01405730
Irani, 2016, Stories we tell about labor: Turkopticon and the trouble with “design”, 4573
Haraway, 1988, Situated knowledges: the science question in feminism and the privilege of partial perspective, Fem. Stud., 14, 575, 10.2307/3178066
Harding, 1986
Wagner, 2020, Accountability by design in technology research, Comput. Law Secur. Rev., 37, 10.1016/j.clsr.2020.105398
Bødker, 2009
Bødker, 2018, Participatory design that matters - facing the big issues, ACM Trans. Comput.-Hum. Interact., 25, 4:1, 10.1145/3152421
Bannon, 2018, Reimagining participatory design, Interactions, 26, 26, 10.1145/3292015
Gurses, 2017
Kostova
Niebuhr, 1986