First- and Second-Level Bias in Automated Decision-making

Philosophy & Technology, Volume 35, Issue 2, pp. 1–20, 2022
Franke, Ulrik [1][2]
[1] RISE Research Institutes of Sweden, Kista, Sweden
[2] KTH Royal Institute of Technology, Stockholm, Sweden

Abstract

Recent advances in artificial intelligence offer many beneficial prospects. However, concerns have been raised about the opacity of decisions made by these systems, some of which have turned out to be biased in various ways. This article makes a contribution to a growing body of literature on how to make systems for automated decision-making more transparent, explainable, and fair by drawing attention to and further elaborating a distinction first made by Nozick (1993) between first-level bias in the application of standards and second-level bias in the choice of standards, as well as a second distinction between discrimination and arbitrariness. Applying the typology developed, a number of illuminating observations are made. First, it is observed that some reported bias in automated decision-making is first-level arbitrariness, which can be alleviated by explainability techniques. However, such techniques have only a limited potential to alleviate first-level discrimination. Second, it is argued that second-level arbitrariness is probably quite common in automated decision-making. In contrast to first-level arbitrariness, however, second-level arbitrariness is not straightforward to detect automatically. Third, the prospects for alleviating arbitrariness are discussed. It is argued that detecting and alleviating second-level arbitrariness is a profound problem because there are many contrasting and sometimes conflicting standards from which to choose, and even when we make intentional efforts to choose standards for good reasons, some second-level arbitrariness remains.

References

Altman, A. (2020). Discrimination. In E.N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2020 edn). Metaphysics Research Lab, Stanford University. https://plato.stanford.edu/archives/win2020/entries/discrimination/

Arrow, K.J. (1951). Social choice and individual values. Wiley. Cowles Commission Monograph No. 12.

Berenguer, A., Goncalves, J., Hosio, S., Ferreira, D., Anagnostopoulos, T., & Kostakos, V. (2016). Are smartphones ubiquitous? An in-depth survey of smartphone adoption by seniors. IEEE Consumer Electronics Magazine, 6(1), 104–110. https://doi.org/10.1109/MCE.2016.2614524

Bickel, P.J., Hammel, E.A., & O'Connell, J.W. (1975). Sex bias in graduate admissions: Data from Berkeley. Science, 187(4175), 398–404. https://doi.org/10.1126/science.187.4175.398

Binns, R. (2018a). Algorithmic accountability and public reason. Philosophy & Technology, 31(4), 543–556. https://doi.org/10.1007/s13347-017-0263-5

Binns, R. (2018b). Fairness in machine learning: Lessons from political philosophy. In S.A. Friedler & C. Wilson (Eds.), Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research, Vol. 81 (pp. 149–159). PMLR, New York, NY, USA.

Borges, J.L. (2007 [1942]). Funes the Memorious [Funes el memorioso]. In D.A. Yates & J.E. Irby (Eds.), Labyrinths (pp. 59–66). New Directions. Translated by James E. Irby.

Carcary, M., Maccani, G., Doherty, E., & Conway, G. (2018). Exploring the determinants of IoT adoption: Findings from a systematic literature review. In International Conference on Business Informatics Research (pp. 113–125). Springer. https://doi.org/10.1007/978-3-319-99951-7_8

Castelvecchi, D. (2016). Can we open the black box of AI? Nature News, 538(7623), 20. https://doi.org/10.1038/538020a

Cavazos, J.G., Phillips, P.J., Castillo, C.D., & O'Toole, A.J. (2020). Accuracy comparison across face recognition algorithms: Where are we on measuring race bias? IEEE Transactions on Biometrics, Behavior, and Identity Science. https://doi.org/10.1109/TBIOM.2020.3027269

Chouldechova, A. (2017). Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data, 5(2), 153–163. https://doi.org/10.1089/big.2016.0047

Chouldechova, A., & Roth, A. (2020). A snapshot of the frontiers of fairness in machine learning. Communications of the ACM, 63(5), 82–89. https://doi.org/10.1145/3376898

Corbett-Davies, S., Pierson, E., Feller, A., Goel, S., & Huq, A. (2017). Algorithmic decision making and the cost of fairness. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 797–806). https://doi.org/10.1145/3097983.3098095

Cross, T. (2020). Artificial intelligence and its limits: Steeper than expected. The Economist Technology Quarterly, June 13.

Danziger, S., Levav, J., & Avnaim-Pesso, L. (2011). Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences, 108(17), 6889–6892. https://doi.org/10.1073/pnas.1018033108

Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (pp. 248–255). IEEE. https://doi.org/10.1109/CVPR.2009.5206848

Dexe, J., Franke, U., Avatare Nöu, A., & Rad, A. (2020). Towards increased transparency with value sensitive design. In Artificial Intelligence in HCI. HCI International 2020 (pp. 3–15). Springer. https://doi.org/10.1007/978-3-030-50334-5_1

Dressel, J., & Farid, H. (2018). The accuracy, fairness, and limits of predicting recidivism. Science Advances, 4(1), eaao5580. https://doi.org/10.1126/sciadv.aao5580

Du, M., Liu, N., & Hu, X. (2019). Techniques for interpretable machine learning. Communications of the ACM, 63(1), 68–77. https://doi.org/10.1145/3359786

Dworkin, R. (1978). Taking rights seriously. Harvard University Press. Edition including the appendix "A Reply to Critics".

The Economist. (2021). Design bias: Working in the dark. The Economist, 439(9240), 10.

Fast, N.J., Sivanathan, N., Mayer, N.D., & Galinsky, A.D. (2012). Power and overconfident decision-making. Organizational Behavior and Human Decision Processes, 117(2), 249–260. https://doi.org/10.1016/j.obhdp.2011.11.009

Fleischmann, K.R., & Wallace, W.A. (2005). A covenant with transparency: Opening the black box of models. Communications of the ACM, 48(5), 93–97. https://doi.org/10.1145/1060710.1060715

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., & Rossi, F. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Foushee, H.C. (1984). Dyads and triads at 35,000 feet: Factors affecting group process and aircrew performance. American Psychologist, 39(8), 885. https://doi.org/10.1037/0003-066X.39.8.885

Franke, U. (2021). Rawls's original position and algorithmic fairness. Philosophy & Technology, 34(4), 1803–1817. https://doi.org/10.1007/s13347-021-00488-x

Friedman, B., Kahn, P.H., Borning, A., & Huldtgren, A. (2013). Value sensitive design and information systems. In N. Doorn, D. Schuurbiers, I. van de Poel, & M.E. Gorman (Eds.), Early engagement and new technologies: Opening up the laboratory (pp. 55–95). Springer, Dordrecht. https://doi.org/10.1007/978-94-007-7844-3_4

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), 1–42. https://doi.org/10.1145/3236009

Hankerson, D., Marshall, A.R., Booker, J., El Mimouni, H., Walker, I., & Rode, J.A. (2016). Does technology have race? In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (pp. 473–486). https://doi.org/10.1145/2851581.2892578

Heidari, H., Ferrari, C., Gummadi, K.P., & Krause, A. (2018). Fairness behind a veil of ignorance: A welfare analysis for automated decision making. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (pp. 1273–1283).

Holstein, K., Wortman Vaughan, J., Daumé, H., III, Dudik, M., & Wallach, H. (2019). Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–16). https://doi.org/10.1145/3290605.3300830

Hutson, M. (2020). Eye-catching advances in some AI fields are not real. Science. https://doi.org/10.1126/science.abd0313

Ji, Y., Zhang, X., Ji, S., Luo, X., & Wang, T. (2018). Model-reuse attacks on deep learning systems. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security (pp. 349–363). https://doi.org/10.1145/3243734.3243757

Jordan, M.I., & Mitchell, T.M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255–260. https://doi.org/10.1126/science.aaa8415

Kant, I. (1948 [1785]). The Moral Law: Groundwork for the Metaphysics of Morals. Routledge. Translated and analyzed by H.J. Paton. Page numbers, as is customary, refer to the pagination of the standard Royal Prussian Academy edition.

Kleinberg, J., Mullainathan, S., & Raghavan, M. (2017). Inherent trade-offs in the fair determination of risk scores. In 8th Innovations in Theoretical Computer Science Conference (ITCS 2017), Vol. 67, p. 43. Schloss Dagstuhl–Leibniz-Zentrum für Informatik. https://doi.org/10.4230/LIPIcs.ITCS.2017.43

Koenecke, A., Nam, A., Lake, E., Nudell, J., Quartey, M., Mengesha, Z., Toups, C., Rickford, J.R., Jurafsky, D., & Goel, S. (2020). Racial disparities in automated speech recognition. Proceedings of the National Academy of Sciences, 117(14), 7684–7689. https://doi.org/10.1073/pnas.1915768117

Kuhlman, C., Jackson, L., & Chunara, R. (2020). No computation without representation: Avoiding data and algorithm biases through diversity. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (p. 3593). https://doi.org/10.1145/3394486.3411074

de Laat, P.B. (2018). Algorithmic decision-making based on machine learning from Big data: Can transparency restore accountability? Philosophy & Technology, 31(4), 525–541. https://doi.org/10.1007/s13347-017-0293-z

Livermore, D.A. (2016). Driven by difference: How great companies fuel innovation through diversity (1st edn). AMACOM.

Mackie, J. (1977). Ethics: Inventing right and wrong. Penguin.

Nagel, T. (1986). The view from nowhere. Oxford University Press.

Narveson, J. (2002). Respecting persons in theory and practice: Essays on moral and political philosophy. Rowman & Littlefield.

Nature. (2016). More accountability for big-data algorithms. Nature, 537(7621), 449. https://doi.org/10.1038/537449a

Nickerson, R.S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220. https://doi.org/10.1037/1089-2680.2.2.175

Nozick, R. (1974). Anarchy, state, and utopia. Basic Books.

Nozick, R. (1989). The examined life: Philosophical meditations. Simon and Schuster.

Nozick, R. (1993). The nature of rationality. Princeton University Press.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.

Rawls, J. (1999). A theory of justice (revised edn). Oxford University Press.

Rubin, E. (2018). Ireneo Funes: Superman or failure? A Husserlian analysis. In A.J. García-Osuna (Ed.), Borges, Language and Reality (pp. 51–61). Palgrave Macmillan. https://doi.org/10.1007/978-3-319-95912-2_4

Rycroft, R.W., & Kash, D.E. (2002). Path dependence in the innovation of complex technologies. Technology Analysis & Strategic Management, 14(1), 21–35. https://doi.org/10.1080/09537320220125865

Sjoding, M.W., Dickson, R.P., Iwashyna, T.J., Gay, S.E., & Valley, T.S. (2020). Racial bias in pulse oximetry measurement. New England Journal of Medicine, 383(25), 2477–2478. https://doi.org/10.1056/NEJMc2029240

Sturm, B.L. (2013). Classification accuracy is not enough. Journal of Intelligent Information Systems, 41(3), 371–406. https://doi.org/10.1007/s10844-013-0250-y

Timmons, S., & Byrne, R.M. (2019). Moral fatigue: The effects of cognitive fatigue on moral reasoning. Quarterly Journal of Experimental Psychology, 72(4), 943–954. https://doi.org/10.1177/1747021818772045

Wong, P.H. (2019). Democratizing algorithmic fairness. Philosophy & Technology, 33(2), 225–244. https://doi.org/10.1007/s13347-019-00355-w

World Bank. (2021). World development report 2021: Data for better lives. The World Bank. https://doi.org/10.1596/978-1-4648-1600-0

Yang, K., Qinami, K., Fei-Fei, L., Deng, J., & Russakovsky, O. (2020). Towards fairer datasets: Filtering and balancing the distribution of the people subtree in the ImageNet hierarchy. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 547–558). https://doi.org/10.1145/3351095.3375709

Zerilli, J., Knott, A., Maclaurin, J., & Gavaghan, C. (2019). Transparency in algorithmic and human decision-making: Is there a double standard? Philosophy & Technology, 32(4), 661–683. https://doi.org/10.1007/s13347-018-0330-6