Varieties of transparency: exploring agency within AI systems

AI & SOCIETY - Volume 38 - Pages 1321-1331 - 2022
Gloria Andrada1, Robert W. Clowes2, Paul R. Smart3
1Instituto de Filosofia da Nova, Faculdade de Ciências Sociais e Humanas, Universidade Nova de Lisboa, Lisbon, Portugal
2Lisbon Mind and Reasoning Group, Instituto de Filosofia da Nova, Faculdade de Ciências Sociais e Humanas, Universidade Nova de Lisboa, Lisbon, Portugal
3Electronics and Computer Science, University of Southampton, Southampton, UK

Abstract

AI systems play an increasingly important role in shaping and regulating the lives of millions of human beings across the world. Calls for greater transparency from such systems have been widespread. However, there is considerable ambiguity concerning what "transparency" actually means, and consequently what greater transparency might entail. While some debates construe transparency as seeing through an artefact or device, widespread calls for transparency imply seeing into different aspects of AI systems. These two notions stand in apparent tension with each other, and they figure in two lively but largely disconnected debates. In this paper, we aim to analyse further what these calls for transparency entail and, in so doing, to clarify the sorts of transparency that we should want from AI systems. We do so by offering a taxonomy that classifies different notions of transparency. After a careful exploration of these varieties of transparency, we show how the taxonomy can help us navigate various domains of human–technology interaction and discuss more precisely the relationship between technological transparency and human agency. We conclude by arguing that all of these notions of transparency should be taken into account when designing more ethically adequate AI systems.
