A sociotechnical perspective for the future of AI: narratives, inequalities, and human control

Ethics and Information Technology - Volume 24, Issue 1 - 2022
L. Sartori1, Andreas Theodorou2
1Department of Political and Social Sciences, University of Bologna, Via Bersaglieri 6, 40125, Bologna, Italy
2Department of Computer Science, Umeå University, 90187, Umeå, Sweden

Abstract

Different people have different perceptions about artificial intelligence (AI). It is extremely important to bring together all the alternative frames of thinking—from the various communities of developers, researchers, business leaders, policymakers, and citizens—to properly start acknowledging AI. This article highlights the ‘fruitful collaboration’ that sociology and AI could develop in both social and technical terms. We discuss how biases and unfairness are among the major challenges to be addressed in such a sociotechnical perspective. First, as intelligent machines reveal their nature of ‘magnifying glasses’ in the automation of existing inequalities, we show how the AI technical community is calling for transparency and explainability, accountability and contestability. Not to be considered as panaceas, they all contribute to ensuring human control in novel practices that include requirements, design, and development methodologies for a fairer AI. Second, we elaborate on the mounting attention for technological narratives as technology is recognized as a social practice within a specific institutional context. Not only do narratives reflect organizing visions for society, but they also are a tangible sign of the traditional lines of social, economic, and political inequalities. We conclude with a call for a diverse approach within the AI community and a richer knowledge about narratives, as they help in better addressing future technical developments, public debate, and policy. AI practice is interdisciplinary by nature and will benefit from a sociotechnical perspective.

Keywords

