Epistemic Insights as Design Principles for a Teaching-Learning Module on Artificial Intelligence
Science & Education - 2024
Abstract
In a historical moment in which Artificial Intelligence and machine learning have come within everyone’s reach, science education needs to find new ways to foster “AI literacy.” The AI revolution is not merely a matter of introducing extremely powerful tools; it has been driving a radical change in how we conceive and produce knowledge. What is needed, therefore, is not only technical skills but also instruments to engage, cognitively and culturally, with the epistemological challenges that this revolution poses. In this paper, we argue that epistemic insights can be introduced in AI teaching to highlight the differences between three paradigms: the imperative-procedural paradigm, the declarative-logic paradigm, and machine learning based on neural networks (in particular, deep learning). To do this, we analyze a teaching-learning activity, designed and implemented within a module on AI for upper secondary school students, in which the game of tic-tac-toe is addressed from these three alternative perspectives. We show how the epistemic issues of opacity, uncertainty, and emergence, which the philosophical literature highlights as characterizing the novelty of deep learning with respect to other approaches, allow us to build the scaffolding for establishing a dialogue between the three paradigms.
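The contrast between the first two paradigms can be made concrete with a minimal sketch. The code below is purely illustrative (all names and the board encoding are our assumptions, not the module's actual materials): the imperative-procedural version steps through the winning lines explicitly, while the declarative version expresses *what* a win is as a single logical predicate. The machine learning paradigm, by contrast, would not encode the winning condition at all, but would learn an evaluation of board states from examples of played games.

```python
# Illustrative sketch (hypothetical names): checking for a tic-tac-toe win
# in an imperative-procedural style versus a declarative-logic style.
# Board encoding assumed here: a 3x3 list of lists holding "X", "O", or None.

# The eight winning lines: three rows, three columns, two diagonals.
WIN_LINES = (
    [[(r, c) for c in range(3)] for r in range(3)]                  # rows
    + [[(r, c) for r in range(3)] for c in range(3)]                # columns
    + [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]  # diagonals
)


def wins_imperative(board, player):
    """Imperative-procedural style: prescribe HOW to find a win,
    stepping through each candidate line and each cell in turn."""
    for line in WIN_LINES:
        line_complete = True
        for r, c in line:
            if board[r][c] != player:
                line_complete = False
                break
        if line_complete:
            return True
    return False


def wins_declarative(board, player):
    """Declarative-logic flavour: state WHAT a win is as one predicate
    ('there exists a line all of whose cells belong to the player')."""
    return any(all(board[r][c] == player for r, c in line) for line in WIN_LINES)
```

A neural-network treatment of the same game would replace both functions with a model trained on example positions, which is precisely where the issues of opacity, uncertainty, and emergence discussed in the paper arise.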
Keywords