A manifesto on explainability for artificial intelligence in medicine
References
Langer, 2021, What do we want from explainable artificial intelligence (XAI)? - A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research, Artificial Intelligence, 296, 10.1016/j.artint.2021.103473
Holzinger, 2019, Causability and explainability of artificial intelligence in medicine, WIREs Data Min Knowl Discov, 9
Tjoa, 2021, A survey on explainable artificial intelligence (XAI): toward medical XAI, IEEE Trans Neural Netw Learn Syst, 32, 4793, 10.1109/TNNLS.2020.3027314
Bozzola, 1996, A hybrid neuro-fuzzy system for ECG classification of myocardial infarction, 241
Adhikari, 2019, LEAFAGE: Example-based and feature importance-based explanations for black-box ML models, 1
Ahn, 2020, Explaining deep learning-based traffic classification using a genetic algorithm, IEEE Access, 9, 4738, 10.1109/ACCESS.2020.3048348
Holzinger, 2021, Toward human-AI interfaces to support explainability and causability in medical AI, IEEE Comput, 54, 78, 10.1109/MC.2021.3092610
Maweu, 2021, CEFEs: A CNN explainable framework for ECG signals, Artif Intell Med, 115, 10.1016/j.artmed.2021.102059
Pennisi, 2021, An explainable AI system for automated COVID-19 assessment and lesion categorization from CT-scans, Artif Intell Med, 118, 10.1016/j.artmed.2021.102114
Yeboah, 2020, An explainable and statistically validated ensemble clustering model applied to the identification of traumatic brain injury subgroups, IEEE Access, 8, 180690, 10.1109/ACCESS.2020.3027453
Gu, 2020, A case-based ensemble learning system for explainable breast cancer recurrence prediction, Artif Intell Med, 107, 10.1016/j.artmed.2020.101858
El-Sappagh, 2018, An ontology-based interpretable fuzzy decision support system for diabetes diagnosis, IEEE Access, 6, 37371, 10.1109/ACCESS.2018.2852004
Kavya, 2021, Machine learning and XAI approaches for allergy diagnosis, Biomed Signal Process Control, 69, 10.1016/j.bspc.2021.102681
Schoonderwoerd, 2021, Human-centered XAI: Developing design patterns for explanations of clinical decision support systems, Int J Hum Comput Stud, 154, 10.1016/j.ijhcs.2021.102684
Dragoni, 2020, Explainable AI meets persuasiveness: Translating reasoning results into behavioral change advice, Artif Intell Med, 105, 10.1016/j.artmed.2020.101840
Reyes, 2020, On the interpretability of artificial intelligence in radiology: Challenges and opportunities, Radiol Artif Intell, 2, 10.1148/ryai.2020190043
Landauer, 1995
Guidotti, 2019, A survey of methods for explaining black box models, ACM Comput Surv, 51, 93:1, 10.1145/3236009
Markus, 2020, The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies, J Biomed Inform
Barda, 2020, A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare, BMC Med Inform Decis Mak, 20, 1, 10.1186/s12911-020-01276-x
Mencar, 2018, Paving the way to explainable artificial intelligence with fuzzy modeling, 215
Zhou, 2021, Evaluating the quality of machine learning explanations: A survey on methods and metrics, Electronics, 10, 593, 10.3390/electronics10050593
Montavon, 2018, Methods for interpreting and understanding deep neural networks, Digit Signal Process, 73, 1, 10.1016/j.dsp.2017.10.011
Holzinger, 2021, Towards multi-modal causability with graph neural networks enabling information fusion for explainable AI, Inf Fusion, 71, 28, 10.1016/j.inffus.2021.01.008
Hudec, 2021, Classification by ordinal sums of conjunctive and disjunctive functions for explainable AI and interpretable machine learning solutions, Knowl Based Syst, 220, 10.1016/j.knosys.2021.106916
Brooke, 2013, SUS: A retrospective, J Usability Stud, 8, 29
Holzinger, 2020, Measuring the quality of explanations: the system causability scale (SCS), 1
Petkovic, 2018, Improving the explainability of random forest classifier–user centered approach, 204
Mensio, 2020, Mitigating bias in deep nets with knowledge bases: The case of natural language understanding for robots, 1
Confalonieri, 2019
Adler-Milstein, 2021, Next-generation artificial intelligence for diagnosis: From predicting diagnostic labels to "wayfinding", JAMA, 10.1001/jama.2021.22396
Bellazzi, 2008, Predictive data mining in clinical medicine: current issues and guidelines, Int J Med Inform, 77, 81, 10.1016/j.ijmedinf.2006.11.006
Brachman, 2004
Nemati, 2002, Knowledge warehouse: an architectural integration of knowledge management, decision support, artificial intelligence and data warehousing, Decis Support Syst, 33, 143, 10.1016/S0167-9236(01)00141-5
Schreiber, 2000
Vaisman, 2022
European Commission, 2020
Jin, 2022, Evaluating explainable AI on a multi-modal medical imaging task: Can existing algorithms fulfill clinical requirements?, 11945
Payrovnaziri, 2020, Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review, J Am Med Inform Assoc, 27, 1173, 10.1093/jamia/ocaa053
Holzinger, 2021, Explainable AI and multi-modal causability in medicine, I-Com, 19, 171, 10.1515/icom-2020-0024
Powsner, 2000, Clinicians are from Mars and pathologists are from Venus: Clinician interpretation of pathology reports, Arch Pathol Lab Med, 124, 1040, 10.5858/2000-124-1040-CAFMAP
Chen, 2018, A natural language processing system that links medical terms in electronic health record notes to lay definitions: System development using physician reviews, J Med Internet Res, 20, 10.2196/jmir.8669
Rau, 2020, Parental understanding of crucial medical jargon used in prenatal prematurity counseling, BMC Med Inform Decis Mak, 20, 169, 10.1186/s12911-020-01188-w
Combi, 2017, A methodological framework for the integrated design of decision-intensive care pathways - an application to the management of COPD patients, J Healthc Inform Res, 1, 157, 10.1007/s41666-017-0007-4
Holzinger, 2021, Information fusion as an integrative cross-cutting enabler to achieve robust, explainable, and trustworthy medical artificial intelligence, Inf Fusion, 79, 263
Mueller, 2021, The ten commandments of ethical medical AI, IEEE Comput, 54, 119, 10.1109/MC.2021.3074263
Stoeger, 2021, Medical artificial intelligence: The European legal perspective, Commun ACM, 64, 34, 10.1145/3458652
Hempel, 1948, Studies in the logic of explanation, Philos Sci, 15, 135, 10.1086/286983
Popper, 1935
Pearl, 2019, The seven tools of causal inference, with reflections on machine learning, Commun ACM, 62, 54, 10.1145/3241036
Miller, 2019, Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence, 267, 1, 10.1016/j.artint.2018.07.007
Kempt, 2022, Relative explainability and double standards in medical decision-making, Ethics Inf Technol, 24, 20, 10.1007/s10676-022-09646-x
Nicora, 2022, Evaluating pointwise reliability of machine learning prediction, J Biomed Inform, 10.1016/j.jbi.2022.103996
Weller, 2019, Transparency: Motivations and challenges, 23
Ying, 2019, GNNexplainer: Generating explanations for graph neural networks, 9240
Agarwal, 2021, Towards a unified framework for fair and stable graph representation learning
Abdul, 2018, Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda, 1
Wang, 2019, Designing theory-driven user-centric explainable AI, 1
Liao, 2020, Questioning the AI: informing design practices for explainable AI user experiences, 1
Holm, 2019, In defense of the black box, Science, 364, 26, 10.1126/science.aax0162
Ardila, 2019, End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography, Nat Med, 25, 954, 10.1038/s41591-019-0447-x
Kleppe, 2021, Designing deep learning studies in cancer diagnostics, Nat Rev Cancer, 21, 199, 10.1038/s41568-020-00327-9
Babic, 2021, Beware explanations from AI in health care, Science, 373, 284, 10.1126/science.abg1834
Raji, 2020, Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing, 33
Rivera, 2020, Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension, BMJ, 370
Gysi, 2021, Network medicine framework for identifying drug-repurposing opportunities for COVID-19, Proc Natl Acad Sci, 118
Zitnik, 2019, Evolution of resilience in protein interactomes across the tree of life, Proc Natl Acad Sci, 116, 4426, 10.1073/pnas.1818013116
Gulshan, 2016, Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs, JAMA, 316, 2402, 10.1001/jama.2016.17216
Poplin, 2018, Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning, Nat Biomed Eng, 2, 158, 10.1038/s41551-018-0195-0
Cao, 2022, AI in combating the COVID-19 pandemic, IEEE Intell Syst, 37, 3, 10.1109/MIS.2022.3164313
Rudie, 2020, Subspecialty-level deep gray matter differential diagnoses with deep learning and Bayesian networks on clinical brain MRI: A pilot study, Radiol Artif Intell, 2, 10.1148/ryai.2020190146