Explainable AI for clinical and remote health applications: a survey on tabular and time series data

Artificial Intelligence Review, Volume 56, pages 5261–5315, 2022
Flavio Di Martino1, Franca Delmastro1
1Institute for Informatics and Telematics (IIT), National Research Council of Italy (CNR), Pisa, Italy

Abstract

Nowadays, Artificial Intelligence (AI) has become a fundamental component of healthcare applications, both clinical and remote, but the best-performing AI systems are often too complex to be self-explaining. Explainable AI (XAI) techniques are designed to unveil the reasoning behind a system's predictions and decisions, and they become even more critical when dealing with sensitive and personal health data. It is worth noting that XAI has not gathered the same attention across different research areas and data types, especially in healthcare. In particular, many clinical and remote health applications are based on tabular and time series data, respectively, and XAI is not commonly analysed on these data types, while computer vision and Natural Language Processing (NLP) are the reference applications. To provide an overview of the XAI methods most suitable for tabular and time series data in the healthcare domain, this paper reviews the literature of the last five years, illustrating the types of explanations generated and the efforts made to evaluate their relevance and quality. Specifically, we identify clinical validation, consistency assessment, objective and standardised quality evaluation, and human-centred quality assessment as key features for ensuring effective explanations for end users. Finally, we highlight the main research challenges in the field as well as the limitations of existing XAI methods.

Tài liệu tham khảo

Ahmad T, Munir A, Bhatti SH, Aftab M, Raza MA (2017) Survival analysis of heart failure patients: a case study. PLoS ONE 12(7):0181001 Alvarez Melis D, Jaakkola T (2018) Towards robust interpretability with self-explaining neural networks. In: Advances in Neural Information Processing Systems, vol 31 Alvarez-Melis D, Jaakkola TS (2018) On the robustness of interpretability methods. arXiv preprint arXiv:1806.08049 Alves MA, Castro GZ, Oliveira BAS, Ferreira LA, Ramírez JA, Silva R, Guimarães FG (2021) Explaining machine learning based diagnosis of covid-19 from routine blood tests with decision trees and criteria graphs. Comput Biol Med 132:104335 Amann J, Blasimme A, Vayena E, Frey D, Madai VI (2020) Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak 20(1):1–9 Ang ET, Nambiar M, Soh YS, Tan VY (2021) An interpretable intensive care unit mortality risk calculator. In: 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp 4152–4158. IEEE Antoniadi AM, Galvin M, Heverin M, Hardiman O, Mooney C (2021) Prediction of caregiver quality of life in amyotrophic lateral sclerosis using explainable machine learning. Sci Rep 11(1):1–13 Antoniadi AM, Du Y, Guendouz Y, Wei L, Mazo C, Becker BA, Mooney C (2021) Current challenges and future opportunities for xai in machine learning-based clinical decision support systems: a systematic review. Appl Sci 11(11):5088 Apley DW, Zhu J (2020) Visualizing the effects of predictor variables in black box supervised learning models. J R Stat Soc Ser B 82(4):1059–1086 Arık SO, Pfister T (2021) Tabnet: attentive interpretable tabular learning. In: AAAI, vol 35, pp 6679–6687 Arrieta AB, Díaz-Rodríguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, García S, Gil-López S, Molina D, Benjamins R et al (2020) Explainable artificial intelligence (xai): concepts, taxonomies, opportunities and challenges toward responsible ai. 
Inf Fusion 58:82–115 Arrotta L, Civitarese G, Bettini C (2022) Dexar: deep explainable sensor-based activity recognition in smart-home environments. Proc ACM Interact Mob Wear Ubiquitous Technol 6(1):1–30 Bach S, Binder A, Montavon G, Klauschen F, Müller K-R, Samek W (2015) On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS ONE 10(7):0130140 Bahdanau D, Cho K, Bengio Y (2014) Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473 Barakat NH, Bradley AP (2007) Rule extraction from support vector machines: a sequential covering approach. IEEE Trans Knowl Data Eng 19(6):729–741 Barda AJ, Horvat CM, Hochheiser H (2020) A qualitative research framework for the design of user-centered displays of explanations for machine learning model predictions in healthcare. BMC Med Inform Decis Mak 20(1):1–16 Bau D, Zhou B, Khosla A, Oliva A, Torralba A (2017) Network dissection: quantifying interpretability of deep visual representations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 6541–6549 Beebe-Wang N, Okeson A, Althoff T, Lee S-I (2021) Efficient and explainable risk assessments for imminent dementia in an aging cohort study. IEEE J Biomed Health Inform 25(7):2409–2420 Bennett DA, Schneider JA, Buchman AA, Barnes LL, Boyle PA, Wilson RS (2012) Overview and findings from the rush memory and aging project. Curr Alzheimer Res 9(6):646–663 Bjerring JC, Busch J (2021) Artificial intelligence and patient-centered decision-making. Philos Technol 34(2):349–371 Bois MD, El Yacoubi MA, Ammi M (2020) Interpreting deep glucose predictive models for diabetic people using retain. In: International Conference on Pattern Recognition and Artificial Intelligence, pp 685–694. Springer Bonaz B, Sinniger V, Pellissier S (2020) Targeting the cholinergic anti-inflammatory pathway with vagus nerve stimulation in patients with covid-19? 
Bioelectron Med 6(1):1–7 Bruckert S, Finzel B, Schmid U (2020) The next generation of medical decision support: a roadmap toward transparent expert companions. Front Artif Intell 3:507973 Cavaliere F, Della Cioppa A, Marcelli A, Parziale A, Senatore R (2020) Parkinson’s disease diagnosis: towards grammar-based explainable artificial intelligence. In: 2020 IEEE Symposium on Computers and Communications (ISCC), pp 1–6. IEEE Chen P, Dong W, Wang J, Lu X, Kaymak U, Huang Z (2020) Interpretable clinical prediction via attention-based neural network. BMC Med Inform Decis Mak 20(3):1–9 Cheng F, Liu D, Du F, Lin Y, Zytek A, Li H, Qu H, Veeramachaneni K (2021) Vbridge: connecting the dots between features and data to explain healthcare models. IEEE Trans Vis Comput Gr 28(1):378–388 Chmiel F, Burns D, Azor M, Borca F, Boniface M, Zlatev Z, White N, Daniels T, Kiuber M (2021) Using explainable machine learning to identify patients at risk of reattendance at discharge from emergency departments. Sci Rep 11(1):1–11 Choi E, Bahadori MT, Sun J, Kulas J, Schuetz A, Stewart W (2016) Retain: an interpretable predictive model for healthcare using reverse time attention mechanism. In: Advances in Neural Information Processing Systems, vol 29 Cho S, Lee G, Chang W, Choi J (2020) Interpretation of deep temporal representations by selective visualization of internally activated nodes. arXiv preprint arXiv:2004.12538 Cinà G, Röber T, Goedhart R, Birbil I (2022) Why we do need explainable ai for healthcare. arXiv preprint arXiv:2206.15363 Clifford GD, Liu C, Moody B, Li-wei HL, Silva I, Li Q, Johnson A, Mark RG (2017) Af classification from a short single lead ecg recording: The physionet/computing in cardiology challenge 2017. In: 2017 Computing in Cardiology (CinC), pp 1–4. IEEE Clifford GD, Liu C, Moody BE, Roig JM, Schmidt SE, Li Q, Silva I, Mark RG (2017) Recent advances in heart sound analysis. 
Physiol Meas 38:10–25 Costa ABD, Moreira L, Andrade DCD, Veloso A, Ziviani N (2021) Predicting the evolution of pain relief: ensemble learning by diversifying model explanations. ACM Trans Comput Healthcare 2(4):1–28 Cox DR (1992) Regression models and life-tables. breakthroughs in statistics. Stat Soc 372:527–541 Curtis C, Shah SP, Chin S-F, Turashvili G, Rueda OM, Dunning MJ, Speed D, Lynch AG, Samarajiwa S, Yuan Y et al (2012) The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups. Nature 486(7403):346–352 Das A, Rad P (2020) Opportunities and challenges in explainable artificial intelligence (xai): a survey. arXiv preprint arXiv:2006.11371 Dau HA, Bagnall A, Kamgar K, Yeh C-CM, Zhu Y, Gharghabi S, Ratanamahatana CA, Keogh E (2019) The ucr time series archive. IEEE/CAA J Autom Sin 6(6):1293–1305 Davagdorj K, Bae J-W, Pham V-H, Theera-Umpon N, Ryu KH (2021) Explainable artificial intelligence based framework for non-communicable diseases prediction. IEEE Access 9:123672–123688 Deng H (2019) Interpreting tree ensembles with intrees. Int J Data Sci Anal 7(4):277–287 Devlin J, Chang M-W, Lee K, Toutanova K (2018) Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 Diprose WK, Buist N, Hua N, Thurier Q, Shand G, Robinson R (2020) Physician understanding, explainability, and trust in a hypothetical machine learning risk calculator. J Am Med Inform Assoc 27(4):592–600 Dissanayake T, Fernando T, Denman S, Sridharan S, Ghaemmaghami H, Fookes C (2020) A robust interpretable deep learning classifier for heart anomaly detection without segmentation. IEEE J Biomed Health Inform 25(6):2162–2171 Doshi-Velez F, Kim B (2017) Towards a rigorous science of interpretable machine learning. 
arXiv preprint arXiv:1702.08608 Drew BJ, Harris P, Zègre-Hemsey JK, Mammone T, Schindler D, Salas-Boni R, Bai Y, Tinoco A, Ding Q, Hu X (2014) Insights into the problem of alarm fatigue with physiologic monitor devices: a comprehensive observational study of consecutive intensive care unit patients. PLoS ONE 9(10):110274 Du M, Liu N, Hu X (2019) Techniques for interpretable machine learning. Commun ACM 63(1):68–77 Duckworth C, Chmiel FP, Burns DK, Zlatev ZD, White NM, Daniels TW, Kiuber M, Boniface MJ (2021) Using explainable machine learning to characterise data drift and detect emergent health risks for emergency department admissions during covid-19. Sci Rep 11(1):1–10 Duell J, Fan X, Burnett B, Aarts G, Zhou S-M (2021) A comparison of explanations given by explainable artificial intelligence methods on analysing electronic health records. In: 2021 IEEE EMBS International Conference on Biomedical and Health Informatics (BHI), pp 1–4. IEEE Dwivedi P, Khan AA, Mugde S, Sharma G (2021) Diagnosing the major contributing factors in the classification of the fetal health status using cardiotocography measurements: An automl and xai approach. In: 2021 13th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), pp 1–6. IEEE El-Bouri R, Eyre DW, Watkinson P, Zhu T, Clifton DA (2020) Hospital admission location prediction via deep interpretable networks for the year-round improvement of emergency patient care. IEEE J Biomed Health Inform 25(1):289–300 Elshawi R, Al-Mallah MH, Sakr S (2019) On the interpretability of machine learning-based model for predicting hypertension. BMC Med Inform Decis Mak 19(1):1–32 ElShawi R, Sherif Y, Al-Mallah M, Sakr S (2019) Ilime: local and global interpretable model-agnostic explainer of black-box decision. In: European Conference on Advances in Databases and Information Systems, pp 53–68. Springer Faruk MF (2021) Residualcovid-net: An interpretable deep network to screen covid-19 utilizing chest ct images. 
In: 2021 3rd International Conference on Electrical & Electronic Engineering (ICEEE), pp 69–72. IEEE Filtjens B, Ginis P, Nieuwboer A, Afzal MR, Spildooren J, Vanrumste B, Slaets P (2021) Modelling and identification of characteristic kinematic features preceding freezing of gait with convolutional neural networks and layer-wise relevance propagation. BMC Med Inform Decis Mak 21(1):1–11 Friedman JH (2001) Greedy function approximation: a gradient boosting machine. Ann Stat 29(5):1189–1232 Friedman JH, Popescu BE (2008) Predictive learning via rule ensembles. Ann Appl Stat 2(3):916–954 Ghorbani A, Wexler J, Zou JY, Kim B (2019) Towards automatic concept-based explanations. In: Advances in Neural Information Processing Systems, vol 32 Goldberger AL, Amaral LA, Glass L, Hausdorff JM, Ivanov PC, Mark RG, Mietus JE, Moody GB, Peng C-K, Stanley HE (2000) Physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals. Circulation 101(23):215–220 Goldstein A, Kapelner A, Bleich J, Pitkin E (2015) Peeking inside the black box: visualizing statistical learning with plots of individual conditional expectation. J Comput Graph Stat 24(1):44–65 Goyal Y, Feder A, Shalit U, Kim B (2019) Explaining classifiers with causal concept effect (cace). arXiv preprint arXiv:1907.07165 Guidotti R, Monreale A, Ruggieri S, Pedreschi D, Turini F, Giannotti F (2018) Local rule-based explanations of black box decision systems. arXiv preprint arXiv:1805.10820 Gullapalli BT, Carreiro S, Chapman BP, Ganesan D, Sjoquist J, Rahman T (2021) Opitrack: a wearable-based clinical opioid use tracker with temporal convolutional attention networks. Proc ACM Interact Mob Wear Ubiquitous Technol 5(3):1–29 Gulum MA, Trombley CM, Kantardzic M (2021) A review of explainable deep learning cancer detection models in medical imaging. 
Appl Sci 11(10):4573 Gupta A, Jain J, Poundrik S, Shetty MK, Girish M, Gupta MD (2021) Interpretable ai model-based predictions of ecg changes in covid-recovered patients. In: 2021 4th International Conference on Bio-Engineering for Smart Technologies (BioSMART), pp 1–5. IEEE Guvenir HA, Acar B, Demiroz G, Cekin A (1997) A supervised machine learning algorithm for arrhythmia analysis. In: Computers in Cardiology 1997, pp 433–436. IEEE Hardt M, Price E, Srebro N (2016) Equality of opportunity in supervised learning. In: Advances in Neural Information Processing Systems, vol 29 Hartl A, Bachl M, Fabini J, Zseby T (2020) Explainability and adversarial robustness for rnns. In: 2020 IEEE Sixth International Conference on Big Data Computing Service and Applications (BigDataService), pp 148–156. IEEE Hatwell J, Gaber MM, Atif Azad RM (2020) Ada-whips: explaining adaboost classification with applications in the health sciences. BMC Med Inform Decis Mak 20(1):1–25 He L, Liu H, Yang Y, Wang B (2021) A multi-attention collaborative deep learning approach for blood pressure prediction. ACM Trans Manag Inf Syst 13(2):1–20 He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 770–778 Holzinger AT, Muller H (2021) Toward human-ai interfaces to support explainability and causability in medical ai. Computer 54(10):78–86 Holzinger A, Carrington A, Müller H (2020) Measuring the quality of explanations: the system causability scale (scs). KI-Künstliche Intell 34(2):193–198 Horsak B, Slijepcevic D, Raberger A-M, Schwab C, Worisch M, Zeppelzauer M (2020) Gaitrec, a large-scale ground reaction force dataset of healthy and impaired gait. Sci Data 7(1):1–8 Howard AG, Zhu M, Chen B, Kalenichenko D, Wang W, Weyand T, Andreetto M, Adam H (2017) Mobilenets: efficient convolutional neural networks for mobile vision applications. 
arXiv preprint arXiv:1704.04861 Hsieh T-Y, Wang S, Sun Y, Honavar V (2021) Explainable multivariate time series classification: a deep neural network which learns to attend to important variables as well as time intervals. In: Proceedings of the 14th ACM International Conference on Web Search and Data Mining, pp 607–615 Ibrahim L, Mesinovic M, Yang K-W, Eid MA (2020) Explainable prediction of acute myocardial infarction using machine learning and shapley values. IEEE Access 8:210410–210417 Ishwaran H, Kogalur UB, Blackstone EH, Lauer MS (2008) Random survival forests. Ann Appl Stat 2(3):841–860 Ivaturi P, Gadaleta M, Pandey AC, Pazzani M, Steinhubl SR, Quer G (2021) A comprehensive explanation framework for biomedical time series classification. IEEE J Biomed Health Inform 25(7):2398–2408 Jian J-Y, Bisantz AM, Drury CG (1998) Towards an empirically determined scale of trust in computerized systems: distinguishing concepts and types of trust. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol 42, pp 501–505. SAGE Publications Sage CA, Los Angeles, CA Jiang J, Hewner S, Chandola V (2021) Explainable deep learning for readmission prediction with tree-glove embedding. In: 2021 IEEE 9th International Conference on Healthcare Informatics (ICHI), pp 138–147. IEEE Johnson AE, Pollard TJ, Shen L, Lehman L-wH, Feng M, Ghassemi M, Moody B, Szolovits P, Anthony Celi L, Mark RG (2016) Mimic-iii, a freely accessible critical care database. Sci Data 3(1):1–9 Jung J-M, Kim Y-H, Yu S, Kyungmi O, Kim CK, Song T-J, Kim Y-J, Kim BJ, Heo SH, Park K-Y et al (2019) Long-term outcomes of real-world Korean patients with atrial-fibrillation-related stroke and severely decreased ejection fraction. J Clin Neurol 15(4):545–554 Kapcia M, Eshkiki H, Duell J, Fan X, Zhou S, Mora B (2021) Exmed: an ai tool for experimenting explainable ai techniques on medical data analytics. 
In: 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), pp 841–845. IEEE Khodabandehloo E, Riboni D, Alimohammadi A (2021) Healthxai: collaborative and explainable ai for supporting early diagnosis of cognitive decline. Futur Gener Comput Syst 116:168–189 Kim S-H, Jeon E-T, Yu S, Oh K, Kim CK, Song T-J, Kim Y-J, Heo SH, Park K-Y, Kim J-M et al (2021) Interpretable machine learning for early neurological deterioration prediction in atrial fibrillation-related stroke. Sci Rep 11(1):1–9 Kim L, Kim J-A, Kim S (2014) A guide for the utilization of health insurance review and assessment service national patient samples. Epidemiology and health 36 Kim B, Wattenberg M, Gilmer J, Cai C, Wexler J, Viegas F, et al (2018) Interpretability beyond feature attribution: quantitative testing with concept activation vectors (tcav). In: International Conference on Machine Learning, pp 2668–2677. PMLR Kindermans P-J, Hooker S, Adebayo J, Alber M, Schütt KT, Dähne S, Erhan D, Kim B (2019) The (un) reliability of saliency methods. In: Explainable AI: interpreting, explaining and visualizing deep learning, pp 267–280. Springer Knaus WA, Harrell FE, Lynn J, Goldman L, Phillips RS, Connors AF, Dawson NV, Fulkerson WJ, Califf RM, Desbiens N et al (1995) The support prognostic model: objective estimates of survival for seriously ill hospitalized adults. Ann Intern Med 122(3):191–203 Kok I, Okay FY, Muyanli O, Ozdemir S (2022) Explainable artificial intelligence (xai) for internet of things: a survey. arXiv preprint arXiv:2206.04800 Kovalchuk SV, Kopanitsa GD, Derevitskii IV, Matveev GA, Savitskaya DA (2022) Three-stage intelligent support of clinical decision making for higher trust, validity, and explainability. J Biomed Inform 127:104013 Krishnakumar S, Abdou T (2020) Towards interpretable and maintainable supervised learning using shapley values in arrhythmia. 
In: Proceedings of the 30th Annual International Conference on Computer Science and Software Engineering, pp 23–32 Kumarakulasinghe NB, Blomberg T, Liu J, Leao AS, Papapetrou P (2020) Evaluating local interpretable model-agnostic explanations on clinical machine learning classification models. In: 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS), pp 7–12. IEEE Kwon BC, Choi M-J, Kim JT, Choi E, Kim YB, Kwon S, Sun J, Choo J (2018) Retainvis: visual analytics with interpretable and interactive recurrent neural networks on electronic medical records. IEEE Trans Vis Comput Gr 25(1):299–309 Lauritsen SM, Kristensen M, Olsen MV, Larsen MS, Lauritsen KM, Jørgensen MJ, Lange J, Thiesson B (2020) Explainable artificial intelligence model to predict acute critical illness from electronic health records. Nat Commun 11(1):1–11 Lemeshow S, May S, Hosmer DW Jr (2011) Applied survival analysis: regression modeling of time-to-event data. Wiley, New York Leung CK, Fung DL, Mai D, Wen Q, Tran J, Souza J (2021) Explainable data analytics for disease and healthcare informatics. In: 25th International Database Engineering & Applications Symposium, pp 65–74 Li B, Sano A (2020) Extraction and interpretation of deep autoencoder-based temporal features from wearables for forecasting personalized mood, health, and stress. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4(2):1–26 Linardatos P, Papastefanopoulos V, Kotsiantis S (2020) Explainable ai: a review of machine learning interpretability methods. Entropy 23(1):18 Lin J, Keogh E, Lonardi S, Chiu B (2003) A symbolic representation of time series, with implications for streaming algorithms. In: Proceedings of the 8th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery, pp 2–11 Lisboa PJ, Ortega-Martorell S, Olier I (2020) Explaining the neural network: A case study to model the incidence of cervical cancer. 
In: International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, pp 585–598. Springer Looveren AV, Klaise J (2021) Interpretable counterfactual explanations guided by prototypes. In: Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp 650–665. Springer Lu J, Jin R, Song E, Alrashoud M, Al-Mutib KN, Al-Rakhami MS (2020) An explainable system for diagnosis and prognosis of covid-19. IEEE Internet Things J 8(21):15839–15846 Lundberg SM, Lee S-I (2017) A unified approach to interpreting model predictions. Adv Neural Inf Process Syst 30 Lundberg SM, Erion G, Chen H, DeGrave A, Prutkin JM, Nair B, Katz R, Himmelfarb J, Bansal N, Lee S-I (2020) From local explanations to global understanding with explainable ai for trees. Nat Mach Intell 2(1):2522–5839 Luo J, Ye M, Xiao C, Ma F (2020) Hitanet: Hierarchical time-aware attention networks for risk prediction on electronic health records. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp 647–656 Markus AF, Kors JA, Rijnbeek PR (2021) The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies. J Biomed Inform 113:103655 Ma D, Wang Z, Xie J, Guo B, Yu Z (2020) Interpretable multivariate time series classification based on prototype learning. In: International Conference on Green, Pervasive, and Cloud Computing, pp 205–216. Springer Maweu BM, Dakshit S, Shamsuddin R, Prabhakaran B (2021) Cefes: a cnn explainable framework for ecg signals. Artif Intell Med 115:102059 Mikalsen KØ, Bianchi FM, Soguero-Ruiz C, Skrøvseth SO, Lindsetmo R-O, Revhaug A, Jenssen R (2016) Learning similarities between irregularly sampled short multivariate time series from ehrs Mishra S, Dutta S, Long J, Magazzeni D (2021) A survey on the robustness of feature importance and counterfactual explanations. 
arXiv preprint arXiv:2111.00358 Mohseni S, Block JE, Ragan ED (2018) A human-grounded evaluation benchmark for local explanations of machine learning. arXiv preprint arXiv:1801.05075 Moncada-Torres A, van Maaren MC, Hendriks MP, Siesling S, Geleijnse G (2021) Explainable machine learning can outperform cox regression predictions and provide insights in breast cancer survival. Sci Rep 11(1):1–13 Mondal AK, Bhattacharjee A, Singla P, Prathosh A (2021) xvitcos: Explainable vision transformer based covid-19 screening using radiography. IEEE J Transl Eng Health Med 10:1–10 Moreno-Sanchez PA (2020) Development of an explainable prediction model of heart failure survival by using ensemble trees. In: 2020 IEEE International Conference on Big Data (Big Data), pp 4902–4910. IEEE Morris MD (1991) Factorial sampling plans for preliminary computational experiments. Technometrics 33(2):161–174 Mousavi S, Afghah F, Acharya UR (2020) Han-ecg: an interpretable atrial fibrillation detection model using hierarchical attention networks. Comput Biol Med 127:104057 Muller H, Mayrhofer MT, Van Veen E-B, Holzinger A (2021) The ten commandments of ethical medical ai. Computer 54(07):119–123 Nicolaides AN, Kakkos SK, Kyriacou E, Griffin M, Sabetai M, Thomas DJ, Tegos T, Geroulakos G, Labropoulos N, Doré CJ et al (2010) Asymptomatic internal carotid artery stenosis and cerebrovascular risk stratification. J Vasc Surg 52(6):1486–1496 Nori H, Jenkins S, Koch P, Caruana R (2019) Interpretml: A unified framework for machine learning interpretability. arXiv preprint arXiv:1909.09223 Oba Y, Tezuka T, Sanuki M, Wagatsuma Y (2021) Interpretable prediction of diabetes from tabular health screening records using an attentional neural network. In: 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA), pp 1–11. IEEE Obeid I, Picone J (2016) The temple university hospital eeg data corpus. 
Front Neurosci 10:196 Ochoa JGD, Csiszár O, Schimper T (2021) Medical recommender systems based on continuous-valued logic and multi-criteria decision operators, using interpretable neural networks. BMC Med Inform Decis Mak 21(1):1–15 Okay FY, Yıldırım M, Özdemir S (2021) Interpretable machine learning: a case study of healthcare. In: 2021 International Symposium on Networks, Computers and Communications (ISNCC), pp 1–6. IEEE Oviedo F, Ren Z, Sun S, Settens C, Liu Z, Hartono NTP, Ramasamy S, DeCost BL, Tian SI, Romano G et al (2019) Fast and interpretable classification of small x-ray diffraction datasets using data augmentation and deep neural networks. NPJ Comput Mater 5(1):1–9 Pal A, Sankarasubbu M (2021) Pay attention to the cough: early diagnosis of covid-19 using interpretable symptoms embeddings with cough sound signal processing. In: Proceedings of the 36th Annual ACM Symposium on Applied Computing, pp 620–628 Pan SJ, Tsang IW, Kwok JT, Yang Q (2010) Domain adaptation via transfer component analysis. IEEE Trans Neural Networks 22(2):199–210 Pang X, Forrest CB, Lê-Scherban F, Masino AJ (2019) Understanding early childhood obesity via interpretation of machine learning model predictions. In: 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), pp 1438–1443. IEEE Pataky TC (2010) Generalized n-dimensional biomechanical field analysis using statistical parametric mapping. J Biomech 43(10):1976–1982 Payrovnaziri SN, Chen Z, Rengifo-Moreno P, Miller T, Bian J, Chen JH, Liu X, He Z (2020) Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review. J Am Med Inform Assoc 27(7):1173–1185 Penafiel S, Baloian N, Sanson H, Pino JA (2020) Predicting stroke risk with an interpretable classifier. 
IEEE Access 9:1154–1166 Pereira CR, Pereira DR, Da Silva FA, Hook C, Weber SA, Pereira LA, Papa JP (2015) A step towards the automated diagnosis of parkinson’s disease: Analyzing handwriting movements. In: 2015 IEEE 28th International Symposium on Computer-based Medical Systems, pp 171–176. IEEE Pereira T, Ding C, Gadhoumi K, Tran N, Colorado RA, Meisel K, Hu X (2019) Deep learning approaches for plethysmography signal quality assessment in the presence of atrial fibrillation. Physiol Meas 40(12):125002 Perez E, Strub F, De Vries H, Dumoulin V, Courville A (2018) Film: Visual reasoning with a general conditioning layer. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol 32 Plawiak P (2017) Ecg signals (1000 fragments). Mendeley Data, v3 Pölsterl S, Navab N, Katouzian A (2016) An efficient training algorithm for kernel survival support vector machines. arXiv preprint arXiv:1611.07054 Prentzas N, Nicolaides A, Kyriacou E, Kakas A, Pattichis C (2019) Integrating machine learning with symbolic reasoning to build an explainable ai model for stroke prediction. In: 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), pp 817–821. IEEE Radford A, Narasimhan K, Salimans T, Sutskever I (2018) Improving language understanding by generative pre-training Rashed-Al-Mahfuz M, Haque A, Azad A, Alyami SA, Quinn JM, Moni MA (2021) Clinically applicable machine learning approaches to identify attributes of chronic kidney disease (ckd) for use in low-cost diagnostic screening. IEEE J Transl Eng Health Med 9:1–11 Reyna MA, Josef C, Seyedi S, Jeter R, Shashikumar SP, Westover MB, Sharma A, Nemati S, Clifford GD (2019) Early prediction of sepsis from clinical data: the physionet/computing in cardiology challenge 2019. In: 2019 Computing in Cardiology (CinC), p 1. IEEE Ribeiro MT, Singh S, Guestrin C (2016) “Why should i trust you?” explaining the predictions of any classifier. 
In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp 1135–1144 Ribeiro MT, Singh S, Guestrin C (2018) Anchors: high-precision model-agnostic explanations. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol 32 Rojat T, Puget R, Filliat D, Del Ser J, Gelin R, Díaz-Rodríguez N (2021) Explainable artificial intelligence (xai) on timeseries data: a survey. arXiv preprint arXiv:2104.00950 Rubini LJ, Eswaran P (2015) Uci chronic kidney disease. School Inf. Comput., Sci. Univ, California, Irvine Sadhukhan D, Pal S, Mitra M (2018) Automated identification of myocardial infarction using harmonic phase distribution pattern of ecg data. IEEE Trans Instrum Meas 67(10):2303–2313 Sahakyan M, Aung Z, Rahwan T (2021) Explainable artificial intelligence for tabular data: a survey. IEEE Access 9:135392–135422 Sakr S, Elshawi R, Ahmed A, Qureshi WT, Brawner C, Keteyian S, Blaha MJ, Al-Mallah MH (2018) Using machine learning on cardiorespiratory fitness data for predicting hypertension: the henry ford exercise testing (fit) project. PLoS ONE 13(4):0195344 Saltelli A, Annoni P, Azzini I, Campolongo F, Ratto M, Tarantola S (2010) Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index. Comput Phys Commun 181(2):259–270 Schalk G, McFarland DJ, Hinterberger T, Birbaumer N, Wolpaw JR (2004) Bci 2000: a general-purpose brain-computer interface (bci) system. IEEE Trans Biomed Eng 51(6):1034–1043 Schölkopf B, Locatello F, Bauer S, Ke NR, Kalchbrenner N, Goyal A, Bengio Y (2021) Toward causal representation learning. Proc IEEE 109(5):612–634 Seedat N, Aharonson V, Hamzany Y (2020) Automated and interpretable m-health discrimination of vocal cord pathology enabled by machine learning. In: 2020 IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), pp 1–6. 
IEEE Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision, pp 618–626 Sha C, Cuperlovic-Culf M, Hu T (2021) Smile: systems metabolomics using interpretable learning and evolution. BMC Bioinform 22(1):1–17 Shafer G (2016) Dempster’s rule of combination. Int J Approx Reason 79:26–40 Shamout FE, Zhu T, Sharma P, Watkinson PJ, Clifton DA (2019) Deep interpretable early warning system for the detection of clinical deterioration. IEEE J Biomed Health Inform 24(2):437–446 Shankaranarayana SM, Runje D (2019) Alime: autoencoder based approach for local interpretability. In: International Conference on Intelligent Data Engineering and Automated Learning, pp 454–463. Springer Shashikumar SP, Josef CS, Sharma A, Nemati S (2021) Deepaise-an interpretable and recurrent neural survival model for early prediction of sepsis. Artif Intell Med 113:102036 Shrikumar A, Greenside P, Kundaje A (2017): Learning important features through propagating activation differences. In: International Conference on Machine Learning, pp 3145–3153. PMLR Siddiqui SA, Mercier D, Munir M, Dengel A, Ahmed S (2019) Tsviz: demystification of deep learning models for time-series analysis. IEEE Access 7:67027–67040 Simonyan K, Vedaldi A, Zisserman A (2013) Deep inside convolutional networks: visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034 Slijepcevic D, Horst F, Lapuschkin S, Horsak B, Raberger A-M, Kranzl A, Samek W, Breiteneder C, Schöllhorn WI, Zeppelzauer M (2021) Explaining machine learning models for clinical gait analysis. ACM Trans Comput Healthcare 3(2):1–27 Song X, Yu AS, Kellum JA, Waitman LR, Matheny ME, Simpson SQ, Hu Y, Liu M (2020) Cross-site transportability of an explainable artificial intelligence model for acute kidney injury prediction. 
Nat Commun 11(1):1–12 Spildooren J, Vercruysse S, Desloovere K, Vandenberghe W, Kerckhofs E, Nieuwboer A (2010) Freezing of gait in parkinson’s disease: the impact of dual-tasking and turning. Mov Disord 25(15):2563–2570 Springenberg JT, Dosovitskiy A, Brox T, Riedmiller M (2014) Striving for simplicity: the all convolutional net. arXiv preprint arXiv:1412.6806 Su Z, Figueiredo MC, Jo J, Zheng K, Chen Y (2020) Analyzing description, user understanding and expectations of ai in mobile health applications. In: AMIA Annual Symposium Proceedings, vol 2020, p 1170. American Medical Informatics Association Sun C, Dui H, Li H (2021) Interpretable time-aware and co-occurrence-aware network for medical prediction. BMC Med Inform Decis Mak 21(1):1–12 Sun Z, Dong W, Shi J, He K, Huang Z (2021) Attention-based deep recurrent model for survival prediction. ACM Trans Comput Healthcare 2(4):1–18 Sundararajan M, Taly A, Yan Q (2017) Axiomatic attribution for deep networks. In: International Conference on Machine Learning, pp 3319–3328. PMLR Tahmassebi A, Martin J, Meyer-Baese A, Gandomi AH (2020) An interpretable deep learning framework for health monitoring systems: a case study of eye state detection using eeg signals. In: 2020 IEEE Symposium Series on Computational Intelligence (SSCI), pp 211–218. IEEE Thimoteo LM, Vellasco MM, Amaral J, Figueiredo K, Yokoyama CL, Marques E (2022) Explainable artificial intelligence for covid-19 diagnosis through blood test variables. J Control Autom Electr Syst 33(2):625–644 Thorsen-Meyer H-C, Nielsen AB, Nielsen AP, Kaas-Hansen BS, Toft P, Schierbeck J, Strøm T, Chmura PJ, Heimann M, Dybdahl L et al (2020) Dynamic and explainable machine learning prediction of mortality in patients in the intensive care unit: a retrospective study of high-frequency data in electronic patient records. Lancet Digit Health 2(4):e179–e191 Tjoa E, Guan C (2020) A survey on explainable artificial intelligence (xai): toward medical xai.
IEEE Trans Neural Networks Learn Syst 32(11):4793–4813 Topol EJ (2019) High-performance medicine: the convergence of human and artificial intelligence. Nat Med 25(1):44–56 Van der Maaten L, Hinton G (2008) Visualizing data using t-sne. J Mach Learn Res 9(11):2579–2605 Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I (2017) Attention is all you need. In: Advances in Neural Information Processing Systems, vol 30 Wachter S, Mittelstadt B, Russell C (2017) Counterfactual explanations without opening the black box: automated decisions and the gdpr. Harv JL & Tech 31:841 Waitman LR, Aaronson LS, Nadkarni PM, Connolly DW, Campbell JR (2014) The greater plains collaborative: a pcornet clinical research data network. J Am Med Inform Assoc 21(4):637–641 Wang G, Zhou Y, Huang F-J, Tang H-D, Xu X-H, Liu J-J, Wang Y, Deng Y-L, Ren R-J, Xu W et al (2014) Plasma metabolite profiles of Alzheimer’s disease and mild cognitive impairment. J Proteome Res 13(5):2649–2658 Wang D, Yang Q, Abdul A, Lim BY (2019) Designing theory-driven user-centric explainable ai. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp 1–15 Ward IR, Wang L, Lu J, Bennamoun M, Dwivedi G, Sanfilippo FM (2021) Explainable artificial intelligence for pharmacovigilance: what features are important when predicting adverse outcomes? Comput Methods Programs Biomed 212:106415 Weiss SM, Indurkhya N, Zhang T (2010) Fundamentals of predictive text mining. Springer, New York Wexler J, Pushkarna M, Bolukbasi T, Wattenberg M, Viégas F, Wilson J (2019) The what-if tool: interactive probing of machine learning models. IEEE Trans Vis Comput Gr 26(1):56–65 Wickstrøm K, Mikalsen KØ, Kampffmeyer M, Revhaug A, Jenssen R (2020) Uncertainty-aware deep ensembles for reliable and explainable predictions of clinical time series.
IEEE J Biomed Health Inform 25(7):2435–2444 Wu M, Hughes M, Parbhoo S, Zazzi M, Roth V, Doshi-Velez F (2018) Beyond sparsity: tree regularization of deep models for interpretability. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol 32 Yan L, Zhang H-T, Goncalves J, Xiao Y, Wang M, Guo Y, Sun C, Tang X, Jing L, Zhang M et al (2020) An interpretable mortality prediction model for covid-19 patients. Nat Mach Intell 2(5):283–288 Ye L, Keogh E (2009) Time series shapelets: a new primitive for data mining. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp 947–956 Yeh C-K, Kim B, Arik S, Li C-L, Pfister T, Ravikumar P (2020) On completeness-aware concept-based explanations in deep neural networks. Adv Neural Inf Process Syst 33:20554–20565 Zafar MR, Khan NM (2019) Dlime: a deterministic local interpretable model-agnostic explanations approach for computer-aided diagnosis systems. arXiv preprint arXiv:1906.10263 Zeiler MD, Fergus R (2014) Visualizing and understanding convolutional networks. In: European Conference on Computer Vision, pp 818–833. Springer Zeiler MD, Taylor GW, Fergus R (2011) Adaptive deconvolutional networks for mid and high level feature learning. In: 2011 International Conference on Computer Vision, pp 2018–2025. IEEE Zeng X, Yu G, Lu Y, Tan L, Wu X, Shi S, Duan H, Shu Q, Li H (2020) Pic, a paediatric-specific intensive care database. Sci Data 7(1):1–8 Zeng X, Hu Y, Shu L, Li J, Duan H, Shu Q, Li H (2021) Explainable machine-learning predictions for complications after pediatric congenital heart surgery. Sci Rep 11(1):1–11 Zhai B, Perez-Pozuelo I, Clifton EA, Palotti J, Guan Y (2020) Making sense of sleep: multimodal sleep stage classification in a large, diverse population using movement and cardiac sensing.
Proc ACM Interact Mob Wearable Ubiquitous Technol 4(2):1–33 Zhang G-Q, Cui L, Mueller R, Tao S, Kim M, Rueschman M, Mariani S, Mobley D, Redline S (2018) The national sleep research resource: towards a sleep data commons. J Am Med Inform Assoc 25(10):1351–1358 Zhang X, Yao L, Dong M, Liu Z, Zhang Y, Li Y (2020) Adversarial representation learning for robust patient-independent epileptic seizure detection. IEEE J Biomed Health Inform 24(10):2852–2859 Zhang O, Ding C, Pereira T, Xiao R, Gadhoumi K, Meisel K, Lee RJ, Chen Y, Hu X (2021) Explainability metrics of deep convolutional networks for photoplethysmography quality assessment. IEEE Access 9:29736–29745 Zhang Y, Yang D, Liu Z, Chen C, Ge M, Li X, Luo T, Wu Z, Shi C, Wang B et al (2021) An explainable supervised machine learning predictor of acute kidney injury after adult deceased donor liver transplantation. J Transl Med 19(1):1–15 Zheng K, Cai S, Chua HR, Wang W, Ngiam KY, Ooi BC (2020) Tracer: a framework for facilitating accurate and interpretable analytics for high stakes applications. In: Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, pp 1747–1763 Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A (2016) Learning deep features for discriminative localization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 2921–2929