XAI for intrusion detection system: comparing explanations based on global and local scope

Swetha Hariharan1, R.R. Rejimol Robinson2, Rendhir R. Prasad3, Ciza Thomas4, N. Balakrishnan5
1Indian Institute of Science, Bangalore, India
2SCT College of Engineering, Thiruvananthapuram, India
3Government Engineering College, Barton Hill, Thiruvananthapuram, India
4Directorate of Technical Education, Government of Kerala, Thiruvananthapuram, India
5Supercomputer Education and Research Center, Indian Institute of Science, Bangalore, India

Abstract

Keywords


References

Hu, X., Li, T., Wu, Z., Gao, X., Wang, Z.: Research and application of intelligent intrusion detection system with accuracy analysis methodology. Infrared Phys. Technol. 88, 245–253 (2018)

Holzinger, A.: From machine learning to explainable AI. In: World Symposium on Digital Intelligence for Systems and Machines (DISA), pp. 55–66 (2018)

National Academies of Sciences, Engineering, and Medicine: Implications of Artificial Intelligence for Cybersecurity: Proceedings of a Workshop. National Academies Press (2019)

Othman, S.M., Ba-Alwi, F.M., Alsohybe, N.T., Al-Hashida, A.Y.: Intrusion detection model using machine learning algorithm on big data environment. J. Big Data 5(1), 1–12 (2018)

Da Costa, K.A., Papa, J.P., Lisboa, C.O., Munoz, R., de Albuquerque, V.H.C.: Internet of things: a survey on machine learning-based intrusion detection approaches. Comput. Netw. 151, 147–157 (2019)

Hodo, E., et al.: Threat analysis of IoT networks using artificial neural network intrusion detection system. In: International Symposium on Networks, Computers and Communications (ISNCC), pp. 1–6. IEEE (2016)

Peng, K., et al.: Intrusion detection system based on decision tree over big data in fog environment. Wirel. Commun. Mob. Comput. 2018 (2018)

Zhang, Z., Shen, H.: Application of online-training SVMs for real-time intrusion detection with different considerations. Comput. Commun. 28(12), 1428–1442 (2005)

Sharma, Y., Verma, A., Rao, K., Eluri, V.: Reasonable explainability for regulating AI in health. ORF Occasional Paper No. 261 (2020)

Rudin, C., Radin, J.: Why are we using black box models in AI when we don't need to? A lesson from an explainable AI competition. Harvard Data Sci. Rev. 1(2) (2019)

Paulauskas, N., Auskalnis, J.: Analysis of data pre-processing influence on intrusion detection using NSL-KDD dataset. In: Open Conference of Electrical, Electronic and Information Sciences (eStream), pp. 1–5. IEEE (2017)

Deshmukh, D.H., Ghorpade, T., Padiya, P.: Improving classification using preprocessing and machine learning algorithms on NSL-KDD dataset. In: International Conference on Communication, Information & Computing Technology (ICCICT). IEEE (2015)

Lipton, Z.: The mythos of model interpretability. arXiv preprint arXiv:1606.03490 (2016)

Freitas, A.A.: Comprehensible classification models: a position paper. ACM SIGKDD Explor. Newsl. 15(1), 1–10 (2014)

Lundberg, S.M., Lee, S.-I.: A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems, pp. 4768–4777 (2017)

Altmann, A., Toloşi, L., Sander, O., Lengauer, T.: Permutation importance: a corrected feature importance measure. Bioinformatics 26(10), 1340–1347 (2010)

Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?" Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144 (2016)

Goode, K., Hofmann, H.: Visual diagnostics of an explainer model: tools for the assessment of LIME explanations. Stat. Anal. Data Min. ASA Data Sci. J. 14(2), 185–200 (2021)

Doshi-Velez, F., Kim, B.: Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017)

Zhao, Q., Hastie, T.: Causal interpretations of black-box models. J. Bus. Econ. Stat. 39(1), 272–281 (2021)

Goldstein, A., Kapelner, A., Bleich, J., Pitkin, E.: Peeking inside the black box: visualizing statistical learning with plots of individual conditional expectation. J. Comput. Graph. Stat. 24(1), 44–65 (2015)

Apley, D.W., Zhu, J.: Visualizing the effects of predictor variables in black box supervised learning models. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 82(4), 1059–1086 (2020)

Arya, V., et al.: One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. arXiv preprint arXiv:1909.03012 (2019)

Wang, M., Zheng, K., Yang, Y., Wang, X.: An explainable machine learning framework for intrusion detection systems. IEEE Access 8, 73127–73141 (2020)

Network intrusion detection dataset. Kaggle. https://www.kaggle.com/sampadab17/network-intrusion-detection

NSL-KDD data set for network-based intrusion detection systems. https://www.unb.ca/cic/datasets/nsl.html

Facets: visualizations for machine learning datasets. https://pair-code.github.io/facets/

Carvalho, D.V., Pereira, E.M., Cardoso, J.S.: Machine learning interpretability: a survey on methods and metrics. Electronics 8(8), 832 (2019)

Fisher, A., Rudin, C., Dominici, F.: All models are wrong, but many are useful: learning a variable’s importance by studying an entire class of prediction models simultaneously. J. Mach. Learn. Res. 20(177), 1–81 (2019)

Anjomshoae, S., Främling, K., Najjar, A.: Explanations of black-box model predictions by contextual importance and utility, pp. 95–109. Springer, New York (2019)

Främling, K.: Decision theory meets explainable AI, pp. 57–74. Springer, New York (2020)

Alvarez-Melis, D., Jaakkola, T.S.: On the robustness of interpretability methods. arXiv preprint arXiv:1806.08049 (2018)