Explaining anomalies detected by autoencoders using Shapley Additive Explanations
References
Adadi, 2018, Peeking inside the black-box: A survey on explainable artificial intelligence (XAI), IEEE Access, 6, 52138, 10.1109/ACCESS.2018.2870052
Aggarwal, 2015, Outlier analysis, 237
Amarasinghe, 2018, Toward explainable deep neural network based anomaly detection, 311
An, 2015, Variational autoencoder based anomaly detection using reconstruction probability, Special Lecture on IE, 2, 1
Arp, 2014, DREBIN: Effective and explainable detection of Android malware in your pocket, 23
Arrieta, 2020, Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, Information Fusion, 58, 82, 10.1016/j.inffus.2019.12.012
Ben-Gal, 2005, Outlier detection, 131
Bengio, 2007, Scaling learning algorithms towards AI, Large-Scale Kernel Machines, 34, 1
Bergman, 2020, Classification-based anomaly detection for general data
Bertsimas, 2017, Optimal classification trees, Machine Learning, 106, 1039, 10.1007/s10994-017-5633-9
Bertsimas, 2018
Breunig, M. M., Kriegel, H.-P., Ng, R. T., & Sander, J. (2000). LOF: Identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD international conference on management of data (pp. 93–104).
Carbonera, 2019, Local-set based-on instance selection approach for autonomous object modelling, International Journal of Advanced Computer Science and Applications, 10, 10.14569/IJACSA.2019.0101201
Chandola, 2009, Anomaly detection: A survey, ACM Computing Surveys, 41, 15, 10.1145/1541880.1541882
Chen, 2017, Outlier detection with autoencoder ensembles, 90
Collaris, 2018
Doshi-Velez, 2017, A roadmap for a rigorous science of interpretability, Stat, 1050, 28
Erfani, 2016, High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning, Pattern Recognition, 58, 121, 10.1016/j.patcog.2016.03.028
External Data Source, 2018
External Data Source, 2018
Friedman, 2001, Greedy function approximation: A gradient boosting machine, The Annals of Statistics, 29, 1189
Gilpin, 2018, Explaining explanations: An overview of interpretability of machine learning, 80
Golan, 2018, Deep anomaly detection using geometric transformations, 9758
Goodall, 2019, Situ: Identifying and explaining suspicious behavior in networks, IEEE Transactions on Visualization and Computer Graphics, 25, 204, 10.1109/TVCG.2018.2865029
Goodfellow, 2016
Goodman, 2017, European Union regulations on algorithmic decision-making and a “right to explanation”, AI Magazine, 38, 50, 10.1609/aimag.v38i3.2741
Guidotti, 2018, A survey of methods for explaining black box models, ACM Computing Surveys, 51, 93
Gunning, 2017
Hawkins, 2002, Outlier detection using replicator neural networks, 170
Hinton, 2006, A fast learning algorithm for deep belief nets, Neural Computation, 18, 1527, 10.1162/neco.2006.18.7.1527
Hinton, 2006, Reducing the dimensionality of data with neural networks, Science, 313, 504, 10.1126/science.1127647
Hodge, 2004, A survey of outlier detection methodologies, Artificial Intelligence Review, 22, 85, 10.1023/B:AIRE.0000045502.10941.a9
Hoffman, 2018
Jolliffe, 2011
Kauffmann, 2020, Towards explaining anomalies: A deep Taylor decomposition of one-class models, Pattern Recognition, 101, 10.1016/j.patcog.2020.107198
Kindermans, 2017, The (un)reliability of saliency methods, Stat, 1050, 2
Kopp, 2020, Anomaly explanation with random forests, Expert Systems with Applications, 149, 10.1016/j.eswa.2020.113187
Lipton, 2018, The mythos of model interpretability, Queue, 16, 31, 10.1145/3236386.3241340
Liu, 2013
Liu, 2018, Contextual outlier interpretation, 2461
Liu, 2017, Towards better analysis of machine learning models: A visual analytics perspective, Visual Informatics, 1, 48, 10.1016/j.visinf.2017.01.006
Liu, 2000, Clustering through decision tree construction, 20
Lundberg, 2018
Lundberg, 2017, A unified approach to interpreting model predictions, 4765
Maaten, 2008, Visualizing data using t-SNE, Journal of Machine Learning Research, 9, 2579
Melis, 2018, Towards robust interpretability with self-explaining neural networks, 7786
Miller, 2019, Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence, 267, 1, 10.1016/j.artint.2018.07.007
Mittelstadt, B., Russell, C., & Wachter, S. (2019). Explaining explanations in AI. In Proceedings of the conference on fairness, accountability, and transparency (pp. 279–288).
Montavon, 2017, Methods for interpreting and understanding deep neural networks, Digital Signal Processing
Nguyen, 2019, GEE: A gradient-based explainable variational autoencoder for network anomaly detection, 91
Olszewska, 2019, Designing transparent and autonomous intelligent vision systems, 850
Olvera-López, 2010, A review of instance selection methods, Artificial Intelligence Review, 34, 133, 10.1007/s10462-010-9165-y
Palczewska, 2014, Interpreting random forest classification models using a feature contribution method, 193
Pang, 2020
Paula, 2016, Deep learning anomaly detection as support fraud investigation in Brazilian exports and anti-money laundering, 954
Radev, 2004, Centroid-based summarization of multiple documents, Information Processing & Management, 40, 919, 10.1016/j.ipm.2003.10.006
Ribeiro, 2016, Why should I trust you?: Explaining the predictions of any classifier, 1135
Rumelhart, 1985
Sakurada, 2014, Anomaly detection using autoencoders with nonlinear dimensionality reduction, 4
Samek, 2017, Evaluating the visualization of what a deep neural network has learned, IEEE Transactions on Neural Networks and Learning Systems, 28, 2660, 10.1109/TNNLS.2016.2599820
Shortliffe, 1975, A model of inexact reasoning in medicine, Mathematical Biosciences, 23, 351, 10.1016/0025-5564(75)90047-4
Shrikumar, 2017, Learning important features through propagating activation differences, 3145
Singh, 2012, Outlier detection: Applications and techniques, International Journal of Computer Science Issues (IJCSI), 9, 307
Song, 2017, A hybrid semi-supervised anomaly detection model for high-dimensional data, Computational Intelligence and Neuroscience, 2017, 10.1155/2017/8501683
Štrumbelj, 2009, Explaining instance classifications with interactions of subsets of feature values, Data & Knowledge Engineering, 68, 886, 10.1016/j.datak.2009.01.004
Takeishi, 2019, Shapley values of reconstruction errors of PCA for explaining anomaly detection, 793
Takeishi, 2020
Tan, 2016
Yang, 2019
Zhou, C., & Paffenroth, R. C. (2017). Anomaly detection with robust deep autoencoders. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 665–674).