Greybox XAI: A Neural-Symbolic learning framework to produce interpretable predictions for image classification
References
Preece, 2018
Gunning, 2017
Goodman, 2017, European Union regulations on algorithmic decision-making and a “right to explanation”, AI Mag., 38, 50
R. Caruana, Y. Lou, J. Gehrke, P. Koch, M. Sturm, N. Elhadad, Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission, in: Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’15, 2015, pp. 1721–1730.
Zhu, 2018, Explainable AI for designers: A human-centered perspective on mixed-initiative co-creation, 1
Arrieta, 2020, Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, Inf. Fusion, 58, 82, 10.1016/j.inffus.2019.12.012
Miller, 2019, Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence, 267, 1, 10.1016/j.artint.2018.07.007
Hendricks, 2018, Women also snowboard: Overcoming bias in captioning models, 793
Doran, 2017
Ribeiro, 2016
Ribeiro, 2016, Why should I trust you?: Explaining the predictions of any classifier, 1135
Lundberg, 2017, A unified approach to interpreting model predictions, 4765
Alvarez-Melis, 2018
Slack, 2019
Ras, 2021
R.R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, Grad-CAM: Visual explanations from deep networks via gradient-based localization, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 618–626.
J. Adebayo, J. Gilmer, M. Muelly, I. Goodfellow, M. Hardt, B. Kim, Sanity checks for saliency maps, in: Proceedings of the International Conference on Neural Information Processing Systems, 2018, pp. 9505–9515.
A. Bennetot, J.-L. Laurent, R. Chatila, N. Díaz-Rodríguez, Towards Explainable Neural-Symbolic Visual Reasoning, in: Proceedings of the Neural-Symbolic Learning and Reasoning Workshop, NeSy-2019 At International Joint Conference on Artificial Intelligence (IJCAI), Macau, China, 2019.
Guidotti, 2018, A survey of methods for explaining black box models, ACM Comput. Surv., 51, 93:1
F.K. Došilović, M. Brčić, N. Hlupić, Explainable artificial intelligence: A survey, in: 41st International Convention on Information and Communication Technology, Electronics and Microelectronics, MIPRO, 2018, pp. 210–215.
I. Donadello, L. Serafini, A.D. Garcez, Logic tensor networks for semantic image interpretation, in: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI, 2017, pp. 1596–1602.
Donadello, 2018
d’Avila Garcez, 2019, Neural-symbolic computing: An effective methodology for principled integration of machine learning and reasoning, J. Appl. Log. IfCoLog J. Log. Appl. (FLAP), 6, 611
I. Donadello, M. Dragoni, C. Eccher, Persuasive Explanation of Reasoning Inferences on Dietary Data, in: First Workshop on Semantic Explainability @ ISWC 2019, 2019.
Guidotti, 2018, A survey of methods for explaining black box models, ACM Comput. Surv. (CSUR), 51, 1, 10.1145/3236009
Buhrmester, 2019
Andreas, 2019
Fodor, 2002
Stone, 2017, Teaching compositionality to CNNs, 5058
Lake, 2015, Human-level concept learning through probabilistic program induction, Science, 350, 1332, 10.1126/science.aab3050
Hupkes, 2019
Mao, 2019
De Kok, 1999, Object-based classification and applications in the alpine forest environment, Int. Arch. Photogramm. Remote Sens., 32, 3
Huber, 2004, Parts-based 3D object classification, II
Bernstein, 2005, Part-based statistical models for object classification and detection, 734
Felzenszwalb, 2009, Object detection with discriminatively trained part-based models, IEEE Trans. Pattern Anal. Mach. Intell., 32, 1627, 10.1109/TPAMI.2009.167
Everingham, 2012
W. Ge, X. Lin, Y. Yu, Weakly supervised complementary parts models for fine-grained image classification from the bottom up, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 3034–3043.
Holzinger, 2019, Causability and explainability of artificial intelligence in medicine, Wiley Interdiscip. Rev. Data Min. Knowl. Discov., 9, 10.1002/widm.1312
Holzinger, 2021, Towards multi-modal causability with graph neural networks enabling information fusion for explainable AI, Inf. Fusion, 71, 28, 10.1016/j.inffus.2021.01.008
Pearl, 2009
Holzinger, 2020, Measuring the quality of explanations: The system causability scale (SCS), KI - Künstliche Intelligenz, 34, 193, 10.1007/s13218-020-00636-z
Hu, 2018, Squeeze-and-excitation networks, 7132
Steiner, 2021
Tolstikhin, 2021
J. Zhuang, B. Gong, L. Yuan, Y. Cui, H. Adam, N. Dvornek, S. Tatikonda, J. Duncan, T. Liu, Surrogate Gap Minimization Improves Sharpness-Aware Training, in: ICLR, 2022.
A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, J. Uszkoreit, N. Houlsby, An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, in: ICLR, 2021.
Chen, 2021
X. Zhai, A. Kolesnikov, N. Houlsby, L. Beyer, Scaling Vision Transformers, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, 2022, pp. 12104–12113.
A. Chavan, Z. Shen, Z. Liu, Z. Liu, K.-T. Cheng, E.P. Xing, Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, 2022, pp. 4931–4941.
C. Zhang, M. Zhang, S. Zhang, D. Jin, Q. Zhou, Z. Cai, H. Zhao, X. Liu, Z. Liu, Delving Deep Into the Generalization of Vision Transformers Under Distribution Shifts, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR, 2022, pp. 7277–7286.
Obeso, 2022, Visual vs internal attention mechanisms in deep neural networks for image classification and object detection, Pattern Recognit., 123, 10.1016/j.patcog.2021.108411
Díaz-Rodríguez, 2022, EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: The MonuMAI cultural heritage use case, Inf. Fusion, 79, 58, 10.1016/j.inffus.2021.09.022
Garnelo, 2019, Reconciling deep learning with symbolic artificial intelligence: representing objects and relations, Curr. Opin. Behav. Sci., 29, 17, 10.1016/j.cobeha.2018.12.010
Manhaeve, 2018, DeepProbLog: Neural probabilistic logic programming, 3749
Petroni, 2019
Bollacker, 2019, Extending knowledge graphs with subjective influence networks for personalized fashion, 203
Shang, 2019
Aamodt, 1994, Case-based reasoning: Foundational issues, methodological variations, and system approaches, AI Commun., 7, 39
R. Caruana, Case-Based Explanation for Artificial Neural Nets, in: Artificial Neural Networks in Medicine and Biology, Proceedings of the ANNIMAB-1 Conference, 2000, pp. 303–308.
Keane, 2019
2007
Donadello, 2016, Integration of numeric and symbolic information for semantic image interpretation, Intelligenza Artificiale, 10, 33, 10.3233/IA-160093
Lamy, 2017, Formalization of the semantics of iconic languages: An ontology-based method and four semantic-powered applications, Knowl.-Based Syst., 135, 159, 10.1016/j.knosys.2017.08.011
Marra, 2019
Marra, 2019
Lipton, 2018, The mythos of model interpretability, Queue, 16, 30:31, 10.1145/3236386.3241340
Montavon, 2018, Methods for interpreting and understanding deep neural networks, Digit. Signal Process., 73, 1, 10.1016/j.dsp.2017.10.011
Bursac, 2008, Purposeful selection of variables in logistic regression, Source Code Biol. Med., 3, 17, 10.1186/1751-0473-3-17
Rokach, 2014
Imandoust, 2013, Application of K-nearest neighbor (KNN) approach for predicting economic events: Theoretical background, Int. J. Eng. Res. Appl., 3, 605
Quinlan, 1987, Generating production rules from decision trees, 304
Berg, 2007, Bankruptcy prediction by generalized additive models, Appl. Stoch. Models Bus. Ind., 23, 129, 10.1002/asmb.658
Griffiths, 2008
Alvarez-Melis, 2018, Towards robust interpretability with self-explaining neural networks, 7786
Baum, 2004
Blundell, 2015
Kremen, 2009, Semantic annotation of objects, 223
Baader, 2003, 43
Auer, 2007, DBpedia: A nucleus for a web of open data, 722
Miller, 1990, Introduction to WordNet: An on-line lexical database, Int. J. Lexicogr., 3, 235, 10.1093/ijl/3.4.235
Kiddon, 2012, Knowledge extraction and joint inference using tractable Markov logic, 79
Balasubramanian, 2012, Rel-grams: a probabilistic model of relations in text, 101
Hitzler, 2009
Antoniou, 2004, Web ontology language: OWL, 67
Norton, 2018, Log odds and the interpretation of logit models, Health Serv. Res., 53, 859, 10.1111/1475-6773.12712
Chen, 2018
Kervadec, 2020, Bounding boxes for weakly supervised segmentation: Global constraints get close to full supervision
Lamas, 2020, MonuMAI: Dataset, deep learning pipeline and citizen science based app for monumental heritage taxonomy and classification, Neurocomputing, 420, 266, 10.1016/j.neucom.2020.09.041
Touvron, 2020
Sanfeliu, 1983, A distance measure between attributed relational graphs for pattern recognition, IEEE Trans. Syst. Man Cybern., 13, 353, 10.1109/TSMC.1983.6313167
Jiang, 2021, Optimized loss functions for object detection and application on nighttime vehicle detection, Proc. Inst. Mech. Eng. D, 236, 1568, 10.1177/09544070211036366
Qin, 2018, Weighted focal loss: An effective loss function to overcome unbalance problem of chest X-ray14, IOP Conf. Ser. Mater. Sci. Eng., 428, 10.1088/1757-899X/428/1/012022
Wachter, 2017
R.K. Mothilal, A. Sharma, C. Tan, Explaining machine learning classifiers through diverse counterfactual explanations, in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 2020, pp. 607–617.
Del Ser, 2022
Verma, 2020
Dandl, 2020, Multi-objective counterfactual explanations, 448
Van Looveren, 2019
Karimi, 2019
Laugel, 2017
Ribeiro, 2018, Anchors: High-precision model-agnostic explanations
Müller, 2021, Kandinsky patterns, Artificial Intelligence, 300, 10.1016/j.artint.2021.103546
Holzinger, 2019, KANDINSKY patterns as IQ-test for machine learning, 1
