Visual explanation of black-box model: Similarity Difference and Uniqueness (SIDU) method
References
Xu, 2021, Deep regionlets: blended representation and deep learning for generic object detection, IEEE Trans. Pattern Anal. Mach. Intell., 43, 1914, 10.1109/TPAMI.2019.2957780
Pei, 2021, Effects of image degradation and degradation removal to CNN-based image classification, IEEE Trans. Pattern Anal. Mach. Intell., 43, 1239, 10.1109/TPAMI.2019.2950923
González-Gonzalo, 2020, Iterative augmentation of visual evidence for weakly-supervised lesion localization in deep interpretability frameworks: application to color fundus images, IEEE Trans. Med. Imaging, 39, 3499, 10.1109/TMI.2020.2994463
Li, 2022, A survey of data-driven and knowledge-aware explainable AI, IEEE Trans. Knowl. Data Eng., 34, 29
Bai, 2021, Explainable deep learning for efficient and robust pattern recognition: a survey of recent developments, Pattern Recognit., 120, 108102, 10.1016/j.patcog.2021.108102
Shin, 2021, Embodying algorithms, enactive artificial intelligence and the extended cognition: you can see as much as you know about algorithm, J. Inf. Sci., 10.1177/0165551520985495
Shin, 2021
Shin, 2021, The effects of explainability and causability on perception, trust, and acceptance: implications for explainable AI, Int. J. Hum. Comput. Stud., 10.1016/j.ijhcs.2020.102551
Weitz, 2021, “Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design, J. Multimodal User Interfaces, 15, 87, 10.1007/s12193-020-00332-0
Ribeiro, 2016, “Why should I trust you?”: explaining the predictions of any classifier, 1135
Selvaraju, 2020, Grad-CAM: visual explanations from deep networks via gradient-based localization, Int. J. Comput. Vis., 128, 336, 10.1007/s11263-019-01228-7
Petsiuk, 2018, RISE: randomized input sampling for explanation of black-box models
Muddamsetty, 2020, SIDU: similarity difference and uniqueness method for explainable AI, 3269
Doshi-Velez, 2018, Considerations for evaluation and generalization in interpretable machine learning, 3
Dombrowski, 2022, Towards robust explanations for deep neural networks, Pattern Recognit., 121, 108194, 10.1016/j.patcog.2021.108194
Li, 2020, Enhanced transport distance for unsupervised domain adaptation, 13936
Ren, 2014, Band-reweighed Gabor kernel embedding for face image representation and recognition, IEEE Trans. Image Process., 23, 725, 10.1109/TIP.2013.2292560
Montavon, 2019, Layer-wise relevance propagation: an overview, 11700
Bach, 2015, On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation, PLoS ONE, 10, e0130140, 10.1371/journal.pone.0130140
Zhou, 2016, Learning deep features for discriminative localization, 2921
Simonyan, 2014, Deep inside convolutional networks: visualising image classification models and saliency maps
Fong, 2017, Interpretable explanations of black boxes by meaningful perturbation, 3429
Shrikumar, 2017, Learning important features through propagating activation differences, 3145
Goodfellow, 2015, Explaining and harnessing adversarial examples
Madry, 2018, Towards deep learning models resistant to adversarial attacks
Moosavi-Dezfooli, 2016, DeepFool: a simple and accurate method to fool deep neural networks, 2574
Russakovsky, 2015, ImageNet large scale visual recognition challenge, Int. J. Comput. Vis., 115, 10.1007/s11263-015-0816-y
Muddamsetty, 2021, Multi-level quality assessment of retinal fundus images using deep convolutional neural networks
He, 2016, Deep residual learning for image recognition, 770
Simonyan, 2015, Very deep convolutional networks for large-scale image recognition
Jiang, 2014, Saliency in crowd, 17
Bergstrom, 2014
Das, 2017, Human attention in visual question answering: do humans and deep networks look at the same regions?, Comput. Vis. Image Underst., 163, 90, 10.1016/j.cviu.2017.10.001
Jiang, 2015, SALICON: saliency in context, 1072
McDonnell, 2009, Eye-catching crowds: saliency based selective variation, ACM Trans. Graph. (TOG), 28, 10.1145/1531326.1531361
Riche, 2013, Saliency and human fixations: state-of-the-art and study of comparison metrics, 1153
Bylinskii, 2018, What do different evaluation metrics tell us about saliency models?, IEEE Trans. Pattern Anal. Mach. Intell., 41, 10.1109/TPAMI.2018.2815601
Daniel, 1990, Applied nonparametric statistics