Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
Abstract
Keywords
References
Fox, 2017, Explainable planning, Proc IJCAI Workshop, 24
Harbers, Self-explaining agents in virtual training
Langley, 2017, Explainable agency for intelligent autonomous systems, Proc AAAI, 4762, 10.1609/aaai.v31i2.19108
Garcia, 2018, Explain yourself: A natural language interface for scrutable autonomous robots, HRI Workshop on Explainable Robotic Systems
Neerincx, 2018, Using perceptual and cognitive explanations for enhanced human-agent team performance, Proc Int Conf Eng Psychol Cognit Ergonom, 204, 10.1007/978-3-319-91122-9_18
Puri, 2017, MAGIX: Model agnostic globally interpretable explanations
Varshney, 2018, Why interpretability in machine learning? An answer using distributed detection and data fusion theory
2016, European Union’s General Data Protection Regulation
Miller, 2017, Explanation in artificial intelligence: Insights from the social sciences
Danjuma, 2015, Performance evaluation of machine learning algorithms in post-operative life expectancy in the lung cancer patients
Dignum, 2017, Responsible artificial intelligence: Designing AI for human values, ITU J ICT Discoveries, 1, 1
Baum, 2017, A survey of artificial general intelligence projects for ethics, risk, and policy
Prabhakar, 2017, Powerful but limited: A DARPA perspective on AI, Proc DARPA
Igami, 2017, Artificial intelligence as structural estimation: Economic interpretations of Deep Blue, Bonanza, and AlphaGo
Weld, 2018, The challenge of crafting intelligible intelligence
Piltaver, 2014, Comprehensibility of classification trees—Survey design validation, Proc ITI, 5
Akyol, 2016, Price of transparency in strategic machine learning
Van Lent, 2004, An explainable artificial intelligence system for small-unit tactical behavior, Proc Conf Innov Appl Artif Intell, 900
Doran, 2017, What does explainable AI really mean? A new conceptualization of perspectives
Swartout, 1988, Explanation in expert systems: A survey
Bojarski, 2017, Explaining how a deep neural network trained with end-to-end learning steers a car
Koh, 2017, Understanding black-box predictions via influence functions
Lundberg, 2017, A unified approach to interpreting model predictions, Proc Adv Neural Inf Process Syst, 4768
Lipton, 2016, The mythos of model interpretability, Proc ICML Workshop Human Interpret Mach Learn, 96
Howell, 2018, A framework for addressing fairness in consequential machine learning, Proc FAT Conf Tuts, 1
Gilpin, 2018, Explaining explanations: An approach to evaluating interpretability of machine learning
Paul, 2016, Interpretable machine learning: Lessons from topic modeling, Proc CHI Workshop Hum-Centered Mach Learn, 1
Miller, 2017, Explainable AI: Beware of inmates running the asylum, Proc Workshop Explainable AI (XAI) IJCAI, 36
Poursabzi-Sangdeh, 2018, Manipulating and measuring model interpretability
Mishra, 2017, Local interpretable model-agnostic explanations for music content analysis, Proc ISMIR, 537
Mohseni, 2018, A human-grounded evaluation benchmark for local explanations of machine learning
Hall, 2018, An introduction to machine learning interpretability
Tan, 2015, Improving the interpretability of deep neural networks with stimulated learning, Proc IEEE Workshop Autom Speech Recognition Understanding (ASRU), 617
Kass, 1988, The need for user models in generating expert system explanations
2017, Unfairness by algorithm: Distilling the harms of automated decision-making
Henelius, 2017, Interpreting classifiers through attribute interactions in datasets
Knight, 2017, The U.S. military wants its autonomous machines to explain themselves
2018, Equifax launches NeuroDecision Technology
Silver, 2017, Mastering the game of Go without human knowledge, Nature, 550, 354, 10.1038/nature24270
Dhurandhar, 2017, TIP: Typifying the interpretability of procedures
Offert, 2017, ‘I know it when I see it’: Visualization and intuitive interpretability
Barocas, 2018, The FAT-ML workshop series on fairness, accountability, and transparency in machine learning
2017, Top 10 strategic technology trends for 2018
Wilson, 2016, Proceedings of the NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems
Kim, 2018, Workshop on Human Interpretability in Machine Learning (WHI)
Hohman, 2018, Visual analytics in deep learning: An interrogative survey for the next frontiers
Farina, 2017, Proc XCI Explainable Comput Intell Workshop
Tan, 2018, Detecting bias in black-box models using transparent model distillation
Aha, 2017, Proc Workshop Explainable AI (XAI) IJCAI
Zhu, 2018, Explainable AI for designers: A human-centered perspective on mixed-initiative co-creation, Proc IEEE Conf Comput Intell Games (CIG), 458
Tamagnini, 2017, Interpreting black-box classifiers using instance-level visual explanations, Proc 2nd Workshop Hum-Loop Data Anal, 10.1145/3077257.3077260
Guyon, 2017, Proc IJCNN Explainability Learn Mach
Katuwal, 2016, Machine learning model interpretability for precision medicine
Holzinger, 2017, What do we need to build explainable AI systems for the medical domain?
Lightbourne, 2017, Damned lies & criminal sentencing using evidence-based tools, 327
Che, 2017, Interpretable deep models for ICU outcome prediction, Proc AMIA Annu Symp, 371
McFarland, 2018, Uber shuts down self-driving operations in Arizona, CNN
Norvig, 2018, Google’s approach to artificial intelligence and machine learning
Haspiel, 2018, Explanations and expectations: Trust building in automated vehicles, deepblue.lib.umich.edu
Bojarski, 2016, End to end learning for self-driving cars
Tan, 2018, Transparent model distillation
Tan, 2018, Auditing black-box models using transparent model distillation with side information
Tan, 2017, Interpretable approaches to detect bias in black-box models, Proc AAAI/ACM Conf AI Ethics Soc, 1
Xu, 2018, Interpreting deep classifier by visual distillation of dark knowledge
Mikolov, 2013, Distributed representations of words and phrases and their compositionality, Proc Adv Neural Inf Process Syst (NIPS), 3111
Zhang, 2016, A sensitivity analysis of (and practitioners’ guide to) convolutional neural networks for sentence classification
Palm, 2017, Recurrent relational networks for complex relational reasoning
Ras, 2018, Explanation methods in deep learning: Users, values, concerns and challenges
Santoro, 2017, A simple neural network module for relational reasoning
Kim, 2014, The Bayesian case model: A generative approach for case-based reasoning and prototype classification, Proc Adv Neural Inf Process Syst, 1952
Louizos, 2017, Causal effect inference with deep latent-variable models, Proc Adv Neural Inf Process Syst (NIPS), 6446
Fisher, 2018, Model class reliance: Variable importance measures for any machine learning model class, from the ‘Rashomon’ perspective
Goudet, 2017, Learning functional causal models with generative neural networks
Kim, 2016, Examples are not enough, learn to criticize! Criticism for interpretability, Proc 29th Conf Neural Inf Process Syst (NIPS), 2280
Gurumoorthy, 2017, ProtoDash: Fast interpretable prototype selection
Yuan, 2017, Adversarial examples: Attacks and defenses for deep learning
Wachter, 2017, Counterfactual explanations without opening the black box: Automated decisions and the GDPR
Breiman, 2001, Statistical modeling: The two cultures, Statistical Science
Doshi-Velez, 2018, Towards a rigorous science of interpretable machine learning
Guidotti, 2018, A survey of methods for explaining black box models
Hara, 2016, Making tree ensembles interpretable
Tan, 2016, Tree space prototypes: Another look at making tree ensembles interpretable
Xu, 2015, Show, attend and tell: Neural image caption generation with visual attention, Proc Int Conf Mach Learn (ICML), 1
Wang, 2015, Falling rule lists, Proc 14th Int Conf Artif Intell Statist (AISTATS), 1013
2018, Revenues from the artificial intelligence (AI) market worldwide from 2016 to 2025
Sarkar, 2016, Accuracy and interpretability trade-offs in machine learning applied to safer gambling, Proc CEUR Workshop, 79
Su, 2015, Interpretable two-level Boolean rule learning for classification
2018, Worldwide Semiannual Cognitive/Artificial Intelligence Systems Spending Guide
Green, 2010, Modeling heterogeneous treatment effects in large-scale experiments using Bayesian additive regression trees, Proc Summer Meeting Soc Political Methodol, 1
Thiagarajan, 2016, TreeView: Peeking into deep neural networks via feature-space partitioning
Bastani, 2017, Interpretability via model extraction
Smilkov, 2017, SmoothGrad: Removing noise by adding noise
Molnar, 2018, Interpretable machine learning: A guide for making black box models explainable
Sundararajan, 2017, Axiomatic attribution for deep networks
Linsley, 2018, Global-and-local attention networks for visual recognition
Guidotti, 2018, Local rule-based explanations of black box decision systems
Welling, 2016, Forest floor visualizations of random forests
Kindermans, 2018, Learning how to explain neural networks: PatternNet and PatternAttribution, Proc Int Conf Learn Represent, 1
Shrikumar, 2016, Not just a black box: Interpretable deep learning by propagating activation differences
Dabkowski, 2017, Real time image saliency for black box classifiers, Proc Adv Neural Inf Process Syst, 6970
Chander, 2018, Proc MAKE-Explainable AI
Biundo, 2018, Proc ICAPS Workshop EXplainable AI Planning
Graaf, 2018, HRI Workshop on Explainable Robotic Systems
Komatsu, 2018, Proc ACM Intell User Interfaces (IUI) Workshop Explainable Smart Syst (EXSS)
Alonso, 2018, Proc IPMU Adv Explainable Artif Intell
Agudo, 2018, Proc ICCBR 1st Workshop Case-Based Reasoning Explanation Intell Syst (XCBR)
Gunning, 2018, Explainable artificial intelligence (XAI)
Nguyen, 2016, Synthesizing the preferred inputs for neurons in neural networks via deep generator networks, Proc Adv Neural Inf Process Syst (NIPS), 3387
Hall, 2018, Using H2O Driverless AI, H2O.ai
Valenzuela-Escárcega, 2018, Lightly-supervised representation learning with global interpretability
2018, Cognilytica’s AI Positioning Matrix (CAPM)
2018, Explainable Machine Learning Challenge
Erhan, 2010, Understanding representations learned in deep architectures
Johansson, 2004, The truth is in there—Rule extraction from opaque models using genetic programming, Proc FLAIRS Conf, 658
Casalicchio, 2018, Visualizing the feature importance for black box models
Hailesilassie, 2017, Rule extraction algorithm for deep neural networks: A review
Yang, 2018, Global model interpretation via recursive partitioning
Barakat, 2005, Eclectic rule-extraction from support vector machines, Int J Comput Intell, 2, 59
Zeiler, 2014, Visualizing and understanding convolutional networks, Proc Eur Conf Comput Vis, 818
Sadowski, 2015, Deep learning, dark knowledge, and dark matter, Proc NIPS Workshop High-Energy Phys Mach Learn (PMLR), 42, 81
Hinton, 2015, Distilling the knowledge in a neural network
Che, 2015, Distilling knowledge from deep networks with applications to healthcare domain
Ribeiro, 2018, Anchors: High-precision model-agnostic explanations, Proc AAAI Conf Artif Intell, 1
Baehrens, 2010, How to explain individual classification decisions, J Mach Learn Res, 11, 1803
Simonyan, 2013, Deep inside convolutional networks: Visualising image classification models and saliency maps