Shapley-Lorenz eXplainable Artificial Intelligence
References
Aas, K., Jullum, M., & Løland, A. (2020). Explaining individual predictions when features are dependent: More accurate approximations to Shapley values. arXiv preprint arXiv:1903.10464.
Arras (2017). "What is relevant in a text document?": An interpretable machine learning approach. PLoS ONE, 12(1). https://doi.org/10.1371/journal.pone.0181142
Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2019). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. arXiv preprint arXiv:1910.10045.
Bracke, P., Datta, A., Jung, C., & Sen, S. (2019). Machine learning explainability in finance: An application to default risk analysis. Staff Working Paper No. 816, Bank of England.
Bussmann (2020). Explainable AI in credit risk management. Frontiers in Artificial Intelligence, 3, 1.
European Commission. (2020). On artificial intelligence – A European approach to excellence and trust. White Paper, European Commission, Brussels, 19-02-2020.
Giudici (2019). What determines bitcoin exchange prices? A network VAR approach. Finance Research Letters, 28, 309. https://doi.org/10.1016/j.frl.2018.05.013
Giudici (2020). Lorenz model selection. Journal of Classification. https://doi.org/10.1007/s00357-019-09358-w
Guégan (2018). Regulatory learning: How to supervise machine learning models? An application to credit scoring. The Journal of Finance and Data Science, 4, 157. https://doi.org/10.1016/j.jfds.2018.04.001
Guidotti (2018). A survey of methods for explaining black-box models. ACM Computing Surveys (CSUR), 51, 1. https://doi.org/10.1145/3236009
Joseph, A. (2019). Shapley regressions: A framework for statistical inference in machine learning models. Staff Working Paper No. 784, Bank of England.
Koshevoy (1996). The Lorenz Zonoid of a multivariate distribution. Journal of the American Statistical Association, 91, 873. https://doi.org/10.1080/01621459.1996.10476955
Lou (2012). Intelligible models for classification and regression, 150.
Lundberg, S. M., & Lee, S. (2017). A unified approach to interpreting model predictions. arXiv preprint arXiv:1705.07874.
Mantegna (1999).
Molnar, C. (2020). Interpretable machine learning – A guide for making black box models explainable. Available at: https://christophm.github.io/interpretable-ml-book.
Owen (2017). On Shapley value for measuring importance of dependent inputs. SIAM/ASA Journal on Uncertainty Quantification, 5, 986. https://doi.org/10.1137/16M1097717
Shapley (1953). A value for n-person games. In Contributions to the Theory of Games, 307.
Song (2016). Shapley effects for global sensitivity analysis: Theory and computation. SIAM/ASA Journal on Uncertainty Quantification, 4, 1060. https://doi.org/10.1137/15M1048070
Strumbelj (2010). An efficient explanation of individual classifications using game theory. Journal of Machine Learning Research, 11, 1.