Backdoor attacks-resilient aggregation based on Robust Filtering of Outliers in federated learning for image classification

Knowledge-Based Systems - Volume 245 - Article 108588 - 2022
Nuria Rodríguez-Barroso1, Eugenio Martínez-Cámara1, M. Victoria Luzón2, Francisco Herrera1
1Department of Computer Science and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Spain
2Department of Software Engineering, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Spain

References

Yang, 2019, Federated learning, Synth. Lect. Artif. Intell. Mach. Learn., 13, 1
Zhang, 2021, A survey on federated learning, Knowl.-Based Syst., 216, 10.1016/j.knosys.2021.106775
Pang, 2021, Collaborative city digital twin for the COVID-19 pandemic: A federated learning solution, Tsinghua Sci. Technol., 26, 759, 10.26599/TST.2021.9010026
N. Dalvi, P. Domingos, Mausam, S. Sanghai, D. Verma, Adversarial Classification, in: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2004, pp. 99–108.
E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, V. Shmatikov, How to Backdoor Federated Learning, in: Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, Vol. 108, 2020, pp. 2938–2948.
Xiong, 2021, Privacy threat and defense for federated learning with non-i.i.d. data in AIoT, IEEE Trans. Ind. Inf.
Song, 2020, FDA3: Federated defense against adversarial attacks for cloud-based IIoT applications, IEEE Trans. Ind. Inf.
Kairouz, 2021, Advances and open problems in federated learning, Found. Trends® Mach. Learn., 14, 1, 10.1561/2200000083
Wang, 2020, Attack of the tails: Yes, you really can backdoor federated learning
M. Nasr, R. Shokri, A. Houmansadr, Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning, in: 2019 IEEE Symposium on Security and Privacy, 2019, pp. 739–753.
Suya, 2020
L. Li, W. Xu, T. Chen, G.B. Giannakis, Q. Ling, RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, No. 01, 2019, pp. 1544–1551.
So, 2020, Byzantine-resilient secure federated learning, IEEE J. Sel. Areas Commun., early access
C. Fung, C.J. Yoon, I. Beschastnikh, The Limitations of Federated Learning in Sybil Settings, in: 23rd International Symposium on Research in Attacks, Intrusions and Defenses, 2020, pp. 301–316.
A.N. Bhagoji, S. Chakraborty, P. Mittal, S. Calo, Analyzing Federated Learning through an Adversarial Lens, in: Proceedings of the 36th International Conference on Machine Learning, Vol. 97, 2019, pp. 634–643.
Lyu, 2020
Sun, 2019
C. Xie, K. Huang, P.-Y. Chen, B. Li, DBA: Distributed Backdoor Attacks against Federated Learning, in: International Conference on Learning Representations, 2020.
Ilyas, 2019
B. McMahan, E. Moore, D. Ramage, S. Hampson, B.A. y Arcas, Communication-Efficient Learning of Deep Networks from Decentralized Data, in: Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Vol. 54, 2017, pp. 1273–1282.
Chen, 2017
Yin, 2018
M.S. Ozdayi, M. Kantarcioglu, Y.R. Gel, Defending against Backdoors in Federated Learning with Robust Learning Rate, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35, No. 10, 2021, pp. 9268–9276.
Laskov, 2010, Machine learning in adversarial environments, Mach. Learn., 81, 115, 10.1007/s10994-010-5207-6
Huang, 2011, Adversarial machine learning, 43
Nelson, 2009, Misleading learners: Co-opting your spam filter, 17
Croux, 2007, Algorithms for projection–pursuit robust principal component analysis, Chemometr. Intell. Lab. Syst., 87, 218, 10.1016/j.chemolab.2007.01.004
Lyu, 2020
D. Cao, S. Chang, Z. Lin, G. Liu, D. Sun, Understanding Distributed Poisoning Attack in Federated Learning, in: 2019 IEEE 25th International Conference on Parallel and Distributed Systems, ICPADS, 2019, pp. 233–239.
Dwork, 2014, The algorithmic foundations of differential privacy, Found. Trends® Theor. Comput. Sci., 9, 211
Zhou, 2021, Deep model poisoning attack on federated learning, Future Internet, 13, 10.3390/fi13030073
Sun, 2020
Lamport, 2019, The Byzantine generals problem, 203
J. Steinhardt, P.W. Koh, P. Liang, Certified Defenses for Data Poisoning Attacks, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017, pp. 3520–3532.
Shayan, 2018
S. Shen, S. Tople, P. Saxena, Auror: Defending against poisoning attacks in collaborative deep learning systems, in: Proceedings of the 32nd Annual Conference on Computer Security Applications, 2016, pp. 508–519.
X. Cao, M. Fang, J. Liu, N.Z. Gong, FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping, in: ISOC Network and Distributed System Security Symposium, 2021.
D. Yin, Y. Chen, R. Kannan, P. Bartlett, Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates, in: Proceedings of the 35th International Conference on Machine Learning, Vol. 80, 2018, pp. 5650–5659.
Blanchard, 2017, Machine learning with adversaries: Byzantine tolerant gradient descent, 119
El Mhamdi, 2018, The hidden vulnerability of distributed learning in Byzantium, Vol. 80, 3521
J. Bernstein, J. Zhao, K. Azizzadenesheli, A. Anandkumar, SignSGD with Majority Vote is Communication Efficient and Fault Tolerant, in: International Conference on Learning Representations, 2019.
Chen, 2017
Roe, 2008, Central limit theorem, 66
Rodríguez-Barroso, 2020, Federated learning and differential privacy: Software tools analysis, the Sherpa.ai FL framework and methodological guidelines for preserving data privacy, Inf. Fusion, 64, 270, 10.1016/j.inffus.2020.07.009
Caldas, 2018
Y. Ma, X. Zhu, J. Hsu, Data Poisoning against Differentially-Private Learners: Attacks and Defenses, in: International Joint Conferences on Artificial Intelligence Organization, 2019, pp. 4732–4738.