An adaptive robust defending algorithm against backdoor attacks in federated learning

Future Generation Computer Systems - Volume 143 - Pages 118-131 - 2023
Yongkang Wang1, Di-Hua Zhai1,2, Yongping He1, Yuanqing Xia1
1School of Automation, Beijing Institute of Technology, Beijing 100081, China
2Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing, 314001, China
