Backdoor attacks and defenses in federated learning: Survey, challenges and future research directions
References
Abdulrahman, 2020, A survey on federated learning: The journey from centralized to distributed on-site learning and beyond, IEEE Internet Things J., 8, 5476, 10.1109/JIOT.2020.3030072
Andreina, 2021, Baffle: Backdoor detection via feedback-based federated learning, 852
Bagdasaryan, 2019, Differential privacy has disparate impact on model accuracy, Adv. Neural Inf. Process. Syst., 32
Bagdasaryan, 2020, How to backdoor federated learning, 2938
Baluja, 2017, Hiding images in plain sight: Deep steganography, Adv. Neural Inf. Process. Syst., 30
Baruch, 2019, A little is enough: Circumventing defenses for distributed learning, Adv. Neural Inf. Process. Syst., 32
Becking, D., Kirchhoffer, H., Tech, G., Haase, P., Müller, K., Schwarz, H., Samek, W., 2022. Adaptive Differential Filters for Fast and Communication-Efficient Federated Learning. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. CVPRW, pp. 3366–3375.
Bernstein, 2018
Bhagoji, 2019, Analyzing federated learning through an adversarial lens, 634
Blanchard, 2017, Machine learning with adversaries: Byzantine tolerant gradient descent, Adv. Neural Inf. Process. Syst., 30
Blasch, 2021, Machine learning/artificial intelligence for sensor data fusion–opportunities and challenges, IEEE Aerosp. Electron. Syst. Mag., 36, 80, 10.1109/MAES.2020.3049030
Bonawitz, 2019
Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., McMahan, H.B., Patel, S., Ramage, D., Segal, A., Seth, K., 2017. Practical secure aggregation for privacy-preserving machine learning. In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. pp. 1175–1191.
Bourtoule, L., Chandrasekaran, V., Choquette-Choo, C.A., Jia, H., Travers, A., Zhang, B., Lie, D., Papernot, N., 2021. Machine Unlearning. In: 2021 IEEE Symposium on Security and Privacy. SP, pp. 141–159.
Cao, 2019, Understanding distributed poisoning attack in federated learning, 233
Chen, 2021
Chen, 2020
Chen, 2017
Chen, 2020, FedHealth: A federated transfer learning framework for wearable healthcare, IEEE Intell. Syst., 35, 83, 10.1109/MIS.2020.2988604
Chen, 2020, Communication-efficient federated deep learning with layerwise asynchronous model update and temporally weighted aggregation, IEEE Trans. Neural Netw. Learn. Syst., 31, 4229, 10.1109/TNNLS.2019.2953131
Chen, 2019
Cohen, 2017, EMNIST: Extending MNIST to handwritten letters, 2921
Cui, X., Lu, S., Kingsbury, B., 2021. Federated acoustic modeling for automatic speech recognition. In: ICASSP. pp. 6748–6752.
Deng, 2012, The MNIST database of handwritten digit images for machine learning research, IEEE Signal Process. Mag., 29, 141, 10.1109/MSP.2012.2211477
Deng, 2009, ImageNet: A large-scale hierarchical image database, 248
Dimitriadis, D., Kumatani, K., Gmyr, R., et al., 2020. A federated approach in training acoustic models. In: Interspeech. pp. 981–985.
Doan, 2021, Backdoor attack with imperceptible input and latent modification, Adv. Neural Inf. Process. Syst., 34, 18944
Doan, K., Lao, Y., Zhao, W., Li, P., 2021b. LIRA: Learnable, imperceptible and robust backdoor attacks. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 11966–11976.
Doshi, K., Yılmaz, Y., 2022. Federated Learning-based Driver Activity Recognition for Edge Devices. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops. CVPRW, pp. 3337–3345.
Fang, S., Choromanska, A., 2022. Backdoor attacks on the DNN interpretation system. In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 36. pp. 561–570.
Feng, Y., Ma, B., Zhang, J., Zhao, S., Xia, Y., Tao, D., 2022. FIBA: Frequency-injection based backdoor attack in medical image analysis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 20876–20885.
Fung, 2018
Fung, C., Yoon, C.J., Beschastnikh, I., 2020. The limitations of federated learning in sybil settings. In: 23rd International Symposium on Research in Attacks, Intrusions and Defenses. RAID 2020, pp. 301–316.
Go, 2009, Twitter sentiment classification using distant supervision, CS224N project report, Stanford, 1, 2009
Goldblum, 2022, Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses, IEEE Trans. Pattern Anal. Mach. Intell., PP
Gong, 2022, Coordinated backdoor attacks against federated learning with model-dependent triggers, IEEE Netw., 36, 84, 10.1109/MNET.011.2000783
Gong, 2022, Backdoor attacks and defenses in federated learning: State-of-the-art, taxonomy, and future directions, IEEE Wirel. Commun.
Gosselin, 2022, Privacy and security in federated learning: A survey, Appl. Sci., 10.3390/app12199901
Gu, 2019, BadNets: Evaluating backdooring attacks on deep neural networks, IEEE Access, 7, 47230, 10.1109/ACCESS.2019.2909068
Guerraoui, 2018, The hidden vulnerability of distributed learning in byzantium, 3521
Guliani, D., Beaufays, F., Motta, G., 2021. Training speech recognition models with federated learning: A quality/cost framework. In: ICASSP. pp. 3080–3084.
Gupta, 2021, Adaptive machine unlearning, Adv. Neural Inf. Process. Syst., 34, 16319
Gupta, D., Kayode, O., Bhatt, S., Gupta, M., Tosun, A.S., 2021b. Hierarchical Federated Learning based Anomaly Detection using Digital Twins for Smart Healthcare. In: 2021 IEEE 7th International Conference on Collaboration and Internet Computing. CIC, pp. 16–25.
Halimi, 2022
Hayes, 2017, Generating steganographic images via adversarial training, Adv. Neural Inf. Process. Syst., 30
He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778.
Hochreiter, 1997, Long short-term memory, Neural Comput., 9, 1735, 10.1162/neco.1997.9.8.1735
Hu, B., Gao, Y., Liu, L., Ma, H., 2018. Federated Region-Learning: An Edge Computing Based Framework for Urban Environment Sensing. In: 2018 IEEE Global Communications Conference. GLOBECOM, pp. 1–7.
Jin, 2022
Jing, J., Deng, X., Xu, M., Wang, J., Guan, Z., 2021. HiNet: Deep image hiding by invertible network. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 4733–4742.
Jing, 2019
Kairouz, 2021, Advances and open problems in federated learning, Found. Trends® Mach. Learn., 14, 1, 10.1561/2200000083
Kholod, 2021, Open-source federated learning frameworks for IoT: A comparative review and analysis, Sensors, 21
Krizhevsky, 2009
LeCun, 1998, Gradient-based learning applied to document recognition, Proc. IEEE, 86, 2278, 10.1109/5.726791
Li, 2020
Li, Q., Wen, Z., He, B., 2020b. Practical federated gradient boosting decision trees. In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. pp. 4642–4649.
Li, 2022, Backdoor learning: A survey, IEEE Trans. Neural Netw. Learn. Syst., PP
Lin, 2021
Liu, 2021, Deep anomaly detection for time-series data in industrial IoT: A communication-efficient on-device federated learning approach, IEEE Internet Things J., 8, 6348, 10.1109/JIOT.2020.3011726
Liu, 2022
Liu, 2021, FedEraser: Enabling efficient client-level data removal from federated learning models, 1
Liu, Y., Yang, R., 2021. Federated Learning Application on Depression Treatment Robots (DTbot). In: 2021 IEEE 13th International Conference on Computer Research and Development. ICCRD, pp. 121–124.
Liu, 2020
Liu, 2021
Lyu, 2020
Mahalanobis, P.C., 1936. On the generalised distance in statistics. In: Proceedings of the National Institute of Science of India, Vol. 12. pp. 49–55.
McMahan, 2016
McMahan, 2017, Communication-efficient learning of deep networks from decentralized data, 1273
McMahan, 2017
Mdhaffar, 2021, Study on acoustic model personalization in a context of collaborative learning constrained by privacy preservation, 426
Molnar, 2020
Montavon, 2018, Methods for interpreting and understanding deep neural networks, Digit. Signal Process., 73, 1, 10.1016/j.dsp.2017.10.011
Mothukuri, 2021, A survey on security and privacy of federated learning, Future Gener. Comput. Syst., 115, 619, 10.1016/j.future.2020.10.007
Muñoz-González, 2019
Naseri, 2020
Neel, 2021, Descent-to-delete: Gradient-based methods for machine unlearning, 931
Nguyen, 2019, DÏoT: A federated self-learning anomaly detection system for IoT, 756
Nguyen, T.D., Rieger, P., Chen, H., Yalame, H., Möllering, H., Fereidooni, H., Marchal, S., Miettinen, M., Mirhoseini, A., Zeitouni, S., et al., 2022. FLAME: Taming Backdoors in Federated Learning. In: 31st USENIX Security Symposium. USENIX Security 22, pp. 1415–1432.
Nguyen, T.D., Rieger, P., Miettinen, M., Sadeghi, A.-R., 2020. Poisoning attacks on federated learning-based IoT intrusion detection system. In: Proc. Workshop Decentralized IoT Syst. Secur.(DISS). DISS, pp. 1–7.
Nguyen, 2021, Efficient federated learning algorithm for resource allocation in wireless IoT networks, IEEE Internet Things J., 8, 3394, 10.1109/JIOT.2020.3022534
Nguyen, 2021
Ozdayi, M.S., Kantarcioglu, M., Gel, Y.R., 2021. Defending against backdoors in federated learning with robust learning rate. In: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 35. pp. 9268–9276.
Pillutla, 2022, Robust aggregation for federated learning, IEEE Trans. Signal Process., 70, 1142, 10.1109/TSP.2022.3153135
Prayitno, 2021, A systematic review of federated learning in the healthcare area: From the perspective of data properties and applications, Appl. Sci., 10.3390/app112311191
Preuveneers, 2018, Chained anomaly detection models for federated learning: An intrusion detection case study, Appl. Sci., 8, 2663, 10.3390/app8122663
Rieger, 2022
Rodríguez-Barroso, 2023, Survey on federated learning threats: Concepts, taxonomy on attacks and defences, experimental study and challenges, Inf. Fusion, 90, 148, 10.1016/j.inffus.2022.09.011
Rudin, 2019, Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead, Nat. Mach. Intell., 1, 206, 10.1038/s42256-019-0048-x
Samek, 2019, Towards explainable artificial intelligence, 5
Sattler, 2020, On the byzantine robustness of clustered federated learning, 8861
Shafahi, 2018
Shejwalkar, 2021
Shen, S., Tople, S., Saxena, P., 2016. Auror: Defending against poisoning attacks in collaborative deep learning systems. In: Proceedings of the 32nd Annual Conference on Computer Security Applications. pp. 508–519.
Singhal, 2001, Modern information retrieval: A brief overview, IEEE Data Eng. Bull., 24, 35
Smith, 2017, Federated multi-task learning, Adv. Neural Inf. Process. Syst., 30
Sun, 2019
Sun, 2021, FL-WBC: Enhancing robustness against model poisoning attacks in federated learning from a client perspective, Adv. Neural Inf. Process. Syst., 34, 12613
Sun, 2022
Tian, 2022, A comprehensive survey on poisoning attacks and countermeasures in machine learning, ACM Comput. Surv.
Tolpegin, 2020, Data poisoning attacks against federated learning systems, 480
Wan, 2021
Wang, J., Guo, S., Xie, X., Qi, H., 2022a. Federated unlearning via class-discriminative pruning. In: Proceedings of the ACM Web Conference 2022. pp. 622–632.
Wang, 2019, In-edge AI: Intelligentizing mobile edge computing, caching and communication by federated learning, IEEE Netw., 33, 156, 10.1109/MNET.2019.1800286
Wang, 2020, Attack of the tails: Yes, you really can backdoor federated learning, Adv. Neural Inf. Process. Syst., 33, 16070
Wang, N., Xiao, Y., Chen, Y., Hu, Y., Lou, W., Hou, Y.T., 2022b. Flare: Defending federated learning against model poisoning attacks via latent space representations. In: Proceedings of the 2022 ACM on Asia Conference on Computer and Communications Security. pp. 946–958.
Wei, 2022
Wenger, E., Passananti, J., Bhagoji, A.N., Yao, Y., Zheng, H., Zhao, B.Y., 2021. Backdoor attacks against deep learning systems in the physical world. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. CVPR, pp. 6206–6215.
Wu, 2020
Wu, 2020
Wu, 2022
Xie, 2021, CRFL: Certifiably robust federated learning against backdoor attacks, 11372
Xie, C., Huang, K., Chen, P.-Y., Li, B., 2019. DBA: Distributed backdoor attacks against federated learning. In: International Conference on Learning Representations.
Xu, 2021, Federated learning for healthcare informatics, J. Healthc. Inf. Res., 5, 1, 10.1007/s41666-020-00082-4
Xu, X., Wu, J., Yang, M., Luo, T., Duan, X., Li, W., Wu, Y., Wu, B., 2020. Information Leakage by Model Weights on Federated Learning. In: Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice.
Xue, M., He, C., Sun, S., Wang, J., Liu, W., 2021. Robust Backdoor Attacks against Deep Neural Networks in Real Physical World. In: 2021 IEEE 20th International Conference on Trust, Security and Privacy in Computing and Communications. TrustCom, pp. 620–626.
Yang, 2019, Federated machine learning: Concept and applications, ACM Trans. Intell. Syst. Technol., 10, 1, 10.1145/3298981
Yang, 2019
Yin, 2018, Byzantine-robust distributed learning: Towards optimal statistical rates, 5650
Yin, 2021, A comprehensive survey of privacy-preserving federated learning, ACM Comput. Surv., 54, 1, 10.1145/3460427
Yoo, 2022
Zhang, Z., Cao, X., Jia, J., Gong, N.Z., 2022a. FLDetector: Defending federated learning against model poisoning attacks via detecting malicious clients. In: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. pp. 2545–2555.
Zhang, 2020, PoisonGAN: Generative poisoning attacks against federated learning in edge computing systems, IEEE Internet Things J., 8, 3310, 10.1109/JIOT.2020.3023126
Zhang, 2022, Neurotoxin: Durable backdoors in federated learning, 26429
Zhang, 2020, Defending poisoning attacks in federated learning via adversarial training method, 83
Zheng, 2021
Zhou, 2021, Deep model poisoning attack on federated learning, Future Internet, 13, 73, 10.3390/fi13030073
Zou, 2022, Defending batch-level label inference and replacement attacks in vertical federated learning, IEEE Trans. Big Data, 10.1109/TBDATA.2022.3192121