PoisonGAN: Generative Poisoning Attacks Against Federated Learning in Edge Computing Systems

IEEE Internet of Things Journal - Volume 8, Issue 5 - Pages 3310-3322 - 2021
Jiale Zhang1,2, Bing Chen1,2, Xiang Cheng1,2, Huynh Thi Thanh Binh3, Shui Yu4
1College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
2Science and Technology on Avionics Integration Laboratory, Nanjing University of Aeronautics and Astronautics, Nanjing, China
3School of Information and Communication Technology, Hanoi University of Science and Technology, Hanoi, Vietnam
4School of Computer Science, University of Technology Sydney, Sydney, NSW, Australia

Abstract

Edge computing is a key enabling technology that meets the continuously increasing requirements of intelligent Internet-of-Things (IoT) applications. To cope with the growing privacy leakage of machine learning while still benefiting from unbalanced data distributions, federated learning has been widely adopted as a novel intelligent edge computing framework with a localized training mechanism. However, recent studies have found that the federated learning framework exhibits inherent vulnerabilities to active attacks, and the poisoning attack is one of the most powerful and stealthy among them, since the functionality of the global model can be damaged through an attacker's well-crafted local updates. In this article, we give a comprehensive exploration of poisoning attack mechanisms in the context of federated learning. We first present a poison data generation method, named Data_Gen, based on generative adversarial networks (GANs). This method mainly relies on the iteratively updated global model parameters to regenerate samples of the victim classes of interest. Second, we propose a novel generative poisoning attack model, named PoisonGAN, against the federated learning framework. This model uses the designed Data_Gen method to substantially relax the attack assumptions and make attacks feasible in practice. Finally, we evaluate our data generation and attack models by implementing two typical poisoning attack strategies, label flipping and backdoor, on a federated learning prototype. The experimental results demonstrate that these two attack models are effective against federated learning.
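To make the Data_Gen idea described above more concrete, the sketch below shows one plausible way an attacker could train a local generator against the downloaded global model, which plays the discriminator role, so that generated samples resemble a chosen victim class. This is a minimal illustration in PyTorch, not the paper's implementation; the generator architecture, the function names (Generator, data_gen), and all hyperparameters are assumptions introduced here.

```python
import torch
import torch.nn as nn

# Hypothetical generator; the architecture below is an assumption, not the paper's exact network.
class Generator(nn.Module):
    def __init__(self, latent_dim=100, img_shape=(1, 28, 28)):
        super().__init__()
        self.img_shape = img_shape
        out_dim = int(torch.prod(torch.tensor(img_shape)))
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z).view(z.size(0), *self.img_shape)

def data_gen(global_model, target_class, latent_dim=100, steps=200, batch=64, lr=1e-3):
    """Sketch of the Data_Gen idea: train a local generator so that the current
    global model (queried as a fixed classifier/discriminator) assigns the
    generated samples to `target_class`. Names and defaults are illustrative."""
    gen = Generator(latent_dim)
    opt = torch.optim.Adam(gen.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    global_model.eval()  # the attacker only needs the downloaded global model parameters
    labels = torch.full((batch,), target_class, dtype=torch.long)
    for _ in range(steps):
        z = torch.randn(batch, latent_dim)
        fake = gen(z)
        logits = global_model(fake)      # global model acts as the discriminator
        loss = loss_fn(logits, labels)   # push generated samples toward the victim class
        opt.zero_grad()
        loss.backward()
        opt.step()
    return gen

# The attacker could then relabel the generated victim-class samples (label flipping)
# or stamp them with a trigger pattern (backdoor) before local training, and upload
# the resulting poisoned update in the next federated learning round.
```

In this sketch the attacker never needs the victims' raw training data: each round's freshly downloaded global model serves as an increasingly accurate discriminator, which is what lets the generated poison samples improve as training proceeds.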

Keywords

Backdoor attack; federated learning; generative adversarial nets; label flipping; poisoning attacks
