PoisonGAN: Generative Poisoning Attacks Against Federated Learning in Edge Computing Systems
Abstract
Keywords
Backdoor attack, federated learning, generative adversarial nets, label flipping, poisoning attacks
