GONE: A generic O(1) NoisE layer for protecting privacy of deep neural networks
References
Abadi, 2016, Deep learning with differential privacy, 308
Agarwal, 2021, DAMAD: Database, attack, and model agnostic adversarial perturbation detector, IEEE Trans. Neural Netw. Learn. Syst., 1
Agarwal, 2021, Image transformation-based defense against adversarial perturbation on deep learning models, IEEE Trans. Dependable Secure Comput., 18, 2106
Athalye, 2018, Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples, 274
Backes, 2016, Membership privacy in microRNA-based studies, 319
Bondielli, 2019, A survey on fake news and rumour detection techniques, Inf. Sci., 497, 38, 10.1016/j.ins.2019.05.035
Brendel, 2018, Decision-based adversarial attacks: Reliable attacks against black-box machine learning models, 1
Bulò, 2017, Randomized prediction games for adversarial machine learning, IEEE Trans. Neural Netw. Learn. Syst., 28, 2466, 10.1109/TNNLS.2016.2593488
Carlini, 2016, Towards evaluating the robustness of neural networks, 39
Che
Chen, 2020, HopSkipJumpAttack: A query-efficient decision-based attack, 1277
Chen, 2019, POBA-GA: perturbation optimized black-box adversarial attacks via genetic algorithm, Comput. Secur., 85, 89, 10.1016/j.cose.2019.04.014
Chen, 2017, ZOO: zeroth order optimization based black-box attacks to deep neural networks without training substitute models, 15
Choquette-Choo, 2021, Label-only membership inference attacks, 1964
Cohen, 2019, Certified adversarial robustness via randomized smoothing, 1310
Dong, 2018, Boosting adversarial attacks with momentum, 9185
Dwork, 2014, The algorithmic foundations of differential privacy, Found. Trends Theor. Comput. Sci., 9, 211, 10.1561/0400000042
Eom, 2020, Effective privacy preserving data publishing by vectorization, Inf. Sci., 527, 311, 10.1016/j.ins.2019.09.035
Geng, 2022, Novel target attention convolutional neural network for relation classification, Inf. Sci., 597, 24, 10.1016/j.ins.2022.03.024
Goodfellow, 2014, Explaining and harnessing adversarial examples, 1
Guo, 2018, Countering adversarial images using input transformations, 1
He, 2016, Deep residual learning for image recognition, 770
He, 2016, Identity mappings in deep residual networks, 630
Hosseini, 2019, Dropping pixels for adversarial robustness, 91
Ilyas, 2018, Black-box adversarial attacks with limited queries and information, 2142
Jeddi, 2020, Learn2Perturb: An end-to-end feature perturbation learning to improve adversarial robustness, 1238
Jia, 2019, MemGuard: Defending against black-box membership inference attacks via adversarial examples, 259
Juuti, 2019, PRADA: protecting against DNN model stealing attacks, 512
Kesarwani, 2018, Model extraction warning in MLaaS paradigm, 371
Krizhevsky, 2012, ImageNet classification with deep convolutional neural networks, 1106
Kurakin, 2017, Adversarial machine learning at scale, 1
LeCun, 2015, Deep learning, Nature, 521, 436, 10.1038/nature14539
LeCun, 1989, Backpropagation applied to handwritten zip code recognition, Neural Comput., 1, 541, 10.1162/neco.1989.1.4.541
Lécuyer, 2019, Certified robustness to adversarial examples with differential privacy, 656
Lee
Li, 2021, Deep learning for LiDAR point clouds in autonomous driving: A review, IEEE Trans. Neural Netw. Learn. Syst., 32, 3412, 10.1109/TNNLS.2020.3015992
Liu, 2018, Towards robust neural networks via random self-ensemble, 381
Liu, 2021, Speech emotion recognition based on formant characteristics feature extraction and phoneme type convergence, Inf. Sci., 563, 309, 10.1016/j.ins.2021.02.016
Lowd, 2005, Adversarial learning, 641
Van der Maaten, 2008, Visualizing data using t-SNE, J. Mach. Learn. Res., 9, 1
Nasr, 2018, Machine learning with membership privacy using adversarial regularization, 634
Orekondy, 2019, Knockoff nets: Stealing functionality of black-box models, 4954
Orekondy, 2020, Prediction poisoning: Towards defenses against DNN model stealing attacks, 1
Pajola, 2021, Fall of giants: How popular text-based MLaaS fall against a simple evasion attack, 198
Pan, 2021, PNAS: A privacy preserving framework for neural architecture search services, Inf. Sci., 573, 370, 10.1016/j.ins.2021.05.073
Papernot, 2017, Practical black-box attacks against machine learning, 506
Phan, 2017, Adaptive Laplace mechanism: Differential privacy preservation in deep learning, 385
Pyrgelis, 2018, Knock knock, who's there? Membership inference on aggregate location data, 1
Qian, 2020, Privacy-preserving based task allocation with mobile edge clouds, Inf. Sci., 507, 288, 10.1016/j.ins.2019.07.092
Sablayrolles, 2019, White-box vs black-box: Bayes optimal strategies for membership inference, 5558
Salem, 2019, ML-Leaks: Model and data independent membership inference attacks and defenses on machine learning models, 1
Salman, 2020, Denoised smoothing: A provable defense for pretrained classifiers, 1
Shi, 2019, Adaptive multi-scale deep neural networks with perceptual loss for panchromatic and multispectral images classification, Inf. Sci., 490, 1, 10.1016/j.ins.2019.03.055
Shokri, 2015, Privacy-preserving deep learning, 1310
Shokri, 2017, Membership inference attacks against machine learning models, 3
Simonyan, 2015, Very deep convolutional networks for large-scale image recognition, 1
Strauss
Su, 2019, One pixel attack for fooling deep neural networks, IEEE Trans. Evol. Comput., 23, 828, 10.1109/TEVC.2019.2890858
Szegedy, 2014, Intriguing properties of neural networks, 1
Torfi, 2022, Differentially private synthetic medical data generation using convolutional GANs, Inf. Sci., 586, 485, 10.1016/j.ins.2021.12.018
Tramèr, 2016, Stealing machine learning models via prediction APIs, 601
Tu, 2019, AutoZOOM: Autoencoder-based zeroth order optimization method for attacking black-box neural networks, 742
Xie, 2017, Adversarial examples for semantic segmentation and object detection, 1378
Xie, 2019, Feature denoising for improving adversarial robustness, 501
Yu, 2018, Convolutional networks with cross-layer neurons for image recognition, Inf. Sci., 433–434, 241, 10.1016/j.ins.2017.12.045
Yuan, 2019, Adversarial examples: Attacks and defenses for deep learning, IEEE Trans. Neural Netw. Learn. Syst., 30, 2805, 10.1109/TNNLS.2018.2886017
Zheng, 2019, BDPL: A boundary differentially private layer against machine learning model extraction attacks, 66