PTB: Robust physical backdoor attacks against deep neural networks in real world
References
Ali, 2019, Consumer-facing technology fraud: economics, attack methods and potential solutions, Future Gener. Comput. Syst., 100, 408, 10.1016/j.future.2019.03.041
Athalye, 2018, Synthesizing robust adversarial examples, 284
Bagdasaryan, 2021, Blind backdoors in deep learning models, 1505
Bagdasaryan, 2020, How to backdoor federated learning, 2938
Bojarski, 2016, End to end learning for self-driving cars, arXiv:1604.07316
Chen, 2019, Detecting backdoor attacks on deep neural networks by activation clustering, 1
Chen, 2018, ShapeShifter: Robust physical adversarial attack on faster R-CNN object detector, 52
Chen, 2017, Targeted backdoor attacks on deep learning systems using data poisoning, arXiv:1712.05526
Cheng, 2021, Deep feature space trojan attack of neural networks by controlled detoxification, 1148
Chou, 2020, SentiNet: Detecting localized universal attacks against deep learning systems, 48
Deng, 2009, ImageNet: A large-scale hierarchical image database, 248
Désidéri, 2012, Multiple-gradient descent algorithm (MGDA) for multiobjective optimization, C.R. Math., 350, 313, 10.1016/j.crma.2012.03.014
Doan, 2021, LIRA: Learnable, imperceptible and robust backdoor attacks, 11966
Dumford, 2020, Backdooring convolutional neural networks via targeted weight perturbations, 1
Eykholt, 2018, Robust physical-world attacks on deep learning visual classification, 1625
Gao, 2019, STRIP: A defence against trojan attacks on deep neural networks, 113
Garg, 2020, Can adversarial weight perturbations inject neural backdoors, 2029
Gu, 2019, BadNets: evaluating backdooring attacks on deep neural networks, IEEE Access, 7, 47230, 10.1109/ACCESS.2019.2909068
Guo, 2020, TrojanNet: embedding hidden trojan horse models in neural networks, arXiv:2002.10078
Hampel, 1974, The influence curve and its role in robust estimation, J. Am. Stat. Assoc., 69, 383, 10.1080/01621459.1974.10482962
Hu, 2015, When face recognition meets with deep learning: an evaluation of convolutional neural networks for face recognition, 142
Li, 2020, Invisible backdoor attacks on deep neural networks via steganography and regularization, IEEE Trans. Dependable Secure Comput., 18, 2088
Li, 2021, Backdoor attack in the physical world, 1
Liu, 2018, Fine-Pruning: Defending against backdooring attacks on deep neural networks, 273
Liu, 2018, Trojaning attack on neural networks, 1
Liu, 2020, Reflection backdoor: A natural backdoor attack on deep neural networks, 182
Mejia, 2020, Facial recognition in banking current applications, https://emerj.com/ai-sector-overviews/facial-recognition-in-banking-current-applications/
Nguyen, 2020, Input-aware dynamic backdoor attack, 1
Nguyen, 2021, WaNet – Imperceptible warping-based backdoor attack, 1
Parkhi, 2015, Deep face recognition, 1
Pasquini, 2020, Trembling triggers: exploring the sensitivity of backdoors in DNN-based face recognition, EURASIP J. Inf. Secur., 2020, 1
Rakin, 2020, TBT: targeted neural network attack with bit trojan, 13195
Redmon, 2016, You only look once: Unified, real-time object detection, 779
Ren, 2017, Faster R-CNN: towards real-time object detection with region proposal networks, IEEE Trans. Pattern Anal. Mach. Intell., 39, 1137, 10.1109/TPAMI.2016.2577031
Saha, 2020, Hidden trigger backdoor attacks, 11957
Selvaraju, 2017, Grad-CAM: Visual explanations from deep networks via gradient-based localization, 618
Souri, 2021, Sleeper agent: scalable hidden trigger backdoors for neural networks trained from scratch, arXiv:2106.08970
Wang, 2019, Neural Cleanse: Identifying and mitigating backdoor attacks in neural networks, 707
Wenger, 2021, Backdoor attacks against deep learning systems in the physical world, 6206
Wolf, 2011, Face recognition in unconstrained videos with matched background similarity, 529
Xue, 2021, Robust backdoor attacks against deep neural networks in real physical world, 1
Xue, 2020, One-to-N & N-to-One: two advanced backdoor attacks against deep learning models, IEEE Trans. Dependable Secure Comput., 1
Xue, 2021, Backdoors hidden in facial features: a novel invisible backdoor attack against face recognition systems, Peer-to-Peer Netw. Appl., 14, 1458, 10.1007/s12083-020-01031-z
Xue, 2020, Machine learning security: threats, countermeasures, and evaluations, IEEE Access, 8, 74720, 10.1109/ACCESS.2020.2987435
Yao, 2019, Latent backdoor attacks on deep neural networks, 2041
Zhang, 2021, Advdoor: Adversarial backdoor attack of deep learning system, 127
Zhong, 2020, Backdoor embedding in convolutional neural network models via invisible perturbation, 97
Zou, 2018, PoTrojan: powerful neural-level trojan designs in deep learning models, arXiv:1802.03043