A Master Key backdoor for universal impersonation attack against DNN-based face verification
References
Akhtar, 2018, Threat of adversarial attacks on deep learning in computer vision: a survey, IEEE Access, 6, 14410, 10.1109/ACCESS.2018.2807385
Alberti, 2018, Are you tampering with my data?, 296
Barni, 2019, A new backdoor attack in CNNs by training set corruption without label poisoning, 101
Bhalerao, 2019, Luminance-based video backdoor attack against anti-spoofing rebroadcast detection, 1
Bromley, 1994, Signature verification using a “siamese” time delay neural network, 737
Cao, 2018, VGGFace2: a dataset for recognising faces across pose and age, 67
Chen, 2019, Detecting backdoor attacks on deep neural networks by activation clustering, 2301
X. Chen, C. Liu, B. Li, K. Lu, D. Song, Targeted backdoor attacks on deep learning systems using data poisoning, arXiv:1712.05526 (2017).
Chopra, 2005, Learning a similarity metric discriminatively, with application to face verification, 1, 539
D. Deb, J. Zhang, A.K. Jain, AdvFaces: adversarial face synthesis, arXiv:1908.05008 (2019).
Dong, 2019, Efficient decision-based black-box adversarial attacks on face recognition, 7714
T. Gu, B. Dolan-Gavitt, S. Garg, BadNets: identifying vulnerabilities in the machine learning model supply chain, arXiv:1708.06733 (2017).
Huang, 2007, Unsupervised joint alignment of complex images, 1
G.B. Huang, M. Ramesh, T. Berg, E. Learned-Miller, LFW benchmark list reorganized in pairs for performance reporting, 2007b, (http://vis-www.cs.umass.edu/lfw/pairs.txt).
Koch, 2015, Siamese neural networks for one-shot image recognition, 2
C. Liao, H. Zhong, A. Squicciarini, S. Zhu, D. Miller, Backdoor embedding in convolutional neural network models via invisible perturbation, arXiv:1808.10307 (2018).
Liu, 2017, SphereFace: deep hypersphere embedding for face recognition, 212
Liu, 2018, Trojaning attack on neural networks
A. Saha, A. Subramanya, H. Pirsiavash, Hidden trigger backdoor attacks, arXiv:1910.00033 (2019).
Shafahi, 2018, Poison frogs! targeted clean-label poisoning attacks on neural networks
Sharif, 2016, Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition, 1528
Sharif, 2019, A general framework for adversarial examples with objectives, ACM Trans. Priv. Secur., 22, 16:1, 10.1145/3317611
Szegedy, 2017, Inception-v4, Inception-ResNet and the impact of residual connections on learning, 4278
C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, R. Fergus, Intriguing properties of neural networks, arXiv:1312.6199 (2013).
Taigman, 2014, DeepFace: closing the gap to human-level performance in face verification, 1701
T. Tanay, J.T.A. Andrews, L.D. Griffin, Built-in vulnerabilities to imperceptible adversarial perturbations, arXiv:1806.07409 (2018).
A. Turner, D. Tsipras, A. Madry, Clean-label backdoor attacks, 2019.
F. Wang, Overlapping list between VGGFace2 and LFW, 2018, (https://github.com/happynear/FaceDatasets).
Wolf, 2011, Face recognition in unconstrained videos with matched background similarity, 529
L. Wolf, T. Hassner, I. Maoz, YTF benchmark list reorganized in pairs for performance reporting, 2011b, (https://www.cs.tau.ac.il/~wolf/ytfaces/).
Yao, 2019, Latent backdoor attacks on deep neural networks, 2041
Zhang, 2020, Adversarial examples for replay attacks against CNN-based face recognition with anti-spoofing capability, Comput. Vis. Image Underst., 102988, 10.1016/j.cviu.2020.102988
Zhang, 2016, Joint face detection and alignment using multitask cascaded convolutional networks, IEEE Signal Process. Lett., 23, 1499, 10.1109/LSP.2016.2603342
C. Zhu, W.R. Huang, A. Shafahi, H. Li, G. Taylor, C. Studer, T. Goldstein, Transferable clean-label poisoning attacks on deep neural nets, arXiv:1905.05897 (2019).
