Adversarial Attacks and Defenses Against Deep Neural Networks: A Survey
Abstract
Keywords
References
C. Middlehurst. (2015) “China unveils world’s first facial recognition ATM.” http://www.telegraph.co.uk/news/worldnews/asia/china/11643314/China-unveils-worlds-first-facial-recognition-ATM.html.
A. Harvey. (2010) “CV Dazzle: Camouflage from face detection.” Master’s thesis, New York University. Available at: http://cvdazzle.com.
M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter. (2016) “Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition.” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 1528-1540.
NEC. “Face recognition.” http://www.nec.com/en/global/solutions/biometrics/technologies/facerecognition.html.
Neurotechnology. “SentiVeillance SDK.” http://www.neurotechnology.com/sentiveillance.html.
D. Amodei, R. Anubhai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, J. Chen, M. Chrzanowski, A. Coates, G. Diamos, et al. (2016) “Deep speech 2: End-to-end speech recognition in English and Mandarin.” in International Conference on Machine Learning, pp. 173-182.
iOS - Siri - Apple, https://www.apple.com/ios/siri/.
Alexa, https://developer.amazon.com/alexa.
Cortana - Your Intelligent Virtual and Personal Assistant - Microsoft, https://www.microsoft.com/en-us/windows/cortana.
A. Kurakin, I. Goodfellow, and S. Bengio. (2016) “Adversarial examples in the physical world.” arXiv preprint arXiv:1607.02533.
I. Evtimov, K. Eykholt, E. Fernandes, T. Kohno, B. Li, A. Prakash, A. Rahmati, and D. Song. (2017) “Robust physical-world attacks on deep learning models.” arXiv preprint arXiv:1707.08945, vol. 1.
C. Xie, J. Wang, Z. Zhang, Y. Zhou, L. Xie, and A. Yuille. (2017) “Adversarial examples for semantic segmentation and object detection.” in International Conference on Computer Vision. IEEE.
N. Carlini and D. Wagner. (2016) “Towards evaluating the robustness of neural networks.” arXiv preprint arXiv:1608.04644.
A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. (2017) “Towards deep learning models resistant to adversarial attacks.” arXiv preprint arXiv:1706.06083. The complete code, along with the description of the challenge, is available at https://github.com/MadryLab/mnist_challenge and https://github.com/MadryLab/cifar10_challenge.
I. J. Goodfellow, J. Shlens, and C. Szegedy. (2014) “Explaining and harnessing adversarial examples.” arXiv preprint arXiv:1412.6572.
Y. Dong, F. Liao, T. Pang, H. Su, X. Hu, J. Li, and J. Zhu. (2017) “Boosting adversarial attacks with momentum.” arXiv preprint arXiv:1710.06081.
A. Kurakin, I. J. Goodfellow, and S. Bengio. (2016) “Adversarial machine learning at scale.” arXiv preprint arXiv:1611.01236.
N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. (2016) “Distillation as a defense to adversarial perturbations against deep neural networks.” in IEEE Symposium on Security and Privacy.
N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. (2016) “The limitations of deep learning in adversarial settings.” in Proceedings of the 1st IEEE European Symposium on Security and Privacy, pp. 372-387.
Kaggle. “NIPS 2017: Adversarial learning final results.” https://www.kaggle.com/google-brain/nips17-adversarial-learning-final-results.
C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. (2016) “Rethinking the inception architecture for computer vision.” in CVPR.
C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. (2017) “Inception-v4, Inception-ResNet and the impact of residual connections on learning.” in AAAI.
F. Tramèr, A. Kurakin, N. Papernot, D. Boneh, and P. D. McDaniel. (2017) “Ensemble adversarial training: Attacks and defenses.” arXiv preprint arXiv:1705.07204.
F. Liao, M. Liang, Y. Dong, T. Pang, J. Zhu, and X. Hu. (2017) “Defense against adversarial attacks using high-level representation guided denoiser.” arXiv preprint arXiv:1712.02976.