Adaptive iterative attack towards explainable adversarial robustness
References
Deng, 2018, Active multi-kernel domain adaptation for hyperspectral image classification, Pattern Recognit., 77, 306, 10.1016/j.patcog.2017.10.007
Deng, 2019, Active transfer learning network: a unified deep joint spectral-spatial feature learning model for hyperspectral image classification, IEEE Trans. Geosci. Remote Sens., 57, 1741, 10.1109/TGRS.2018.2868851
Long, 2015, Fully convolutional networks for semantic segmentation, IEEE Conference on Computer Vision and Pattern Recognition, 3431
Li, 2019, Spatio-temporal deformable 3D ConvNets with attention for action recognition, Pattern Recognit.
Ren, 2017, Faster R-CNN: towards real-time object detection with region proposal networks, IEEE Trans. Pattern Anal. Mach. Intell., 39, 1137, 10.1109/TPAMI.2016.2577031
Szegedy, 2014, Intriguing properties of neural networks, International Conference on Learning Representations
Nguyen, 2015, Deep neural networks are easily fooled: High confidence predictions for unrecognizable images, IEEE Conference on Computer Vision and Pattern Recognition, 427
Goodfellow, 2015, Explaining and harnessing adversarial examples, International Conference on Learning Representations
Tramèr, 2018, Ensemble adversarial training: Attacks and defenses, International Conference on Learning Representations
Kurakin, 2017, Adversarial examples in the physical world, ICLR Workshop
Dong, 2018, Boosting adversarial attacks with momentum, IEEE Conference on Computer Vision and Pattern Recognition
Moosavi-Dezfooli, 2016, DeepFool: a simple and accurate method to fool deep neural networks, IEEE Conference on Computer Vision and Pattern Recognition
Madry, 2018, Towards deep learning models resistant to adversarial attacks, International Conference on Learning Representations
Carlini, 2017, Towards evaluating the robustness of neural networks, IEEE Symposium on Security and Privacy, 39
Brendel, 2019, Accurate, reliable and fast robustness evaluation, Advances in Neural Information Processing Systems, 12841
Zhang, 2019, Interpreting and improving adversarial robustness with neuron sensitivity, arXiv:1909.06978
Chattopadhyay, 2019, Curse of dimensionality in adversarial examples, 1
Aghdam, 2017, Explaining adversarial examples by local properties of convolutional neural networks, 226
Kumar, 2019, Beyond explainability: Leveraging interpretability for improved adversarial learning, 16
Stutz, 2019, Disentangling adversarial robustness and generalization, IEEE Conference on Computer Vision and Pattern Recognition, 6976
Li, 2017, Robust structured nonnegative matrix factorization for image representation, IEEE Trans. Neural Netw. Learn. Syst., 29, 1947, 10.1109/TNNLS.2017.2691725
Li, 2016, Weakly supervised deep matrix factorization for social image understanding, IEEE Trans. Image Process., 26, 276, 10.1109/TIP.2016.2624140
Jiang, 2019, Unsupervised adversarial perturbation eliminating via disentangled representations, 1
Sinha, 2017, Certifying some distributional robustness with principled adversarial training
Song, 2019, Improving the generalization of adversarial training with domain adaptation, International Conference on Learning Representations
Dong, 2019, Towards interpretable deep neural networks by leveraging adversarial examples, AAAI Network Interpretability for Deep Learning Workshop
Zhang, 2019, Interpreting adversarially trained convolutional neural networks, International Conference on Machine Learning, 7502
Russakovsky, 2015, ImageNet large scale visual recognition challenge, Int. J. Comput. Vis., 115, 211, 10.1007/s11263-015-0816-y
Ma, 2019, Explaining vulnerabilities to adversarial machine learning through visual analytics, IEEE Trans. Visual. Comput. Graph., 26, 1075, 10.1109/TVCG.2019.2934631
Yu, 2019, Interpreting and evaluating neural network robustness, International Joint Conference on Artificial Intelligence
Liu, 2017, Delving into transferable adversarial examples and black-box attacks, International Conference on Learning Representations
Rozsa, 2017, Facial attributes: accuracy and adversarial robustness, Pattern Recognit. Lett.
Papernot, 2016, The limitations of deep learning in adversarial settings, IEEE European Symposium on Security and Privacy, 372
Biggio, 2018, Wild patterns: Ten years after the rise of adversarial machine learning, Pattern Recognit., 84, 317, 10.1016/j.patcog.2018.07.023
Papernot, 2017, Practical black-box attacks against machine learning, ACM Asia Conference on Computer and Communications Security
Li, 2019, Certified adversarial robustness with additive noise, Advances in Neural Information Processing Systems
Xie, 2019, Improving transferability of adversarial examples with input diversity, IEEE Conference on Computer Vision and Pattern Recognition, 2730
Sethi, 2018, Data driven exploratory attacks on black box classifiers in adversarial domains, Neurocomputing, 289, 129, 10.1016/j.neucom.2018.02.007
Li, 2018, Deep collaborative embedding for social image understanding, IEEE Trans. Pattern Anal. Mach. Intell.
Li, 2015, Weakly supervised deep metric learning for community-contributed image retrieval, IEEE Trans. Multimed., 17, 1989, 10.1109/TMM.2015.2477035
Athalye, 2018, Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples, International Conference on Machine Learning
Wang, 2017, A theoretical framework for robustness of (deep) classifiers against adversarial examples, ICLR Workshop
Du, 2017, Gradient descent can take exponential time to escape saddle points, Advances in Neural Information Processing Systems, 1067
He, 2016, Deep residual learning for image recognition, IEEE Conference on Computer Vision and Pattern Recognition, 770
Huang, 2017, Densely connected convolutional networks, IEEE Conference on Computer Vision and Pattern Recognition, 2261
Simonyan, 2015, Very deep convolutional networks for large-scale image recognition, International Conference on Learning Representations
Szegedy, 2016, Rethinking the inception architecture for computer vision, IEEE Conference on Computer Vision and Pattern Recognition, 2818
Szegedy, 2017, Inception-v4, Inception-ResNet and the impact of residual connections on learning, AAAI Conference on Artificial Intelligence, vol. 4, 12
Hu, 2018, Squeeze-and-excitation networks, IEEE Conference on Computer Vision and Pattern Recognition
Zoph, 2018, Learning transferable architectures for scalable image recognition, IEEE Conference on Computer Vision and Pattern Recognition
Brendel, 2020, 129
Kim, 2019, Bridging adversarial robustness and gradient interpretability, ICLR Workshop
Liu, 2017, Distributed adaptive binary quantization for fast nearest neighbor search, IEEE Trans. Image Process., 26, 5324, 10.1109/TIP.2017.2729896