Towards harnessing feature embedding for robust learning with noisy labels

Chuang Zhang1, Li Shen2, Jian Yang3, Chen Gong4
1PCA Lab, The Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of Ministry of Education, School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
2JD Explore Academy, Beijing, China
3College of Computer Science, Nankai University, Tianjin, China
4Jiangsu Key Lab of Image and Video Understanding for Social Security, Nanjing, China

Abstract

Keywords


References

Arpit, D., Jastrzebski, S., Ballas, N. et al. (2017). A closer look at memorization in deep networks. In: Proceedings of the 34th international conference on machine learning, vol 70. PMLR, pp. 233–242

Bahri, D., Jiang, H., & Gupta, M. (2020). Deep k-nn for noisy labels. In: International conference on machine learning, PMLR, pp. 540–550

Bai, Y., & Liu, T. (2021). Me-momentum: Extracting hard confident examples from noisily labeled data. In: Proceedings of the IEEE/CVF international conference on computer vision, pp. 9312–9321

Bai, Y., Yang, E., Han, B. et al. (2021). Understanding and improving early stopping for learning with noisy labels. arXiv preprint arXiv:2106.15853

Berthelot, D., Carlini, N., Goodfellow, I. J. et al. (2019). Mixmatch: A holistic approach to semi-supervised learning. In: Advances in neural information processing systems, pp. 5050–5060

Chen, P., Liao, B., Chen, G. et al. (2019). Understanding and utilizing deep neural networks trained with noisy labels. In: Proceedings of the 36th International conference on machine learning, vol 97. PMLR, pp. 1062–1070

Devlin, J., Chang, M., Lee, K. et al. (2019). BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pp. 4171–4186

Ghosh, A., Kumar, H., & Sastry, P. (2017). Robust loss functions under label noise for deep neural networks. In: Proceedings of the AAAI conference on artificial intelligence

Goldberger, J., & Ben-Reuven, E. (2017). Training deep neural-networks using a noise adaptation layer. In: 5th International conference on learning representations

Gong, C., Zhang, H., Yang, J. et al. (2017). Learning with inadequate and incorrect supervision. In: 2017 IEEE International conference on data mining (ICDM), IEEE, pp. 889–894

Han, B., Yao, Q., Yu, X. et al. (2018). Co-teaching: Robust training of deep neural networks with extremely noisy labels. In: Advances in neural information processing systems, pp. 8536–8546

Han, B., Niu, G., Yu, X. et al. (2020a) Sigua: Forgetting may make learning with noisy labels more robust. In: International conference on machine learning, PMLR, pp. 4006–4016

Han, B., Yao, Q., Liu, T. et al. (2020b) A survey of label-noise representation learning: Past, present and future. arXiv preprint arXiv:2011.04406

He, K., Zhang, X., Ren, S. et al. (2016). Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778

Hinton, G., Deng, L., Yu, D., et al. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6), 82–97.

Huang, L., Zhang, C., & Zhang, H. (2020). Self-adaptive training: Beyond empirical risk minimization. Advances in Neural Information Processing Systems, 33, 19365–19376.

Iscen, A., Tolias, G., Avrithis, Y. et al. (2017). Efficient diffusion on region manifolds: Recovering small objects with compact cnn representations. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2077–2086

Iscen, A., Tolias, G., Avrithis, Y. et al. (2019). Label propagation for deep semi-supervised learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 5070–5079

Jiang, L., Zhou, Z., Leung, T. et al. (2018). Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In: International conference on machine learning, pp. 2309–2318

Jiang, L., Huang, D., Liu, M. et al. (2020). Beyond synthetic noise: Deep learning on controlled noisy labels. In: Proceedings of the 37th international conference on machine learning, vol 119. PMLR, pp. 4804–4815

Krizhevsky, A., Hinton, G. et al. (2009). Learning multiple layers of features from tiny images. Technical report, University of Toronto

Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 1097–1105.

Kuznetsova, A., Rom, H., Alldrin, N. et al. (2018). The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. arXiv preprint arXiv:1811.00982

Li, J., Socher, R., & Hoi, S. C. H. (2020). Dividemix: Learning with noisy labels as semi-supervised learning. In: 8th International conference on learning representations

Li, W., Wang, L., Li, W. et al. (2017). Webvision database: Visual learning and understanding from web data. arXiv preprint arXiv:1708.02862

Li, X., Liu, T., Han, B. et al. (2021). Provably end-to-end label-noise learning without anchor points. In: International conference on machine learning, PMLR, pp. 6403–6413

Mahajan, D., Girshick, R., Ramanathan, V. et al. (2018). Exploring the limits of weakly supervised pretraining. In: European conference on computer vision, pp. 185–201

Natarajan, N., Dhillon, I. S., Ravikumar, P. K., et al. (2013). Learning with noisy labels. Advances in Neural Information Processing Systems, 26, 1196–1204.

Nguyen, D. T., Mummadi, C. K., Ngo, T. P. N. et al. (2019). Self: Learning to filter noisy labels with self-ensembling. In: International conference on learning representations

Patrini, G., Nielsen, F., Nock, R. et al. (2016). Loss factorization, weakly supervised learning and label noise robustness. In: International conference on machine learning, pp. 708–717

Patrini, G., Rozza, A., Krishna Menon, A. et al. (2017). Making deep neural networks robust to label noise: A loss correction approach. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1944–1952

Reed, S. E., Lee, H., Anguelov, D. et al. (2015). Training deep neural networks on noisy labels with bootstrapping. In: 3rd International conference on learning representations

Sainath, T. N., Mohamed, A., Kingsbury, B. et al. (2013). Deep convolutional neural networks for LVCSR. In: 2013 IEEE international conference on acoustics, speech and signal processing, IEEE, pp. 8614–8618

Song, H., Kim, M., & Lee, J. G. (2019). Selfie: Refurbishing unclean samples for robust deep learning. In: International conference on machine learning, pp. 5907–5915

Song, H., Kim, M., Park, D., et al. (2020). Learning from noisy labels with deep neural networks: A survey. arXiv preprint arXiv:2007.08199

Tanaka, D., Ikami, D., Yamasaki, T. et al. (2018). Joint optimization framework for learning with noisy labels. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5552–5560

Van Rooyen, B., Menon, A., & Williamson, R. C. (2015). Learning with symmetric label noise: The importance of being unhinged. Advances in Neural Information Processing Systems, 28, 10–18.

Vaswani, A., Shazeer, N., Parmar, N. et al. (2017). Attention is all you need. In: Advances in neural information processing systems, pp. 5998–6008

Wang, Y., Liu, W., Ma, X., et al. (2018). Iterative learning with open-set noisy labels. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 8688–8696

Wei, H., Feng, L., Chen, X., et al. (2020). Combating noisy labels by agreement: A joint training method with co-regularization. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 13726–13735

Wu, S., Xia, X., Liu, T. et al. (2021). Class2simi: A noise reduction perspective on learning with noisy labels. In: International conference on machine learning, PMLR, pp. 11285–11295

Xia, X., Liu, T., Wang, N. et al. (2019). Are anchor points really indispensable in label-noise learning? In: Advances in neural information processing systems, pp. 6838–6849

Xiao, T., Xia, T., Yang, Y. et al. (2015). Learning from massive noisy labeled data for image classification. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2691–2699

Xu, Y., Cao, P., Kong, Y. et al. (2019). L_DMI: A novel information-theoretic loss function for training deep nets robust to label noise. Advances in Neural Information Processing Systems, 32

Yao, Q., Yang, H., Han, B. et al. (2020). Searching to exploit memorization effect in learning with noisy labels. In: International Conference on Machine Learning, PMLR, pp. 10789–10798

Yao, Y., Liu, T., Gong, M., et al. (2021). Instance-dependent label-noise learning under a structural causal model. Advances in Neural Information Processing Systems, 34, 1013–1023.

Yi, K., & Wu, J. (2019). Probabilistic end-to-end noise correction for learning with noisy labels. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 7017–7025

Yu, X., Han, B., Yao, J. et al. (2019). How does disagreement help generalization against label corruption? In: Proceedings of the 36th international conference on machine learning, vol 97. PMLR, pp. 7164–7173

Zhang, Z., & Sabuncu, M. (2018). Generalized cross entropy loss for training deep neural networks with noisy labels. In: Advances in neural information processing systems, pp. 8778–8788

Zhu, Z., Liu, T., & Liu, Y. (2021). A second-order approach to learning with instance-dependent label noise. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10113–10123