Semantic guide for semi-supervised few-shot multi-label node classification

Information Sciences - Volume 591 - Pages 235-250 - 2022
Lin Xiao1, Pengyu Xu1, Liping Jing1, Uchenna Akujuobi2, Xiangliang Zhang2
1Beijing Jiaotong University, No. 3 Shangyuancun, Haidian District, Beijing, China
2King Abdullah University of Science and Technology, Thuwal, Saudi Arabia

References

Gupta, 2021, Instacovnet-19: A deep learning classification model for the detection of COVID-19 patients using chest X-ray, Applied Soft Computing, 99, 10.1016/j.asoc.2020.106859
Katarya, 2020, Recognizing fake news in social media with deep learning: A systematic review, 1
C. Finn, P. Abbeel, S. Levine, Model-agnostic meta-learning for fast adaptation of deep networks, in: ICML, 2017.
J. Snell, K. Swersky, R. Zemel, Prototypical networks for few-shot learning, in: Advances in Neural Information Processing Systems, 2017, pp. 4077–4087.
Hou, 2019, Cross attention network for few-shot classification, Advances in Neural Information Processing Systems, 4003
Ren, 2019, Incremental few-shot learning with attention attractor networks, Advances in Neural Information Processing Systems, 5275
N. Mishra, M. Rohaninejad, X. Chen, P. Abbeel, A simple neural attentive meta-learner, arXiv preprint arXiv:1707.03141.
A. Rios, R. Kavuluru, Few-shot and zero-shot multi-label learning for structured label spaces, in: Proceedings of the Conference on Empirical Methods in Natural Language Processing, Vol. 2018, NIH Public Access, 2018, p. 3132.
Xian, 2017, Zero-shot learning - the good, the bad and the ugly, 4582
Patwary, 2019, Sensitivity analysis on initial classifier accuracy in fuzziness based semi-supervised learning, Information Sciences, 490, 93, 10.1016/j.ins.2019.03.036
Perozzi, 2014, Online learning of social representations, 701
Grover, 2016, node2vec: Scalable feature learning for networks, 855
C. Yang, Z. Liu, D. Zhao, M. Sun, E. Chang, Network representation learning with rich text information, in: Twenty-Fourth International Joint Conference on Artificial Intelligence, 2015.
Nandanwar, 2016, Structural neighborhood based classification of nodes in a network, 1085
H. Gao, H. Huang, Deep attributed network embedding, in: IJCAI, Vol. 18, 2018, pp. 3364–3370.
Chen, 2017, Hierarchical mixed neural network for joint representation learning of social-attribute network, 238
P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Lio, Y. Bengio, Graph attention networks, arXiv preprint arXiv:1710.10903.
T.N. Kipf, M. Welling, Semi-supervised classification with graph convolutional networks, arXiv preprint arXiv:1609.02907.
Hamilton, 2017, Inductive representation learning on large graphs, Advances in Neural Information Processing Systems, 1024
K.K. Thekumparampil, C. Wang, S. Oh, L.-J. Li, Attention-based graph neural network for semi-supervised learning, arXiv preprint arXiv:1803.03735.
Zhang, 2013, A review on multi-label learning algorithms, IEEE Transactions on Knowledge and Data Engineering, 26, 1819, 10.1109/TKDE.2013.39
Zhang, 2010, Multi-label learning by exploiting label dependency, 999
Yu, 2014, Multi-label classification by exploiting label correlations, Expert Systems with Applications, 41, 2989, 10.1016/j.eswa.2013.10.030
U. Akujuobi, H. Yufei, Q. Zhang, X. Zhang, Collaborative graph walk for semi-supervised multi-label node classification, arXiv preprint arXiv:1910.09706.
Gao, 2019, Semi-supervised graph embedding for multi-label graph node classification, 555
Tang, 2015, LINE: Large-scale information network embedding, 1067
D. Bahdanau, K. Cho, Y. Bengio, Neural machine translation by jointly learning to align and translate, arXiv preprint arXiv:1409.0473.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is all you need, in: Advances in Neural Information Processing Systems, 2017, pp. 5998–6008.
Fei-Fei, 2006, One-shot learning of object categories, IEEE Transactions on Pattern Analysis and Machine Intelligence, 28, 594, 10.1109/TPAMI.2006.79
S. Ravi, H. Larochelle, Optimization as a model for few-shot learning.
Santoro, 2016, Meta-learning with memory-augmented neural networks, 1842
O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al., Matching networks for one shot learning, in: Advances in Neural Information Processing Systems, 2016, pp. 3630–3638.
Sung, 2018, Learning to compare: Relation network for few-shot learning, 1199
Gidaris, 2018, Dynamic few-shot visual learning without forgetting, 4367
Hariharan, 2017, Low-shot visual recognition by shrinking and hallucinating features, 3018
Qi, 2018, Low-shot learning with imprinted weights, 5822
Perez-Rua, 2020, Incremental few-shot object detection, 13846
Alfassy, 2019, Label-set operations networks for multi-label few-shot learning, 6548
Pennington, 2014, Global vectors for word representation, 1532
Dumais, 2004, Latent semantic analysis, Annual Review of Information Science and Technology, 38, 188, 10.1002/aris.1440380105
Hochreiter, 1997, Long short-term memory, Neural Computation, 9, 1735, 10.1162/neco.1997.9.8.1735
Nam, 2014, Large-scale multi-label text classification - revisiting neural networks, 437
Ji, 2010, Graph regularized transductive classification on heterogeneous information networks, 570
Yang, 1999, An evaluation of statistical approaches to text categorization, Information Retrieval, 1, 69, 10.1023/A:1009982220290
Tsoumakas, 2007, Random k-labelsets: An ensemble method for multilabel classification, 406
Boutell, 2004, Learning multi-label scene classification, Pattern Recognition, 37, 1757, 10.1016/j.patcog.2004.03.009
Read, 2011, Classifier chains for multi-label classification, Machine Learning, 85, 333, 10.1007/s10994-011-5256-5
D.P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint arXiv:1412.6980.
Z. Lin, M. Feng, C.N. d. Santos, M. Yu, B. Xiang, B. Zhou, Y. Bengio, A structured self-attentive sentence embedding, arXiv preprint arXiv:1703.03130.