Semantic vector learning for natural language understanding
References
Bordes, 2013, Translating embeddings for modeling multi-relational data, 2787
Chen, 2009, Similarity-based classification: Concepts and algorithms, J. Mach. Learn. Res., 10, 747
Chen, 2016, Syntax or semantics? Knowledge-guided joint semantic frame parsing, 348
Chopra, 2005, Learning a similarity metric discriminatively, with application to face verification, 1, 539
Collobert, 2011, Natural language processing (almost) from scratch, J. Mach. Learn. Res., 12, 2493
Dietterich, 2002, Machine learning for sequential data: a review, 227
Do, 2018, Knowledge graph embedding with multiple relation projection, 332
Hakkani-Tür, 2016, Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM, 715
He, 2005, Semantic processing using the hidden vector state model, Comput. Speech Lang., 19, 85, 10.1016/j.csl.2004.03.001
Hinton, 2006, Reducing the dimensionality of data with neural networks, Science, 313, 504, 10.1126/science.1127647
Hochreiter, 1997, Long short-term memory, Neural Comput., 9, 1735, 10.1162/neco.1997.9.8.1735
Jeong, 2006, Jointly predicting dialog act and named entity for spoken language understanding, 66
Kim, 2016, Intent detection using semantically enriched word embeddings, 414
Kim, 2014, Convolutional neural networks for sentence classification, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1746, 10.3115/v1/D14-1181
Le, 2014, Distributed representations of sentences and documents, 1188
Lee, 2017, LSTM-CRF models for named entity recognition, IEICE Trans. Inf. Syst., 100, 882, 10.1587/transinf.2016EDP7179
Liu, 2016, Attention-based recurrent neural network models for joint intent detection and slot filling, Interspeech, 685, 10.21437/Interspeech.2016-1352
Mesnil, 2015, Using recurrent neural networks for slot filling in spoken language understanding, IEEE/ACM Trans. Audio Speech Lang. Process. (TASLP), 23, 530, 10.1109/TASLP.2014.2383614
Mesnil, 2013, Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding, 3771
Mikolov, 2013, Efficient estimation of word representations in vector space, arXiv preprint arXiv:1301.3781
Mueller, 2016, Siamese recurrent architectures for learning sentence similarity, 2786
Pennington, 2014, GloVe: Global vectors for word representation, 1532
Preller, 2014, From logical to distributional models, Electronic Proceedings in Theoretical Computer Science, 113, 10.4204/EPTCS.171.11
Price, 1990, Evaluation of spoken language systems: the ATIS domain
Ravuri, 2015, Recurrent neural network and LSTM models for lexical utterance classification, 135
Schwartz, 1997, Hidden understanding models for statistical sentence understanding, 2, 1479
Xu, 2013, Convolutional neural network based triangular CRF for joint intent detection and slot filling, 78