Sequential neural networks for noetic end-to-end response selection
References
Bromley, 1993, Signature verification using a Siamese time delay neural network, 737
Chen, 2018, Enhancing sentence embedding with generalized pooling, 1815
Chen, 2019, Sequential attention-based network for noetic end-to-end response selection
Chen, 2018, Neural natural language inference models enhanced with external knowledge, 2406
Chen, 2017, Enhanced LSTM for natural language inference, 1657
Chen, 2017, Recurrent neural network-based sentence encoder with gated attention for natural language inference, 36
Devlin, 2018, BERT: pre-training of deep bidirectional transformers for language understanding, CoRR
Ganhotra, 2019, Knowledge-incorporating ESIM models for response selection in retrieval-based dialog systems
Gunasekara, 2019, DSTC7 task 1: Noetic end-to-end response selection, 60
Kadlec, 2015, Improved deep learning baselines for Ubuntu corpus dialogs, CoRR
Kingma, 2014, Adam: a method for stochastic optimization, CoRR
Kummerfeld, 2018, Analyzing assumptions in conversation disentanglement research through the lens of a new dataset and model, arXiv:1810.11118
Lin, 2017, A structured self-attentive sentence embedding, CoRR
Lowe, 2015, The Ubuntu dialogue corpus: a large dataset for research in unstructured multi-turn dialogue systems, 285
Mikolov, 2018, Advances in pre-training distributed word representations
Mikolov, 2013, Distributed representations of words and phrases and their compositionality, 3111
Mou, 2016, Natural language inference by tree-based convolution and heuristic matching
Pennington, 2014, GloVe: global vectors for word representation, 1532
Serban, 2016, Building end-to-end dialogue systems using generative hierarchical neural network models, 3776
Tan, 2015, LSTM-based deep learning models for non-factoid answer selection, CoRR
Vaswani, 2017, Attention is all you need, 6000
Vig, 2019, Comparison of transfer-learning approaches for response selection in multi-turn conversations
Wan, 2016, Match-SRNN: modeling the recursive matching structure with spatial RNN, 2922
Wang, 2016, Learning natural language inference with LSTM, 1442
Wu, 2016, Google’s neural machine translation system: bridging the gap between human and machine translation, CoRR
Wu, 2017, Sequential matching network: a new architecture for multi-turn response selection in retrieval-based chatbots, 496
Yan, 2016, Learning to respond with deep neural networks for retrieval-based human-computer conversation system, 55
Yoshino, 2018, The 7th dialog system technology challenge, arXiv preprint
Zhang, 2018, Modeling multi-turn conversation with deep utterance aggregation, 3740
Zhou, 2017, Mechanism-aware neural machine for dialogue response generation, 3400
Zhou, 2016, Multi-view response selection for human-computer conversation, 372
Zhou, 2018, Multi-turn response selection for chatbots with deep attention matching network, 1, 1118
Zhu, 2015, Aligning books and movies: towards story-like visual explanations by watching movies and reading books, 19