Fusing audio, visual and textual clues for sentiment analysis from multimodal content
References
L.-P. Morency, R. Mihalcea, P. Doshi, Towards multimodal sentiment analysis: harvesting opinions from the web, In: Proceedings of the 13th International Conference on Multimodal Interfaces, ACM, Alicante, Spain, 2011, pp. 169–176.
E. Cambria, N. Howard, J. Hsu, A. Hussain, Sentic blending: scalable multimodal fusion for continuous interpretation of semantics and sentics, In: IEEE SSCI, Singapore, 2013, pp. 108–117.
Huang, 2006, Extreme learning machine, Neurocomputing, 70, 489, 10.1016/j.neucom.2005.12.126
Huang, 2015, New trends of learning in computational intelligence, IEEE Comput. Intell. Mag., 10, 16, 10.1109/MCI.2015.2405277
Huang, 2015, Trends in extreme learning machines, Neural Netw., 61, 32, 10.1016/j.neunet.2014.10.001
Decherchi, 2013, Circular-ELM for the reduced-reference assessment of perceived image quality, Neurocomputing, 102, 78, 10.1016/j.neucom.2011.12.050
Cambria, 2015, An ELM-based model for affective analogical reasoning, Neurocomputing, 149, 443, 10.1016/j.neucom.2014.01.064
Principi, 2015, Acoustic template-matching for automatic emergency state detection, Neurocomputing, 149, 426, 10.1016/j.neucom.2014.01.067
Poria, 2015, Towards an intelligent framework for multimodal affective data analysis, Neural Netw., 63, 104, 10.1016/j.neunet.2014.10.005
H. Qi, X. Wang, S.S. Iyengar, K. Chakrabarty, Multisensor data fusion in distributed sensor networks using mobile agents, In: Proceedings of 5th International Conference on Information Fusion, 2001, pp. 11–16.
P. Ekman, W.V. Friesen, M. O'Sullivan, A. Chan, I. Diacoyanni-Tarlatzis, K. Heider, R. Krause, W.A. LeCompte, T. Pitcairn, P.E. Ricci-Bitti, et al., Universals and cultural differences in the judgments of facial expressions of emotion, J. Personal. Soc. Psychol. 53 (4) (1987) 712–717.
Matsumoto, 1992, More evidence for the universality of a contempt expression, Motiv. Emot., 16, 363, 10.1007/BF00992972
Ekman, 1978, Facial Action Coding System: A Technique for the Measurement of Facial Movement, Consulting Psychologists Press, Palo Alto, CA
W.V. Friesen, P. Ekman, EMFACS-7: Emotional Facial Action Coding System, Unpublished manuscript, University of California at San Francisco.
A. Lanitis, C.J. Taylor, T.F. Cootes, A unified approach to coding and interpreting face images, In: Fifth International Conference on Computer Vision, 1995. Proceedings, IEEE, Cambridge, Massachusetts, USA, 1995, pp. 368–373.
D. Datcu, L. Rothkrantz, Semantic audio–visual data fusion for automatic emotion recognition, In: Euromedia, Citeseer, 2008.
Kenji, 1991, Recognition of facial expression from optical flow, IEICE Trans. Inf. Syst., 74, 3474
Ueki, 1994, Expression analysis/synthesis system based on emotion space constructed by multilayered neural network, Syst. Comput. Jpn., 25, 95
L.S.-H. Chen, Joint processing of audio–visual information for the recognition of emotional expressions in human–computer interaction (Ph.D. thesis), Citeseer, 2000.
Murray, 1993, Toward the simulation of emotion in synthetic speech, J. Acoust. Soc. Am., 93, 1097, 10.1121/1.405558
R. Cowie, E. Douglas-Cowie, Automatic statistical analysis of the signal and prosodic signs of emotion in speech, In: Fourth International Conference on Spoken Language, 1996. ICSLP 96, Proceedings, vol. 3, IEEE, Philadelphia, PA, USA, 1996, pp. 1989–1992.
F. Dellaert, T. Polzin, A. Waibel, Recognizing emotion in speech, In: Fourth International Conference on Spoken Language, 1996, ICSLP 96, Proceedings, vol. 3, IEEE, Philadelphia, PA, USA, 1996, pp. 1970–1973.
T. Johnstone, Emotional speech elicited using computer games, In: Fourth International Conference on Spoken Language, 1996, ICSLP 96, Proceedings, vol. 3, IEEE, Philadelphia, PA, USA, 1996, pp. 1985–1988.
Navas, 2006, An objective and subjective study of the role of semantics and prosodic features in building corpora for emotional TTS, IEEE Trans. Audio Speech Lang. Process., 14, 1117, 10.1109/TASL.2006.876121
H. Atassi, A. Esposito, A speaker independent approach to the classification of emotional vocal expressions, In: ICTAI, 2008, pp. 147–150.
F. Burkhardt, A. Paeschke, M. Rolfes, W. Sendlmeier, B. Weiss, A database of German emotional speech, In: Interspeech, 2005, pp. 1517–1520.
P. Pudil, F. Ferri, J. Novovicova, J. Kittler, Floating search methods for feature selection with nonmonotonic criterion functions, In: IAPR, 1994, pp. 279–283.
K.R. Scherer, Adding the affective dimension: a new look in speech analysis and synthesis, In: ICSLP, 1996, pp. 1808–1811.
G. Caridakis, G. Castellano, L. Kessous, A. Raouzaiou, L. Malatesta, S. Asteriadis, K. Karpouzis, Multimodal emotion recognition from expressive faces, body gestures and speech, In: Artificial Intelligence and Innovations 2007: From Theory to Applications, 2007, pp. 375–388.
J. Wiebe, Learning subjective adjectives from corpora, In: AAAI/IAAI, 2000, pp. 735–740.
P.D. Turney, Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews, In: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, Association for Computational Linguistics, 2002, pp. 417–424.
E. Cambria, D. Olsher, D. Rajagopal, SenticNet 3: a common and common-sense knowledge base for cognition-driven sentiment analysis, In: AAAI, Quebec City, 2014, pp. 1515–1521.
T. Wilson, J. Wiebe, P. Hoffmann, Recognizing contextual polarity in phrase-level sentiment analysis, In: HLT/EMNLP, Vancouver, BC, Canada, 2005, pp. 347–354.
E. Riloff, J. Wiebe, Learning extraction patterns for subjective expressions, In: Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2003, pp. 105–112.
B. Pang, L. Lee, A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts, In: ACL, Barcelona, 2004, pp. 271–278.
C. Strapparava, A. Valitutti, WordNet-Affect: an affective extension of WordNet, In: LREC, vol. 4, 2004, pp. 1083–1086.
C.O. Alm, D. Roth, R. Sproat, Emotions from text: machine learning for text-based emotion prediction, In: Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, Association for Computational Linguistics, 2005, pp. 579–586.
G. Mishne, Experiments with mood classification in blog posts, In: Proceedings of ACM SIGIR 2005 Workshop on Stylistic Analysis of Text for Information Access, vol. 19, 2005.
R. Xia, C. Zong, X. Hu, E. Cambria, Feature ensemble plus sample selection: domain adaptation for sentiment classification (extended abstract), In: IJCAI, Buenos Aires, 2015, pp. 4229–4233.
C. Yang, K.H.-Y. Lin, H.-H. Chen, Building emotion lexicon from weblog corpora, In: Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, Association for Computational Linguistics, 2007, pp. 133–136.
F.-R. Chaumartin, Upar7: a knowledge-based system for headline sentiment tagging, In: Proceedings of the 4th International Workshop on Semantic Evaluations, Association for Computational Linguistics, 2007, pp. 422–425.
A. Esuli, F. Sebastiani, SentiWordNet: a publicly available lexical resource for opinion mining, In: Proceedings of LREC, vol. 6, 2006, pp. 417–422.
K.H.-Y. Lin, C. Yang, H.-H. Chen, What emotions do news articles trigger in their readers?, In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, Glasgow, UK, 2007, pp. 733–734.
A. Pak, P. Paroubek, Twitter as a corpus for sentiment analysis and opinion mining, In: LREC, 2010, pp. 1320–1326.
M. Hu, B. Liu, Mining and summarizing customer reviews, In: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, Seattle, Washington, USA, 2004, pp. 168–177.
Balahur, 2012, Building and exploiting emotinet, a knowledge base for emotion detection based on the appraisal theory model, IEEE Trans. Affect. Comput., 3, 88, 10.1109/T-AFFC.2011.33
C. Shan, S. Gong, P.W. McOwan, Beyond facial expressions: learning human emotion from body gestures, In: BMVC, 2007, pp. 1–10.
Mansoorizadeh, 2010, Multimodal information fusion application to human emotion recognition from face and speech, Multimed. Tools Appl., 49, 277, 10.1007/s11042-009-0344-2
Zeng, 2007, Audio–visual affect recognition, IEEE Trans. Multimed., 9, 424, 10.1109/TMM.2006.886310
Gunes, 2007, Bi-modal emotion recognition from expressive face and body gestures, J. Netw. Comput. Appl., 30, 1334, 10.1016/j.jnca.2006.09.007
Pun, 2006, Brain–computer interaction research at the computer vision and multimedia laboratory, University of Geneva, IEEE Trans. Neural Syst. Rehabil. Eng., 14, 210, 10.1109/TNSRE.2006.875544
F. Wallhoff, Facial Expressions and Emotion Database, Technische Universität München.
M. Pantic, M. Valstar, R. Rademaker, L. Maat, Web-based database for facial expression analysis, In: IEEE International Conference on Multimedia and Expo, 2005. ICME 2005, IEEE, Amsterdam, The Netherlands, 2005, pp. 5–9.
M. Paleari, B. Huet, Toward emotion indexing of multimedia excerpts, In: International Workshop on Content-Based Multimedia Indexing, 2008, CBMI 2008, IEEE, London, UK, 2008, pp. 425–432.
Poria, 2014, EmoSenticSpace, Knowl. Based Syst., 69, 108, 10.1016/j.knosys.2014.06.011
J.M. Saragih, S. Lucey, J.F. Cohn, Face alignment through subspace constrained mean-shifts, In: 2009 IEEE 12th International Conference on Computer Vision, IEEE, Kyoto, Japan, 2009, pp. 1034–1041.
F. Eyben, M. Wöllmer, B. Schuller, openEAR—introducing the Munich open-source emotion and affect recognition toolkit, In: 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, 2009. ACII 2009, IEEE, Amsterdam, The Netherlands, 2009, pp. 1–6.
E. Cambria, A. Hussain, C. Havasi, C. Eckl, Sentic computing: exploitation of common sense for the development of emotion-sensitive systems, In: LNCS, vol. 5967, Springer, 2010, pp. 148–156.
Cambria, 2015