Modelling the “transactive memory system” in multimodal multiparty interactions
Journal on Multimodal User Interfaces - 2024
Beatrice Biancardi, Maurizio Mancini, Brian Ravenet, Giovanna Varni
Abstract: Transactive memory system (TMS) is a team emergent state representing the knowledge of each member about “who knows what” in a team performing a joint task. We present a study to show how the three TMS dimensions (Credibility, Specialisation, Coordination) can be modelled as a linear combination of the nonverbal multimodal features displayed by the team perf…
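As a rough illustrative sketch of the “linear combination” idea in this abstract: a TMS dimension score could be fitted from nonverbal features with ordinary least squares. The features and scores below are made-up stand-ins, not the authors' data or feature set.

```python
import numpy as np

# Hypothetical per-team nonverbal features (columns), e.g. mutual-gaze rate,
# speech-overlap rate, gesture rate -- illustrative stand-ins only.
X = np.array([
    [0.8, 0.2, 0.5],
    [0.4, 0.7, 0.1],
    [0.6, 0.5, 0.9],
    [0.2, 0.9, 0.3],
])
# Annotated scores for one TMS dimension (say, Coordination) -- also invented.
y = np.array([0.75, 0.40, 0.85, 0.35])

# Fit the linear combination y ~ X @ w + b by ordinary least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])   # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
w, b = coef[:-1], coef[-1]
predicted = X @ w + b
```

The fitted weights `w` then indicate how strongly each nonverbal feature contributes to the dimension, which is the kind of interpretability a linear model buys.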
Exploiting on-the-fly interpretation to design technical documents in a mobile context
Journal on Multimodal User Interfaces - Vol. 4 - pp. 129-145 - 2011
Sébastien Macé, Eric Anquetil
Pen-based interaction is well adapted for writing down information in a mobile context. However, there is a lack of software taking advantage of this interaction process to design technical documents in constrained environments. This is because sketch interpretation is a complex research problem, and good performance is required to design industrial software. The first contribution of this articl…
Spatial and temporal variations of feature tracks for crowd behavior analysis
Journal on Multimodal User Interfaces - Vol. 10 - pp. 307-317 - 2015
Hajer Fradi, Jean-Luc Dugelay
The study of crowd behavior in public areas or during public events is receiving a lot of attention in the security community, to detect potential risks and to prevent overcrowding. In this paper, we propose a novel approach for change detection, event recognition and characterization in human crowds. It consists of modeling the time-varying dynamics of the crowd using local features. It also involves a f…
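A minimal sketch of the kind of statistic such a feature-track approach might compute, using synthetic random-walk tracks as a stand-in for real tracked keypoints (the actual features and measures used in the paper are not reproduced here):

```python
import numpy as np

# Synthetic local feature tracks: (n_tracks, n_frames, 2) arrays of (x, y)
# positions -- random walks standing in for tracked crowd keypoints.
rng = np.random.default_rng(0)
tracks = np.cumsum(rng.normal(0.0, 1.0, size=(50, 20, 2)), axis=1)

# Frame-to-frame displacement magnitude of every track.
disp = np.linalg.norm(np.diff(tracks, axis=1), axis=2)   # shape (50, 19)

# Temporal variation: how the crowd's mean motion changes over time.
temporal_var = disp.mean(axis=0).var()
# Spatial variation: how much motion differs across tracks within a frame.
spatial_var = disp.var(axis=0).mean()
```

Sudden jumps in such variation measures are the sort of cue a change-detection stage could threshold to flag crowd events.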
“Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design
Journal on Multimodal User Interfaces - Vol. 15, No. 2 - pp. 87-98 - 2021
Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias B. Huber, Elisabeth André
Abstract: While the research area of artificial intelligence has benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility, especially for end-users. In this paper, we explore the effects of incorporating virtual agents into explainable artificial intelligence (XAI…
Empathetic video clip experience through timely multimodal interaction
Journal on Multimodal User Interfaces - Vol. 8 - pp. 273-288 - 2014
Myunghee Lee, Gerard Jounghyun Kim
In this article, we describe a video clip playing system, named “Empatheater,” that is controlled by multimodal interaction. As the video clip is played, the user can interact and emulate predefined video “events” through guidance and multimodal natural interaction (e.g. following the main character’s motion, gestures or voice). Without the timely interaction, the video stops. The system shows gui…
Empirical investigation of the temporal relations between speech and facial expressions of emotion
Journal on Multimodal User Interfaces - Vol. 3 - pp. 263-270 - 2010
Stéphanie Buisine, Yun Wang, Ouriel Grynszpan
Behavior models implemented within Embodied Conversational Agents (ECAs) require nonverbal communication to be tightly coordinated with speech. In this paper we present an empirical study seeking to explore the influence of the temporal coordination between speech and facial expressions of emotions on the perception of these emotions by users (measuring their performance in this task, the perceive…
Automatic recognition of touch gestures in the corpus of social touch
Journal on Multimodal User Interfaces - 2016
Merel M. Jung, Mannes Poel, Ronald Poppe, Dirk K. J. Heylen
For an artifact such as a robot or a virtual agent to respond appropriately to human social touch behavior, it should be able to automatically detect and recognize touch. This paper describes the data collection of CoST: Corpus of Social Touch, a data set containing 7805 captures of 14 different social touch gestures. All touch gestures were performed in three variants: gentle, normal and rough on…
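A toy sketch of touch-gesture recognition on pressure-sensor captures. The 8x8 sensor layout, the per-variant pressure levels, and the nearest-centroid classifier are all illustrative assumptions, not the CoST setup or the paper's method:

```python
import numpy as np

# Synthetic stand-in for pressure captures: each sample is a flattened
# 8x8 pressure frame (64 values), labelled by touch variant.
rng = np.random.default_rng(1)
labels = ["gentle", "normal", "rough"]
means = [0.2, 0.5, 0.9]   # invented mean pressure per variant
X = np.vstack([rng.normal(m, 0.05, size=(30, 64)) for m in means])
y = np.repeat(np.arange(3), 30)

# Nearest-centroid classifier: one mean-pressure prototype per class.
centroids = np.vstack([X[y == c].mean(axis=0) for c in range(3)])

def classify(sample):
    """Return the label of the closest class prototype."""
    return labels[int(np.argmin(np.linalg.norm(centroids - sample, axis=1)))]
```

Real touch-gesture recognition would of course use richer spatio-temporal features and a stronger classifier; this only shows the basic capture-to-label pipeline shape.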