Journal on Multimodal User Interfaces

Notable scientific publications

* Data provided for reference only

Analysis of significant dialog events in realistic human–computer interaction
Journal on Multimodal User Interfaces - Vol. 8, No. 1 - pp. 75-86 - 2014
Dmytro Prylipko, Dietmar Rösner, Ingo Siegert, Stephan Günther, Rafael Friesen, Matthias Haase, Bogdan Vlasenko, Andreas Wendemuth
Modelling the “transactive memory system” in multimodal multiparty interactions
Journal on Multimodal User Interfaces - 2024
Beatrice Biancardi, Maurizio Mancini, Brian Ravenet, Giovanna Varni
Abstract: Transactive memory system (TMS) is a team emergent state representing the knowledge of each member about “who knows what” in a team performing a joint task. We present a study to show how the three TMS dimensions (Credibility, Specialisation, and Coordination) can be modelled as a linear combination of the nonverbal multimodal features displayed by the team perf…
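The abstract frames each TMS dimension as a linear combination of nonverbal features. As a minimal illustrative sketch only (not the authors' implementation; the feature names and synthetic data below are hypothetical assumptions):

```python
# Illustrative sketch: fitting one TMS dimension (e.g. Coordination) as a
# linear combination of nonverbal features, as the abstract describes.
# Feature names and data are hypothetical stand-ins, not from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical per-team nonverbal features: gaze exchanges, gesture rate,
# and amount of overlapping speech, for 40 synthetic teams.
X = rng.random((40, 3))
# Hypothetical annotated Coordination scores with some noise.
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2] + rng.normal(0, 0.05, 40)

model = LinearRegression().fit(X, y)
print("weights:", model.coef_, "intercept:", model.intercept_)
```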
Exploiting on-the-fly interpretation to design technical documents in a mobile context
Journal on Multimodal User Interfaces - Vol. 4 - pp. 129-145 - 2011
Sébastien Macé, Eric Anquetil
Pen-based interaction is well adapted for writing down information in a mobile context. However, there is a lack of software taking advantage of this interaction process to design technical documents in constrained environments. This is because sketch interpretation is a complex research problem, and good performance is required to build industrial software. The first contribution of this articl…
Spatial and temporal variations of feature tracks for crowd behavior analysis
Journal on Multimodal User Interfaces - Vol. 10 - pp. 307-317 - 2015
Hajer Fradi, Jean-Luc Dugelay
The study of crowd behavior in public areas or during public events is receiving a lot of attention in the security community, with the aim of detecting potential risks and preventing overcrowding. In this paper, we propose a novel approach for change detection, event recognition and characterization in human crowds. It consists of modeling the time-varying dynamics of the crowd using local features. It also involves a f…
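As a hedged sketch of the generic local-feature-track extraction such an approach builds on (not the authors' actual pipeline; the video path and parameters are assumptions):

```python
# Sketch: extracting feature tracks from video using OpenCV's Shi-Tomasi
# corner detector plus pyramidal Lucas-Kanade optical flow. This shows the
# generic technique only, not the paper's method.
import cv2

cap = cv2.VideoCapture("crowd.mp4")  # hypothetical input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
tracks = [[tuple(p.ravel())] for p in points]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    for track, p, s in zip(tracks, new_pts, status.ravel()):
        if s:  # extend only successfully tracked points
            track.append(tuple(p.ravel()))
    points, prev_gray = new_pts, gray

cap.release()
# Spatial and temporal statistics over `tracks` (speed, direction variance,
# local density) could then feed a crowd change detector.
```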
“Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design
Journal on Multimodal User Interfaces - Vol. 15, No. 2 - pp. 87-98 - 2021
Katharina Weitz, Dominik Schiller, Ruben Schlagowski, Tobias B. Huber, Elisabeth André
Abstract: While the research area of artificial intelligence has benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility, especially for end-users. In this paper, we explore the effects of incorporating virtual agents into explainable artificial intelligence (XAI…
Correction to: The Augmented Movement Platform For Embodied Learning (AMPEL): development and reliability
Journal on Multimodal User Interfaces - Vol. 15 - p. 85 - 2021
Lousin Moumdjian, Thomas Vervust, Joren Six, Ivan Schepers, Micheline Lesaffre, Peter Feys, Marc Leman
There was an error in the affiliations of the co-authors Dr. Thomas Vervust and Prof. Peter Feys. Their correct affiliations are given in this correction.
Empathetic video clip experience through timely multimodal interaction
Journal on Multimodal User Interfaces - Vol. 8 - pp. 273-288 - 2014
Myunghee Lee, Gerard Jounghyun Kim
In this article, we describe a video clip playing system, named “Empatheater,” that is controlled by multimodal interaction. As the video clip plays, the user can interact and emulate predefined video “events” through guidance and natural multimodal interaction (e.g. following the main character’s motion, gestures or voice). Without timely interaction, the video stops. The system shows gui…
Visual SceneMaker—a tool for authoring interactive virtual characters
Journal on Multimodal User Interfaces - 2012
Patrick Gebhard, Gregor Mehlmann, Michael Kipp
Empirical investigation of the temporal relations between speech and facial expressions of emotion
Journal on Multimodal User Interfaces - Vol. 3 - pp. 263-270 - 2010
Stéphanie Buisine, Yun Wang, Ouriel Grynszpan
Behavior models implemented within Embodied Conversational Agents (ECAs) require nonverbal communication to be tightly coordinated with speech. In this paper we present an empirical study exploring the influence of the temporal coordination between speech and facial expressions of emotion on users’ perception of these emotions (measuring their performance in this task, the perceive…
Automatic recognition of touch gestures in the corpus of social touch
Journal on Multimodal User Interfaces - 2016
Merel M. Jung, Mannes Poel, Ronald Poppe, Dirk K. J. Heylen
For an artifact such as a robot or a virtual agent to respond appropriately to human social touch behavior, it should be able to automatically detect and recognize touch. This paper describes the data collection of CoST: Corpus of Social Touch, a data set containing 7805 captures of 14 different social touch gestures. All touch gestures were performed in three variants: gentle, normal and rough, on…
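As a hedged sketch of the generic recognition setup such a corpus enables (synthetic stand-in data; the frame size, summary features, and classifier choice below are assumptions for illustration, not the paper's method):

```python
# Sketch: classifying touch gestures from pressure-sensor sequences using
# simple summary statistics and a random forest. The 8x8 frame size and
# feature choice are illustrative assumptions; data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
NUM_GESTURES = 14  # CoST distinguishes 14 social touch gestures

def summarize(capture):
    """Reduce a (frames, 8, 8) pressure sequence to a fixed feature vector."""
    return np.array([capture.mean(), capture.max(),
                     capture.std(), capture.shape[0]])

# Synthetic captures of varying length, standing in for real sensor data.
captures = [rng.random((rng.integers(20, 60), 8, 8)) for _ in range(280)]
labels = rng.integers(0, NUM_GESTURES, len(captures))

X = np.stack([summarize(c) for c in captures])
scores = cross_val_score(RandomForestClassifier(n_estimators=100),
                         X, labels, cv=5)
print("accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```

On real labeled captures (rather than this random stand-in data), richer spatio-temporal features would be needed to separate gesture classes and their gentle/normal/rough variants.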
Total: 298