State synchronous modeling of audio-visual information for bi-modal speech recognition

S. Nakamura, K. Kumatani, S. Tamura
ATR Spoken Language Translation Research Laboratories, Japan
Nara Institute of Science and Technology, Japan

Abstract

Demand has been growing for automatic speech recognition (ASR) systems that operate robustly in acoustically noisy environments. This paper proposes a method for effectively integrating audio and visual information in audio-visual (bi-modal) ASR systems. Such integration inevitably requires modeling the synchronization between the audio and visual streams. To address the time lag and the correlation between speech and lip-movement features, we introduce an integrated HMM of audio-visual information based on HMM composition. The proposed model can represent state synchronicity not only within a phoneme but also between phonemes. Evaluation experiments show that the proposed method improves recognition accuracy for noisy speech.
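As a rough illustration of the HMM-composition idea described above (a minimal sketch, not the paper's implementation), the snippet below builds a product HMM whose composite states pair an audio-stream state with a visual-stream state, and combines the two streams' emission scores with a tunable stream weight. All function names, the Kronecker-product composition, and the weight value are illustrative assumptions.

```python
import numpy as np
from itertools import product

def compose_product_hmm(trans_a, trans_v):
    """Compose audio and visual transition matrices into a product HMM.

    trans_a: (Na, Na) audio-stream transition probabilities
    trans_v: (Nv, Nv) visual-stream transition probabilities
    Returns an (Na*Nv, Na*Nv) matrix over paired states, where
    P[(i,j) -> (k,l)] = trans_a[i,k] * trans_v[j,l], so the two
    streams may advance asynchronously within the composed model.
    """
    return np.kron(trans_a, trans_v)

def log_emission(log_b_audio, log_b_visual, lam=0.7):
    """Stream-weighted emission log-likelihood per composite state.

    log_b_audio, log_b_visual: per-state emission log-likelihoods
    lam: audio stream weight in [0, 1] (assumed tuned on noisy data)
    """
    Na, Nv = len(log_b_audio), len(log_b_visual)
    out = np.empty(Na * Nv)
    # Enumerate (i, j) in the same row-major order used by np.kron.
    for s, (i, j) in enumerate(product(range(Na), range(Nv))):
        out[s] = lam * log_b_audio[i] + (1.0 - lam) * log_b_visual[j]
    return out
```

In this reading, decoding runs a standard Viterbi search over the composite state space; constraining which (i, j) pairings are reachable is what lets such a model enforce synchrony within a phoneme while permitting lag across phoneme boundaries.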

Keywords

Speech recognition, Hidden Markov models, Automatic speech recognition, Working environment noise, Streaming media, Degradation, Spatial databases, Visual databases, Audio databases, Feature extraction
