Toward Movement-invariant Automatic Lipreading and Speech Recognition - Robotics Institute Carnegie Mellon University

P. Duchnowski, M. Hunke, D. Busching, Uwe Meier, and Alex Waibel
Conference Paper, Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP '95), pp. 109-112, May 1995

Abstract

We present the development of a modular system for flexible human-computer interaction via speech. The speech recognition component integrates acoustic and visual information (automatic lip-reading), improving overall recognition, especially in noisy environments. The image of the lips, which constitutes the visual input, is automatically extracted from the camera picture of the speaker's face by the lip locator module. The speaker's face itself is automatically acquired and followed by the face tracker sub-system. Integration of the three functions results in the first bi-modal speech recognizer that allows the speaker reasonable freedom of movement within a possibly noisy room while continuing to communicate with the computer by voice. Compared to audio-alone recognition, the combined system achieves a 20 to 50 percent error-rate reduction under various signal-to-noise conditions.
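To illustrate the kind of audio-visual integration the abstract describes, here is a minimal sketch of one common late-fusion scheme: per-class log-probabilities from the acoustic and visual channels are combined as a weighted sum, with the acoustic weight lowered as noise increases so that lip-reading carries more of the decision. The function names, the weight parameter `lam`, and the per-phone scores below are illustrative assumptions, not the fusion rule actually used in the paper.

```python
import math

def fuse_scores(acoustic_logp, visual_logp, lam=0.7):
    """Weighted sum of per-class log-probabilities from the two channels.
    `lam` is the acoustic weight; in practice it would be reduced at low
    signal-to-noise ratios.  This is a generic late-fusion sketch."""
    return {c: lam * acoustic_logp[c] + (1.0 - lam) * visual_logp[c]
            for c in acoustic_logp}

def classify(acoustic_logp, visual_logp, lam=0.7):
    """Return the class with the highest fused score."""
    fused = fuse_scores(acoustic_logp, visual_logp, lam)
    return max(fused, key=fused.get)

# Hypothetical per-phone log-probabilities, for illustration only.
a = {"b": math.log(0.2), "p": math.log(0.5), "m": math.log(0.3)}
v = {"b": math.log(0.6), "p": math.log(0.1), "m": math.log(0.3)}

print(classify(a, v, lam=0.9))  # acoustic-dominated decision → "p"
print(classify(a, v, lam=0.1))  # vision-dominated decision → "b"
```

Note how the same channel scores yield different decisions as the weight shifts, which is the mechanism behind the noise-dependent gains reported above.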

BibTeX

@conference{Duchnowski-1995-13887,
author = {P. Duchnowski and M. Hunke and D. Busching and Uwe Meier and Alex Waibel},
title = {Toward Movement-invariant Automatic Lipreading and Speech Recognition},
booktitle = {Proceedings of International Conference on Acoustics, Speech, and Signal Processing (ICASSP '95)},
year = {1995},
month = {May},
pages = {109--112},
}