Initialised eigenlip estimator for fast lip tracking using linear regression
Conference Paper, Proceedings of 15th International Conference on Pattern Recognition (ICPR '00), Vol. 3, pp. 178 - 181, September, 2000
Abstract
Multimodal speech processing, in which visual facial features are processed jointly with audio features, is a rapidly advancing field. Lip movements and configurations provide useful information for improving speech and speaker recognition. However, exploiting this visual information requires accurate and fast lip tracking algorithms. A new technique is outlined that estimates the outer lip contour directly from a given lip intensity image via linear regression. This estimate can then be refined by an active shape model that tracks a speaker's lips without requiring time-consuming iterative energy minimization. Performance results are presented against known tracking algorithms on the M2VTS database.
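The following is a minimal sketch of the general idea described in the abstract, not the authors' exact method: project a vectorised lip intensity image onto a PCA ("eigenlip") basis and use linear regression from the projection coefficients to outer-lip contour points, which can then initialise an active shape model. All dimensions, variable names, and the synthetic data are illustrative assumptions.

import numpy as np

# Hypothetical dimensions: each training sample is a vectorised lip-region
# intensity image and a corresponding outer-lip contour (landmark points).
n_train, img_dim, n_landmarks = 200, 32 * 32, 12
rng = np.random.default_rng(0)

# Synthetic stand-ins for real training data.
X = rng.standard_normal((n_train, img_dim))          # lip intensity images (rows)
Y = rng.standard_normal((n_train, 2 * n_landmarks))  # flattened (x, y) contour points

# 1. "Eigenlip" subspace: PCA on the mean-centred intensity images.
x_mean = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - x_mean, full_matrices=False)
k = 20                       # number of eigenlips retained (illustrative)
eigenlips = Vt[:k]           # k x img_dim basis

# 2. Linear regression from eigenlip coefficients to contour coordinates.
C = (X - x_mean) @ eigenlips.T                 # n_train x k projection coefficients
C1 = np.hstack([C, np.ones((n_train, 1))])     # append a bias term
W, *_ = np.linalg.lstsq(C1, Y, rcond=None)     # (k+1) x 2*n_landmarks weights

def estimate_contour(img_vec):
    """Estimate the outer lip contour directly from a lip intensity image."""
    c = (img_vec - x_mean) @ eigenlips.T
    return (np.append(c, 1.0) @ W).reshape(n_landmarks, 2)

# The regressed contour can serve as an initial estimate for an active shape
# model, avoiding iterative energy minimisation from a poor starting point.
contour = estimate_contour(rng.standard_normal(img_dim))
print(contour.shape)  # (12, 2)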
BibTeX
@conference{Lucey-2000-121096,
author = {S. Lucey and S. Sridharan and V. Chandran},
title = {Initialised eigenlip estimator for fast lip tracking using linear regression},
booktitle = {Proceedings of 15th International Conference on Pattern Recognition (ICPR '00)},
year = {2000},
month = {September},
volume = {3},
pages = {178--181},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.