Topic Adaptation for Language Modeling Using Unnormalized Exponential Models
Conference Paper, Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '98), Vol. 2, pp. 681-684, May 1998
Abstract
In this paper, we present novel techniques for performing topic adaptation on an n-gram language model. Given training text labeled with topic information, we automatically identify the most relevant topics for new text. We adapt our language model toward these topics using an exponential model, by adjusting probabilities in our model to agree with those found in the topical subset of the training data. For efficiency, we do not normalize the model; that is, we do not require that the probabilities in the language model sum to 1. With these techniques, we were able to achieve a modest reduction in speech recognition word-error rate in the Broadcast News domain.
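Illustrative sketch (not the authors' implementation): the abstract describes scaling baseline n-gram probabilities toward topic statistics with an exponential factor while skipping normalization. The toy Python below assumes a simple unigram topic factor (p_topic(w) / p_general(w)) raised to a tuning exponent beta; the function names, the exponent, and the toy data are hypothetical and only meant to make the unnormalized-scaling idea concrete.

from collections import Counter

def unigram_probs(tokens):
    """Maximum-likelihood unigram probabilities from a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def adapt_score(word, baseline_prob, general_unigram, topic_unigram, beta=0.5, floor=1e-10):
    """
    Scale a baseline n-gram probability by an exponential topic factor:
        score(w | h) = p_baseline(w | h) * (p_topic(w) / p_general(w)) ** beta
    The result is deliberately left unnormalized: scores need not sum to 1.
    """
    p_topic = topic_unigram.get(word, floor)
    p_general = general_unigram.get(word, floor)
    return baseline_prob * (p_topic / p_general) ** beta

# Toy usage: a general corpus and a small topical subset of it.
general_text = "the market fell the game was close the storm hit".split()
topic_text = "the market fell stocks dropped trading slowed".split()

general_uni = unigram_probs(general_text)
topic_uni = unigram_probs(topic_text)

# Suppose the baseline trigram model assigns p("market" | "the") = 0.05.
adapted = adapt_score("market", 0.05, general_uni, topic_uni, beta=0.5)
print(f"adapted (unnormalized) score: {adapted:.4f}")

Leaving the adapted score unnormalized avoids summing the scaling factors over the entire vocabulary for every context, which is the efficiency motivation stated in the abstract.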
BibTeX
@conference{Chen-1998-16581,
  author    = {Stanley Chen and Kristie Seymore and Ronald Rosenfeld},
  title     = {Topic Adaptation for Language Modeling Using Unnormalized Exponential Models},
  booktitle = {Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '98)},
  year      = {1998},
  month     = {May},
  volume    = {2},
  pages     = {681--684},
}
Copyright notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.