Using Story Topics for Language Model Adaptation
Abstract
The subject matter of any conversation or document can typically be described as some combination of elemental topics. We have developed a language model adaptation scheme that takes a piece of text, chooses the most similar topic clusters from a set of over 5000 elemental topics, and uses topic-specific language models built from the topic clusters to rescore N-best lists. We are able to achieve a 15% reduction in perplexity and a small improvement in word error rate (WER) by using this adaptation. We also investigate the use of a topic tree, where the amount of training data for a specific topic can be judiciously increased in cases where the elemental topic cluster has too few word tokens to build a reliably smoothed and representative language model. Our system is able to fine-tune topic adaptation by interpolating models chosen from thousands of topics, allowing for adaptation to unique, previously unseen combinations of subjects.
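To make the scheme concrete, below is a minimal Python sketch of the pipeline the abstract describes: select the topic clusters most similar to a piece of text, then rescore an N-best list with a linear interpolation of a general language model and the selected topic models. The cosine similarity measure, add-alpha smoothing, unigram models, top-k cutoff, and interpolation weight lam are all illustrative assumptions, not the authors' exact choices; the paper's actual topic models are smoothed language models built from the clusters.

# topic_adapt_sketch.py -- hypothetical sketch of topic-based LM adaptation.
# Similarity measure, smoothing, k, and interpolation weight are assumptions.
import math
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b.get(w, 0) for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def select_topics(doc_counts, topic_clusters, k=3):
    """Pick the k elemental topic clusters most similar to the document."""
    ranked = sorted(topic_clusters,
                    key=lambda t: cosine_similarity(doc_counts, topic_clusters[t]),
                    reverse=True)
    return ranked[:k]

def smoothed_prob(word, counts, vocab_size, alpha=0.5):
    """Add-alpha smoothed unigram probability (a stand-in for the paper's
    smoothed topic language models)."""
    return (counts.get(word, 0) + alpha) / (sum(counts.values()) + alpha * vocab_size)

def rescore_nbest(nbest, general, topics, vocab_size, lam=0.7):
    """Rescore N-best hypotheses with a linear interpolation of the general
    LM and the averaged topic LMs; return the best-scoring hypothesis."""
    def score(hyp):
        logp = 0.0
        for w in hyp.split():
            p_gen = smoothed_prob(w, general, vocab_size)
            p_top = sum(smoothed_prob(w, t, vocab_size) for t in topics) / len(topics)
            logp += math.log(lam * p_gen + (1.0 - lam) * p_top)
        return logp
    return max(nbest, key=score)

# Toy usage: two topic clusters, one document, a two-hypothesis N-best list.
clusters = {
    "finance": Counter("stock market shares market rose".split()),
    "weather": Counter("rain storm wind rain cold".split()),
}
doc = Counter("the market rose on heavy trading".split())
chosen = select_topics(doc, clusters, k=1)          # -> ["finance"]
general_lm = Counter("the a market rain of to".split())
best_hyp = rescore_nbest(["the market rose", "the market rows"],
                         general_lm, [clusters[t] for t in chosen],
                         vocab_size=10000)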
BibTeX
@conference{Seymore-1997-14459,
  author    = {Kristie Seymore and Ronald Rosenfeld},
  title     = {Using Story Topics for Language Model Adaptation},
  booktitle = {Proceedings of the 5th European Conference on Speech Communication and Technology (EUROSPEECH '97)},
  year      = {1997},
  month     = {September},
}