Scalable Backoff Language Models

Kristie Seymore and Ronald Rosenfeld
Conference Paper, Proceedings of the 4th International Conference on Spoken Language Processing (ICSLP '96), Vol. 1, pp. 232-235, October 1996

Abstract

When a trigram backoff language model is created from a large body of text, trigrams and bigrams that occur only a few times in the training text are often excluded from the model in order to decrease the model size. The elimination of n-grams with very low counts is generally believed not to significantly affect model performance. This project investigates the degradation of a trigram backoff model's perplexity and word error rate as the bigram and trigram count cutoffs are increased, weighing the reduction in model size against the increase in perplexity and word error rate. More importantly, this project also investigates alternative criteria for excluding bigrams and trigrams from a backoff language model, other than the number of times an n-gram occurs in the training text. Specifically, a difference method is investigated in which the difference between the logs of the original and backed-off trigram and bigram probabilities is used as the basis for excluding n-grams from the model. We show that excluding trigrams and bigrams based on a weighted version of this difference method yields better perplexity and word error rate performance than excluding them based on counts alone.
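
To make the difference criterion concrete, the Python sketch below scores each n-gram by the difference between the log of its original probability and the log of the probability the model would assign by backing off to the lower-order distribution, and keeps only n-grams whose score exceeds a threshold. This is an illustrative sketch, not the paper's implementation: the function names, the dictionary layout, the threshold value, and the use of the raw training count as the weight are assumptions, since the abstract does not specify the exact weighting scheme.

import math

def weighted_log_prob_difference(count, p_original, p_backoff):
    # Score for one n-gram: the log-probability lost if the n-gram is dropped
    # and the backed-off estimate is used instead, weighted here by the
    # n-gram's training count (an assumed form of weighting).
    return count * (math.log(p_original) - math.log(p_backoff))

def prune_ngrams(ngrams, threshold):
    # `ngrams` maps an n-gram tuple to its training count, its probability in
    # the full model, and the probability obtained by backing off to the
    # lower-order model. N-grams scoring at or below the threshold are
    # excluded, analogous to a count cutoff but driven by the difference score.
    kept = {}
    for gram, info in ngrams.items():
        score = weighted_log_prob_difference(info["count"], info["prob"], info["backoff_prob"])
        if score > threshold:
            kept[gram] = info
    return kept

# Hypothetical example: the first trigram's probability changes substantially
# under backoff, so it is kept; the second barely changes and is pruned even
# though both criteria could be tuned to keep it.
example = {
    ("the", "united", "states"): {"count": 42, "prob": 0.31, "backoff_prob": 0.04},
    ("of", "the", "time"): {"count": 3, "prob": 0.012, "backoff_prob": 0.011},
}
print(list(prune_ngrams(example, threshold=0.5)))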

BibTeX

@conference{Seymore-1996-14213,
author = {Kristie Seymore and Ronald Rosenfeld},
title = {Scalable Backoff Language Models},
booktitle = {Proceedings of 4th International Conference on Spoken Language Processing (ICSLP '96)},
year = {1996},
month = {October},
volume = {1},
pages = {232--235},
}