cs.CL/0108005

From: Joshua Goodman
Date: Thu, 9 Aug 2001 19:24:28 GMT   (379kb)

A Bit of Progress in Language Modeling

Authors: Joshua Goodman
Comments: 73 pages, extended version of paper to appear in Computer Speech and Language
Report-no: MSR-TR-2001-72
Subj-class: Computation and Language
ACM-class: I.2.7
In the past several years, a number of improvements over simple trigram language models have been found, including caching, higher-order n-grams, skipping, interpolated Kneser-Ney smoothing, and clustering. We present explorations of variations on, and of the limits of, each of these techniques, including evidence that sentence mixture models may have more potential. While all of these techniques have been studied separately, they have rarely been studied in combination. We find some significant interactions, especially between the smoothing and clustering techniques. Comparing a combination of all techniques to a Katz-smoothed trigram model with no count cutoffs, we achieve perplexity reductions between 38% and 50% (one bit of entropy), depending on training data size, as well as a word error rate reduction of 8.9%. These are perhaps the largest perplexity reductions reported relative to a fair baseline. This is the extended version of the paper; it contains additional details and proofs, and is designed to be a good introduction to the state of the art in language modeling.
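
For readers unfamiliar with the units: perplexity PP and cross-entropy H (in bits per word) are related by PP = 2^H, so halving perplexity saves exactly one bit. A minimal sketch of the arithmetic (the baseline perplexity of 100 below is illustrative, not a number from the paper):

    import math

    def entropy_reduction_bits(baseline_ppl: float, improved_ppl: float) -> float:
        # Since PP = 2**H, the entropy saving in bits is log2(baseline / improved).
        return math.log2(baseline_ppl / improved_ppl)

    print(entropy_reduction_bits(100.0, 50.0))  # 50% perplexity reduction -> exactly 1.0 bit
    print(entropy_reduction_bits(100.0, 62.0))  # 38% perplexity reduction -> about 0.69 bits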
