Written by Nickolay Shmyrev
Using SRILM server in sphinx4
Recently I've added support for the SRILM language model server to sphinx4, so it's now possible to use much bigger models while keeping the same memory requirements, both during search and, more importantly, during lattice rescoring. Lattice rescoring is still in progress, so here is the idea of how to use a network language model during search.
SRILM has a number of advantages: it implements a few interesting algorithms, and even for simple tasks like trigram language model creation it's way better than cmuclmtk. At the very least, model pruning is supported.
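For example, building an interpolated Kneser-Ney trigram model from a text corpus and then pruning it is a pair of one-liners (corpus.txt and the 1e-8 threshold are placeholders you'd adapt to your data):

# Train an interpolated Kneser-Ney trigram model from corpus.txt
ngram-count -text corpus.txt -order 3 -kndiscount -interpolate -lm your.lm
# Drop n-grams whose removal changes perplexity by less than the threshold
ngram -lm your.lm -prune 1e-8 -write-lm your-pruned.lm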
To start, first dump the language model vocabulary, since the linguist requires it:
ngram -lm your.lm -write-vocab your.vocab
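The resulting vocabulary file is plain text with one word per line, the sentence markers included among the entries, something like:

</s>
<s>
abandon
ability
able
...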
Then start the server itself:

ngram -server-port 5000 -lm your.lm
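To check that the server is up, you can point an ngram client at it, for example by computing the perplexity of a small test file (test.txt is a placeholder; the client address has the form port@host):

# Query the running server instead of loading the model locally
ngram -use-server 5000@localhost -ppl test.txt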
Configure the recognizer:

<component name="rescoringModel"
           type="edu.cmu.sphinx.linguist.language.ngram.NetworkLanguageModel">
    <property name="port" value="5000"/>
    <property name="location" value="your.vocab"/>
    <property name="logMath" value="logMath"/>
</component>
And start the lattice demo. You'll see the result soon.
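If you'd rather drive recognition from your own code than from the demo, the usual sphinx4 pattern applies. This is only a sketch: config.xml and the "recognizer" component name are assumptions about your setup.

import edu.cmu.sphinx.recognizer.Recognizer;
import edu.cmu.sphinx.result.Result;
import edu.cmu.sphinx.util.props.ConfigurationManager;

public class NetworkLmDemo {
    public static void main(String[] args) throws Exception {
        // config.xml is assumed to contain the NetworkLanguageModel
        // component shown above plus the usual recognizer setup
        ConfigurationManager cm = new ConfigurationManager(
                NetworkLmDemo.class.getResource("config.xml"));
        Recognizer recognizer = (Recognizer) cm.lookup("recognizer");
        recognizer.allocate();
        Result result = recognizer.recognize();
        System.out.println(result.getBestResultNoFiller());
        recognizer.deallocate();
    }
}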
Adjust the cache according to the size of your model; it shouldn't need to be large, and for a simple search a cache size of 100000 is typically enough.
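If you need to change it, the cache size should be settable on the language model component. The property name below is my assumption, so check the NetworkLanguageModel source for the actual one:

<property name="cacheSize" value="100000"/> <!-- hypothetical property name -->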
Still, using a large n-gram model directly during search is not reasonable for a typical task because of the large number of word trigrams that have to be tracked. It's more efficient to search with a trigram or even a bigram model first and make a second recognizer pass that rescores with the bigger language model. More details on rescoring in the next posts.
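As a preview of that second pass, SRILM's lattice-tool can rescore a lattice offline with the bigger model, roughly like this, assuming the lattice was dumped in HTK SLF format (file names are placeholders and the exact flags may depend on your SRILM version):

# Replace the LM scores in the lattice with scores from big.lm
lattice-tool -read-htk -in-lattice utt.slf -lm big.lm -order 3 -write-htk -out-lattice utt-rescored.slf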