edu.berkeley.nlp.lm.cache
Class ArrayEncodedCachingLmWrapper<W>
java.lang.Object
  edu.berkeley.nlp.lm.AbstractNgramLanguageModel<W>
    edu.berkeley.nlp.lm.AbstractArrayEncodedNgramLanguageModel<W>
      edu.berkeley.nlp.lm.cache.ArrayEncodedCachingLmWrapper<W>
- Type Parameters:
  W -
- All Implemented Interfaces:
- ArrayEncodedNgramLanguageModel<W>, NgramLanguageModel<W>, Serializable
public class ArrayEncodedCachingLmWrapper<W>
extends AbstractArrayEncodedNgramLanguageModel<W>
This class wraps an ArrayEncodedNgramLanguageModel with a cache; see the usage sketch below.
- Author:
- adampauls
- See Also:
- Serialized Form
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
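A minimal usage sketch follows. It assumes an ArrayEncodedNgramLanguageModel<String> has already been built or loaded elsewhere; the loadBaseLm() helper and the word indices below are hypothetical placeholders, not part of this API.

import edu.berkeley.nlp.lm.ArrayEncodedNgramLanguageModel;
import edu.berkeley.nlp.lm.cache.ArrayEncodedCachingLmWrapper;

public class CachingWrapperExample {
    public static void main(String[] args) {
        // Assumes an ArrayEncodedNgramLanguageModel<String> has already been
        // built or loaded elsewhere; loadBaseLm() is a hypothetical helper
        // standing in for that step.
        ArrayEncodedNgramLanguageModel<String> baseLm = loadBaseLm();

        // Wrap the model so repeated n-gram queries are answered from the cache.
        // This variant is NOT thread-safe; see wrapWithCacheThreadSafe below for
        // the thread-safe alternative.
        ArrayEncodedCachingLmWrapper<String> cachedLm =
                ArrayEncodedCachingLmWrapper.wrapWithCacheNotThreadSafe(baseLm);

        // The wrapper implements the same interfaces as the wrapped model, so it
        // can be used anywhere an ArrayEncodedNgramLanguageModel is expected.
        int[] ngram = { 17, 4, 23 }; // word indices (illustrative values)
        float logProb = cachedLm.getLogProb(ngram, 0, ngram.length);
        System.out.println("log prob = " + logProb);
    }

    private static ArrayEncodedNgramLanguageModel<String> loadBaseLm() {
        throw new UnsupportedOperationException("placeholder: load or build an LM here");
    }
}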
wrapWithCacheNotThreadSafe
public static <W> ArrayEncodedCachingLmWrapper<W> wrapWithCacheNotThreadSafe(ArrayEncodedNgramLanguageModel<W> lm)
- To use this wrapper in a multithreaded environment, you should create one
  wrapper per thread (see the per-thread sketch below).
- Type Parameters:
  W -
- Parameters:
  lm - the language model to wrap
- Returns:
  a caching wrapper around lm
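One way to follow the one-wrapper-per-thread advice above is to keep a non-thread-safe wrapper in a ThreadLocal. This is only a sketch, not the library's own recommended code, and it assumes the underlying ArrayEncodedNgramLanguageModel is safe for concurrent reads.

import edu.berkeley.nlp.lm.ArrayEncodedNgramLanguageModel;
import edu.berkeley.nlp.lm.cache.ArrayEncodedCachingLmWrapper;

public class PerThreadCacheExample {
    // The shared, uncached model; assumed to be safe for concurrent reads.
    private final ArrayEncodedNgramLanguageModel<String> baseLm;

    // One non-thread-safe caching wrapper per calling thread.
    private final ThreadLocal<ArrayEncodedCachingLmWrapper<String>> cachePerThread;

    public PerThreadCacheExample(ArrayEncodedNgramLanguageModel<String> baseLm) {
        this.baseLm = baseLm;
        this.cachePerThread = ThreadLocal.withInitial(
                () -> ArrayEncodedCachingLmWrapper.wrapWithCacheNotThreadSafe(this.baseLm));
    }

    public float score(int[] ngram) {
        // Each worker thread gets (and reuses) its own wrapper instance.
        return cachePerThread.get().getLogProb(ngram, 0, ngram.length);
    }
}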
wrapWithCacheNotThreadSafe
public static <W> ArrayEncodedCachingLmWrapper<W> wrapWithCacheNotThreadSafe(ArrayEncodedNgramLanguageModel<W> lm,
int cacheBits)
wrapWithCacheThreadSafe
public static <W> ArrayEncodedCachingLmWrapper<W> wrapWithCacheThreadSafe(ArrayEncodedNgramLanguageModel<W> lm)
- This type of caching is thread-safe and (internally) maintains a separate
  cache for each thread that calls it. Note that each thread has its own cache,
  so if you have many threads, memory usage can be substantial (see the sketch
  below).
- Type Parameters:
  W -
- Parameters:
  lm - the language model to wrap
- Returns:
  a caching wrapper around lm
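A sketch of sharing a single thread-safe wrapper across a thread pool. It assumes the n-grams have already been converted to word indices; the class and variable names are illustrative only.

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import edu.berkeley.nlp.lm.ArrayEncodedNgramLanguageModel;
import edu.berkeley.nlp.lm.cache.ArrayEncodedCachingLmWrapper;

public class ThreadSafeCacheExample {
    public static void scoreInParallel(ArrayEncodedNgramLanguageModel<String> baseLm,
                                       List<int[]> ngrams) throws InterruptedException {
        // A single thread-safe wrapper can be shared by all workers; it keeps a
        // separate cache for each calling thread internally.
        ArrayEncodedCachingLmWrapper<String> cachedLm =
                ArrayEncodedCachingLmWrapper.wrapWithCacheThreadSafe(baseLm);

        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int[] ngram : ngrams) {
            pool.submit(() -> {
                float logProb = cachedLm.getLogProb(ngram, 0, ngram.length);
                System.out.println("log prob = " + logProb);
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}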
wrapWithCacheThreadSafe
public static <W> ArrayEncodedCachingLmWrapper<W> wrapWithCacheThreadSafe(ArrayEncodedNgramLanguageModel<W> lm,
int cacheBits)
getLogProb
public float getLogProb(int[] ngram,
int startPos,
int endPos)
- Description copied from interface: ArrayEncodedNgramLanguageModel
- Calculate language model score of an n-gram. Warning: if you pass in an
  n-gram of length greater than getLmOrder(), this call will silently ignore
  the extra words of context. In other words, if you pass in a 5-gram
  (endPos - startPos == 5) to a 3-gram model, it will only score the words
  from startPos + 2 to endPos (see the example after this entry).
- Specified by:
getLogProb
in interface ArrayEncodedNgramLanguageModel<W>
- Specified by:
getLogProb
in class AbstractArrayEncodedNgramLanguageModel<W>
- Parameters:
  ngram - array of words in integer representation
  startPos - start of the portion of the array to be read
  endPos - end of the portion of the array to be read
- Returns:
  the language model score of the n-gram
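To illustrate the truncation warning above, the sketch below assumes a model whose getLmOrder() is 3 and uses made-up word indices; per the description, both calls should score the same three words.

import edu.berkeley.nlp.lm.ArrayEncodedNgramLanguageModel;

public class GetLogProbTruncationExample {
    // Illustrates the truncation warning for a model with getLmOrder() == 3.
    public static void demo(ArrayEncodedNgramLanguageModel<String> trigramLm) {
        int[] fiveGram = { 8, 2, 91, 7, 33 }; // word indices (illustrative values)

        // Extra context is silently ignored: only the words at positions
        // startPos + 2 .. endPos are scored by a 3-gram model.
        float withExtraContext = trigramLm.getLogProb(fiveGram, 0, 5);
        float lastThreeOnly    = trigramLm.getLogProb(fiveGram, 2, 5);

        // Per the warning above, both calls score the same three words.
        System.out.println(withExtraContext + " == " + lastThreeOnly);
    }
}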