edu.berkeley.nlp.lm
Class NgramLanguageModel.StaticMethods

java.lang.Object
  extended by edu.berkeley.nlp.lm.NgramLanguageModel.StaticMethods
Enclosing interface:
NgramLanguageModel<W>

public static class NgramLanguageModel.StaticMethods
extends Object


Constructor Summary
NgramLanguageModel.StaticMethods()
           
 
Method Summary
static
<W> Counter<W>
getDistributionOverNextWords(NgramLanguageModel<W> lm, List<W> context)
          Builds a distribution over next possible words given the context.
static
<W> List<W>
sample(Random random, NgramLanguageModel<W> lm)
          Samples from this language model.
static
<W> List<W>
sample(Random random, NgramLanguageModel<W> lm, double sampleTemperature)
           
static
<T> int[]
toIntArray(List<T> ngram, ArrayEncodedNgramLanguageModel<T> lm)
           
static
<T> List<T>
toObjectList(int[] ngram, ArrayEncodedNgramLanguageModel<T> lm)
           
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

NgramLanguageModel.StaticMethods

public NgramLanguageModel.StaticMethods()
Method Detail

toIntArray

public static <T> int[] toIntArray(List<T> ngram,
                                   ArrayEncodedNgramLanguageModel<T> lm)

toObjectList

public static <T> List<T> toObjectList(int[] ngram,
                                       ArrayEncodedNgramLanguageModel<T> lm)
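
These two helpers convert between the object view of an n-gram (a List of words) and the array-encoded view (an int[] of word indices) used by ArrayEncodedNgramLanguageModel. As a rough sketch of the conversion they perform, here a plain Map stands in for the model's word indexer; the class and method names below are illustrative, not the library's own:

```java
import java.util.*;

// Hypothetical sketch of the List<T> <-> int[] conversion these helpers
// perform. The real code goes through the model's word indexer; a Map
// stands in for it here.
class NgramEncodingSketch {
    private final Map<String, Integer> index = new HashMap<>();
    private final List<String> words = new ArrayList<>();

    // Return the index for a word, assigning a fresh one on first sight.
    int getOrAddIndex(String word) {
        Integer i = index.get(word);
        if (i != null) return i;
        index.put(word, words.size());
        words.add(word);
        return words.size() - 1;
    }

    // List of words -> array of word indices.
    int[] toIntArray(List<String> ngram) {
        int[] out = new int[ngram.size()];
        for (int i = 0; i < ngram.size(); i++) out[i] = getOrAddIndex(ngram.get(i));
        return out;
    }

    // Array of word indices -> list of words.
    List<String> toObjectList(int[] ngram) {
        List<String> out = new ArrayList<>(ngram.length);
        for (int id : ngram) out.add(words.get(id));
        return out;
    }
}
```

The two conversions are inverses, so round-tripping a list through toIntArray and toObjectList returns the original words.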

sample

public static <W> List<W> sample(Random random,
                                 NgramLanguageModel<W> lm)
Samples from this language model. This is not meant to be particularly efficient.

Parameters:
random - the source of randomness used to draw each word
Returns:
the sampled sequence of words
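
The usual way to sample a sentence from an n-gram model is ancestral sampling: draw one word at a time from the conditional distribution given the preceding context until an end-of-sentence token is drawn. The sketch below shows that technique with a tiny hard-coded bigram table standing in for the real NgramLanguageModel; the table and class name are illustrative assumptions, not the library's internals:

```java
import java.util.*;

// Sketch of ancestral sampling from an n-gram LM: draw words one at a
// time from P(w | previous word) until the end-of-sentence marker.
// A toy bigram table stands in for the real model.
class LmSampleSketch {
    static final Map<String, Map<String, Double>> BIGRAMS = new HashMap<>();
    static {
        BIGRAMS.put("<s>", Map.of("the", 1.0));
        BIGRAMS.put("the", Map.of("cat", 0.5, "dog", 0.5));
        BIGRAMS.put("cat", Map.of("</s>", 1.0));
        BIGRAMS.put("dog", Map.of("</s>", 1.0));
    }

    static List<String> sample(Random random) {
        List<String> out = new ArrayList<>();
        String prev = "<s>";
        while (true) {
            Map<String, Double> dist = BIGRAMS.get(prev);
            // Draw from the conditional distribution by inverse CDF.
            double r = random.nextDouble(), cum = 0.0;
            String next = null;
            for (Map.Entry<String, Double> e : dist.entrySet()) {
                cum += e.getValue();
                if (r < cum) { next = e.getKey(); break; }
            }
            if (next == null) next = dist.keySet().iterator().next(); // rounding guard
            if (next.equals("</s>")) return out;  // sentence finished
            out.add(next);
            prev = next;
        }
    }
}
```

Each draw is a linear scan over the candidate words, which matches the caveat above that sampling is not meant to be particularly efficient.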

sample

public static <W> List<W> sample(Random random,
                                 NgramLanguageModel<W> lm,
                                 double sampleTemperature)
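
This overload is undocumented, but a temperature parameter conventionally reshapes the distribution before sampling: each probability p is rescaled to p^(1/T) and renormalized (equivalently, log-probabilities are divided by T), so T < 1 sharpens the distribution toward the mode and T > 1 flattens it toward uniform. The sketch below shows that standard reshaping step as an assumption about what sampleTemperature does, not as the library's confirmed implementation:

```java
import java.util.*;

// Assumed temperature reshaping, by convention: p_i -> p_i^(1/T),
// renormalized. T < 1 sharpens the distribution, T > 1 flattens it.
class TemperatureSketch {
    static Map<String, Double> applyTemperature(Map<String, Double> dist, double temperature) {
        Map<String, Double> out = new LinkedHashMap<>();
        double sum = 0.0;
        for (Map.Entry<String, Double> e : dist.entrySet()) {
            double p = Math.pow(e.getValue(), 1.0 / temperature);
            out.put(e.getKey(), p);
            sum += p;
        }
        // Renormalize so the reshaped values form a probability distribution.
        for (Map.Entry<String, Double> e : out.entrySet()) e.setValue(e.getValue() / sum);
        return out;
    }
}
```

With a temperature of 1.0 the distribution is unchanged, so this overload would then reduce to the plain sample(Random, NgramLanguageModel) above.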

getDistributionOverNextWords

public static <W> Counter<W> getDistributionOverNextWords(NgramLanguageModel<W> lm,
                                                          List<W> context)
Builds a distribution over next possible words given the context. Context can be of any length, but only at most lm.getLmOrder() - 1 words are actually used.

Type Parameters:
W - the word type
Parameters:
lm - the language model to query
context - the preceding words
Returns:
a Counter over possible next words
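
The behavior described above can be sketched as: truncate the context to its last lm.getLmOrder() - 1 words, then look up the conditional distribution over next words given that truncated context. In the sketch below a toy bigram model (order 2) stands in for the real NgramLanguageModel, and the returned Map stands in for the library's Counter; both substitutions are illustrative assumptions:

```java
import java.util.*;

// Sketch of building a next-word distribution as described above:
// context may be any length, but only the last order - 1 words are used.
// A toy bigram model (order = 2) stands in for the real LM, and a Map
// stands in for the Counter return type.
class NextWordDistSketch {
    static final int ORDER = 2;
    static final Map<String, Map<String, Double>> BIGRAMS =
            Map.of("the", Map.of("cat", 0.5, "dog", 0.5));

    static Map<String, Double> getDistributionOverNextWords(List<String> context) {
        // Keep only the last ORDER - 1 context words.
        List<String> used =
                context.subList(Math.max(0, context.size() - (ORDER - 1)), context.size());
        String prev = used.isEmpty() ? "<s>" : used.get(used.size() - 1);
        // Look up P(w | truncated context) for every candidate next word.
        return BIGRAMS.getOrDefault(prev, Map.of());
    }
}
```

Because of the truncation, passing a long context such as ["we", "saw", "the"] to a bigram model yields the same distribution as passing just ["the"].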