diff --git a/docs/source/decoding-with-langugage-models/LODR.rst b/docs/source/decoding-with-langugage-models/LODR.rst
index e97824955..9e853bc24 100644
--- a/docs/source/decoding-with-langugage-models/LODR.rst
+++ b/docs/source/decoding-with-langugage-models/LODR.rst
@@ -59,12 +59,14 @@ during decoding for RNNT model:
 
 In LODR, an additional bi-gram LM estimated on the training corpus is required apart from the neural LM.
-Comared to DR, the only difference lies in the choice of source domain LM. According to the original `paper `_,
-LODR achieves similar performance compared DR. As a bi-gram is much faster to evaluate, LODR
-is usually much faster.
+Compared to DR, the only difference lies in the choice of source domain LM. According to the original `paper `_,
+LODR achieves similar performance compared to DR. As a bi-gram is much faster to evaluate, LODR
+is usually much faster. Note that although DR/LODR was originally proposed to address the domain
+mismatch between training and testing, it still achieves very good results on intra-domain evaluation.
 
 Now, we will show you how to use LODR in ``icefall``.
-For illustration purpose, we will use a pre-trained ASR model from this `link `_.
+For illustration purposes, we will use a pre-trained ASR model from this `link `_.
 If you want to train your model from scratch, please have a look at :ref:`non_streaming_librispeech_pruned_transducer_stateless`.
+The testing scenario here is intra-domain.
 
 As the initial step, let's download the pre-trained model.
 
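
The core idea the patched text refers to, combining the RNNT score with a neural LM while discounting a source-domain bi-gram LM, can be sketched as below. This is a minimal illustration only, not the actual ``icefall`` implementation: the function name ``lodr_score``, the parameter names ``lm_scale`` and ``lodr_scale``, and the default scale values are all hypothetical.

```python
# Hypothetical sketch of LODR-style score combination (not icefall's real API).
# LODR shallow fusion adds the neural-LM log-probability and subtracts the
# source-domain bi-gram log-probability from the RNNT token log-probability.
def lodr_score(rnnt_logp: float,
               neural_lm_logp: float,
               bigram_logp: float,
               lm_scale: float = 0.4,      # assumed weight for the neural LM
               lodr_scale: float = -0.16,  # assumed (negative) weight for the bi-gram LM
               ) -> float:
    """Combine per-token log-probabilities; because ``lodr_scale`` is
    negative, the bi-gram (source-domain) LM score is subtracted."""
    return rnnt_logp + lm_scale * neural_lm_logp + lodr_scale * bigram_logp

# A token that the training-corpus bi-gram LM already rates as likely has
# part of that contribution discounted, letting the neural LM dominate.
score = lodr_score(rnnt_logp=-1.0, neural_lm_logp=-2.0, bigram_logp=-0.5)
```

Since the bi-gram LM is evaluated with a cheap table lookup while the neural LM needs a forward pass, this combination adds little overhead compared to plain neural-LM shallow fusion, which is why LODR is usually faster than DR.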