From 2f1af8f30309e9d5096628473d1a3741b79d9161 Mon Sep 17 00:00:00 2001
From: marcoyang1998 <45973641+marcoyang1998@users.noreply.github.com>
Date: Thu, 29 Jun 2023 12:10:28 +0800
Subject: [PATCH] Update docs/source/decoding-with-langugage-models/LODR.rst

Co-authored-by: Fangjun Kuang
---
 docs/source/decoding-with-langugage-models/LODR.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/decoding-with-langugage-models/LODR.rst b/docs/source/decoding-with-langugage-models/LODR.rst
index 37cce91c9..453624e39 100644
--- a/docs/source/decoding-with-langugage-models/LODR.rst
+++ b/docs/source/decoding-with-langugage-models/LODR.rst
@@ -148,7 +148,7 @@ Then, we perform LODR decoding by setting ``--decoding-method`` to ``modified_be
     --tokens-ngram 2 \
     --ngram-lm-scale $LODR_scale
 
-There are two extra arguments need to be given when doing LODR. ``--tokens-ngram`` specifies the order of n-gram. As we
+There are two extra arguments that need to be given when doing LODR. ``--tokens-ngram`` specifies the order of n-gram. As we
 are using a bi-gram, we set it to 2. ``--ngram-lm-scale`` is the scale of the bi-gram, it should be a negative number as we
 are subtracting the bi-gram's score during decoding.
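For context, the two flags this patch documents would be passed together when invoking LODR decoding. A minimal sketch follows; the script name `decode.py` and the full decoding-method value are assumptions for illustration (the patch itself only shows the truncated ``modified_be...``), not taken verbatim from the repository.

```shell
#!/bin/sh
# Hypothetical LODR decoding invocation. The script path and the
# decoding-method value are assumptions; only --tokens-ngram and
# --ngram-lm-scale come from the patched documentation.

# Scale for the bi-gram token LM. It must be NEGATIVE, because the
# bi-gram's score is subtracted during decoding. The value -0.24 is
# an illustrative placeholder, not a recommendation from the patch.
LODR_scale=-0.24

# Build the command as a string so it can be inspected before running.
decode_cmd="./decode.py \
  --decoding-method modified_beam_search_LODR \
  --tokens-ngram 2 \
  --ngram-lm-scale $LODR_scale"

# --tokens-ngram 2 selects a bi-gram (order-2 n-gram) token LM.
echo "$decode_cmd"
```

Printing the command instead of executing it keeps the sketch runnable without an icefall checkout; in practice one would run the assembled command directly.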