mirror of https://github.com/k2-fsa/icefall.git

fix typo (#1324)

parent 973dc1026d
commit eef47adee9
@@ -56,7 +56,7 @@ during decoding for transducer model:
    \lambda_1 \log p_{\text{Target LM}}\left(y_u|\mathit{x},y_{1:u-1}\right) -
    \lambda_2 \log p_{\text{bi-gram}}\left(y_u|\mathit{x},y_{1:u-1}\right)
 
-In LODR, an additional bi-gram LM estimated on the source domain (e.g training corpus) is required. Comared to DR,
+In LODR, an additional bi-gram LM estimated on the source domain (e.g training corpus) is required. Compared to DR,
 the only difference lies in the choice of source domain LM. According to the original `paper <https://arxiv.org/abs/2203.16776>`_,
 LODR achieves similar performance compared DR in both intra-domain and cross-domain settings.
 As a bi-gram is much faster to evaluate, LODR is usually much faster.
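
The two lambda terms above are only the tail of the LODR decoding score; the transducer term sits just above this hunk and is not shown. A reconstruction of the full expression from these fragments (the exact form of the first term is an assumption, since it falls outside the hunk):

.. math::

   \text{score}\left(y_u|\mathit{x},y\right) =
   \log p_{\text{rnnt}}\left(y_u|\mathit{x},y_{1:u-1}\right) +
   \lambda_1 \log p_{\text{Target LM}}\left(y_u|\mathit{x},y_{1:u-1}\right) -
   \lambda_2 \log p_{\text{bi-gram}}\left(y_u|\mathit{x},y_{1:u-1}\right)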
@@ -125,7 +125,7 @@ Python code. We have also set up ``PATH`` so that you can use
 .. caution::
 
    Please don't use `<https://github.com/tencent/ncnn>`_.
-   We have made some modifications to the offical `ncnn`_.
+   We have made some modifications to the official `ncnn`_.
 
    We will synchronize `<https://github.com/csukuangfj/ncnn>`_ periodically
    with the official one.
@@ -203,7 +203,7 @@ def get_parser():
         "--beam-size",
         type=int,
         default=4,
-        help="""An interger indicating how many candidates we will keep for each
+        help="""An integer indicating how many candidates we will keep for each
         frame. Used only when --decoding-method is beam_search or
         modified_beam_search.""",
     )
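
This hunk shows only the body of one argument; a minimal sketch of the surrounding call, assuming the usual argparse pattern (the parser variable name and the return statement are assumptions, as they fall outside the hunk):

import argparse

def get_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    # The diff above edits the help string of this argument.
    parser.add_argument(
        "--beam-size",
        type=int,
        default=4,
        help="""An integer indicating how many candidates we will keep for each
        frame. Used only when --decoding-method is beam_search or
        modified_beam_search.""",
    )
    return parser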
@@ -78,7 +78,7 @@ def add_finetune_arguments(parser: argparse.ArgumentParser):
         default=None,
         help="""
         Modules to be initialized. It matches all parameters starting with
-        a specific key. The keys are given with Comma seperated. If None,
+        a specific key. The keys are given with Comma separated. If None,
         all modules will be initialised. For example, if you only want to
         initialise all parameters staring with "encoder", use "encoder";
         if you want to initialise parameters starting with encoder or decoder,
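
The help text describes prefix matching over parameter names with comma-separated keys. A minimal sketch of that behaviour (the helper name and exact logic are assumptions; the real fine-tuning code may differ):

def select_params_by_prefix(model, init_modules):
    """Keep parameters whose names start with one of the comma-separated
    key prefixes, e.g. "encoder,decoder"; keep everything if None.
    A sketch only, not the actual icefall implementation."""
    if init_modules is None:
        # No filter given: initialise all modules.
        return dict(model.named_parameters())
    prefixes = [key.strip() for key in init_modules.split(",")]
    return {
        name: param
        for name, param in model.named_parameters()
        if any(name.startswith(prefix) for prefix in prefixes)
    }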
@@ -1977,7 +1977,7 @@ def parse_timestamps_and_texts(
         A k2.Fsa with best_paths.arcs.num_axes() == 3, i.e.
         containing multiple FSAs, which is expected to be the result
         of k2.shortest_path (otherwise the returned values won't
-        be meaningful). Attribtute `labels` is the prediction unit,
+        be meaningful). Attribute `labels` is the prediction unit,
         e.g., phone or BPE tokens. Attribute `aux_labels` is the word index.
       word_table:
         The word symbol table.
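
The docstring expects `best_paths` to come from `k2.shortest_path`. A minimal usage sketch under that assumption (`lattice` and `word_table` are assumed to exist elsewhere, and the return value is left unnamed because the hunk does not show it):

import k2

# `lattice` is a decoding lattice built elsewhere; not defined in this hunk.
best_paths = k2.shortest_path(lattice, use_double_scores=True)
assert best_paths.arcs.num_axes() == 3  # one FSA per utterance
# `labels` carries prediction units (phones / BPE tokens) and
# `aux_labels` carries word indices that `word_table` maps back to strings.
results = parse_timestamps_and_texts(best_paths, word_table)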