deploy: 800bf4b6a2e32745e7d0c31dd78d473f1faff509

JinZr 2023-10-27 05:17:51 +00:00
parent 830033a735
commit 3210246c91
15 changed files with 22 additions and 22 deletions

View File

@@ -67,7 +67,7 @@ To run stage 2 to stage 5, use:
 .. HINT::
 A 3-gram language model will be downloaded from huggingface, we assume you have
-intalled and initialized ``git-lfs``. If not, you could install ``git-lfs`` by
+installed and initialized ``git-lfs``. If not, you could install ``git-lfs`` by
 .. code-block:: bash

View File

@@ -67,7 +67,7 @@ To run stage 2 to stage 5, use:
 .. HINT::
 A 3-gram language model will be downloaded from huggingface, we assume you have
-intalled and initialized ``git-lfs``. If not, you could install ``git-lfs`` by
+installed and initialized ``git-lfs``. If not, you could install ``git-lfs`` by
 .. code-block:: bash

View File

@@ -418,7 +418,7 @@ The following shows two examples (for two types of checkpoints):
 - ``beam_search`` : It implements Algorithm 1 in https://arxiv.org/pdf/1211.3711.pdf and
 `espnet/nets/beam_search_transducer.py <https://github.com/espnet/espnet/blob/master/espnet/nets/beam_search_transducer.py#L247>`_
-is used as a reference. Basicly, it keeps topk states for each frame, and expands the kept states with their own contexts to
+is used as a reference. Basically, it keeps topk states for each frame, and expands the kept states with their own contexts to
 next frame.
 - ``modified_beam_search`` : It implements the same algorithm as ``beam_search`` above, but it
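
The ``beam_search`` hunk above summarizes the algorithm as keeping the top-k states at each frame and expanding each kept state with its own context into the next frame. Below is a minimal Python sketch of that idea, for orientation only: it is not the icefall or espnet implementation, it emits at most one non-blank symbol per frame (closer to the ``modified_beam_search`` variant mentioned in the same hunk), it omits the prefix handling of Algorithm 1, and ``joiner_log_probs``, ``dummy_scorer``, and ``blank_id`` are hypothetical stand-ins for a real transducer model.

.. code-block:: python

   import math
   import random
   from dataclasses import dataclass
   from typing import Callable, List


   @dataclass
   class Hypothesis:
       tokens: List[int]      # non-blank symbols emitted so far (this hypothesis' own context)
       log_prob: float = 0.0  # accumulated log-probability


   def toy_beam_search(
       num_frames: int,
       joiner_log_probs: Callable[[List[int], int], List[float]],  # hypothetical: (context, frame) -> log-probs over vocab
       blank_id: int = 0,
       beam: int = 4,
   ) -> List[int]:
       """Keep ``beam`` hypotheses per frame; expand each with its own context."""
       hyps = [Hypothesis(tokens=[])]
       for t in range(num_frames):
           expanded: List[Hypothesis] = []
           for hyp in hyps:
               for sym, lp in enumerate(joiner_log_probs(hyp.tokens, t)):
                   if sym == blank_id:
                       # blank: the context is unchanged, only time advances
                       expanded.append(Hypothesis(hyp.tokens, hyp.log_prob + lp))
                   else:
                       # non-blank: extend this hypothesis' own context
                       expanded.append(Hypothesis(hyp.tokens + [sym], hyp.log_prob + lp))
           # prune to the top-k states before moving to the next frame
           hyps = sorted(expanded, key=lambda h: h.log_prob, reverse=True)[:beam]
       return max(hyps, key=lambda h: h.log_prob).tokens


   if __name__ == "__main__":
       random.seed(0)
       vocab_size = 5

       def dummy_scorer(context: List[int], frame: int) -> List[float]:
           # stand-in for joiner(encoder_out[frame], decoder(context)), normalized to log-probs
           logits = [random.random() for _ in range(vocab_size)]
           log_norm = math.log(sum(math.exp(x) for x in logits))
           return [x - log_norm for x in logits]

       print(toy_beam_search(num_frames=6, joiner_log_probs=dummy_scorer))

The real implementations additionally batch over utterances and cache decoder state; the sketch only mirrors the per-frame top-k expansion described in the hunk.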

View File

@@ -1,6 +1,6 @@
 .. _train_nnlm:
-Train an RNN langugage model
+Train an RNN language model
 ======================================
 If you have enough text data, you can train a neural network language model (NNLM) to improve

View File

@@ -20,7 +20,7 @@
 <link rel="index" title="Index" href="../genindex.html" />
 <link rel="search" title="Search" href="../search.html" />
 <link rel="next" title="Contributing to Documentation" href="doc.html" />
-<link rel="prev" title="Train an RNN langugage model" href="../recipes/RNN-LM/librispeech/lm-training.html" />
+<link rel="prev" title="Train an RNN language model" href="../recipes/RNN-LM/librispeech/lm-training.html" />
 </head>
 <body class="wy-body-for-nav">
@@ -133,7 +133,7 @@ and code to <code class="docutils literal notranslate"><span class="pre">icefall
 </div>
 </div>
 <footer><div class="rst-footer-buttons" role="navigation" aria-label="Footer">
-<a href="../recipes/RNN-LM/librispeech/lm-training.html" class="btn btn-neutral float-left" title="Train an RNN langugage model" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left" aria-hidden="true"></span> Previous</a>
+<a href="../recipes/RNN-LM/librispeech/lm-training.html" class="btn btn-neutral float-left" title="Train an RNN language model" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left" aria-hidden="true"></span> Previous</a>
 <a href="doc.html" class="btn btn-neutral float-right" title="Contributing to Documentation" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right" aria-hidden="true"></span></a>
 </div>

View File

@@ -94,7 +94,7 @@
 <h1>Decoding with language models<a class="headerlink" href="#decoding-with-language-models" title="Permalink to this heading"></a></h1>
 <p>This section describes how to use external langugage models
 during decoding to improve the WER of transducer models. To train an external language model,
-please refer to this tutorial: <a class="reference internal" href="../recipes/RNN-LM/librispeech/lm-training.html#train-nnlm"><span class="std std-ref">Train an RNN langugage model</span></a>.</p>
+please refer to this tutorial: <a class="reference internal" href="../recipes/RNN-LM/librispeech/lm-training.html#train-nnlm"><span class="std std-ref">Train an RNN language model</span></a>.</p>
 <p>The following decoding methods with external langugage models are available:</p>
 <table class="docutils align-default">
 <colgroup>

View File

@@ -152,7 +152,7 @@ speech recognition recipes using <a class="reference external" href="https://git
 </ul>
 </li>
 <li class="toctree-l2"><a class="reference internal" href="recipes/RNN-LM/index.html">RNN-LM</a><ul>
-<li class="toctree-l3"><a class="reference internal" href="recipes/RNN-LM/librispeech/lm-training.html">Train an RNN langugage model</a></li>
+<li class="toctree-l3"><a class="reference internal" href="recipes/RNN-LM/librispeech/lm-training.html">Train an RNN language model</a></li>
 </ul>
 </li>
 </ul>

Binary file not shown.

View File

@@ -176,7 +176,7 @@ the <code class="docutils literal notranslate"><span class="pre">dl_dir</span></
 <div class="admonition hint">
 <p class="admonition-title">Hint</p>
 <p>A 3-gram language model will be downloaded from huggingface, we assume you have
-intalled and initialized <code class="docutils literal notranslate"><span class="pre">git-lfs</span></code>. If not, you could install <code class="docutils literal notranslate"><span class="pre">git-lfs</span></code> by</p>
+installed and initialized <code class="docutils literal notranslate"><span class="pre">git-lfs</span></code>. If not, you could install <code class="docutils literal notranslate"><span class="pre">git-lfs</span></code> by</p>
 <div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>$<span class="w"> </span>sudo<span class="w"> </span>apt-get<span class="w"> </span>install<span class="w"> </span>git-lfs
 $<span class="w"> </span>git-lfs<span class="w"> </span>install
 </pre></div>

View File

@@ -176,7 +176,7 @@ the <code class="docutils literal notranslate"><span class="pre">dl_dir</span></
 <div class="admonition hint">
 <p class="admonition-title">Hint</p>
 <p>A 3-gram language model will be downloaded from huggingface, we assume you have
-intalled and initialized <code class="docutils literal notranslate"><span class="pre">git-lfs</span></code>. If not, you could install <code class="docutils literal notranslate"><span class="pre">git-lfs</span></code> by</p>
+installed and initialized <code class="docutils literal notranslate"><span class="pre">git-lfs</span></code>. If not, you could install <code class="docutils literal notranslate"><span class="pre">git-lfs</span></code> by</p>
 <div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>$<span class="w"> </span>sudo<span class="w"> </span>apt-get<span class="w"> </span>install<span class="w"> </span>git-lfs
 $<span class="w"> </span>git-lfs<span class="w"> </span>install
 </pre></div>

View File

@@ -502,7 +502,7 @@ $<span class="w"> </span>./pruned_transducer_stateless4/decode.py<span class="w"
 of each frame as the decoding result.</p></li>
 <li><p><code class="docutils literal notranslate"><span class="pre">beam_search</span></code> : It implements Algorithm 1 in <a class="reference external" href="https://arxiv.org/pdf/1211.3711.pdf">https://arxiv.org/pdf/1211.3711.pdf</a> and
 <a class="reference external" href="https://github.com/espnet/espnet/blob/master/espnet/nets/beam_search_transducer.py#L247">espnet/nets/beam_search_transducer.py</a>
-is used as a reference. Basicly, it keeps topk states for each frame, and expands the kept states with their own contexts to
+is used as a reference. Basically, it keeps topk states for each frame, and expands the kept states with their own contexts to
 next frame.</p></li>
 <li><p><code class="docutils literal notranslate"><span class="pre">modified_beam_search</span></code> : It implements the same algorithm as <code class="docutils literal notranslate"><span class="pre">beam_search</span></code> above, but it
 runs in batch mode with <code class="docutils literal notranslate"><span class="pre">--max-sym-per-frame=1</span></code> being hardcoded.</p></li>

View File

@@ -19,7 +19,7 @@
 <script src="../../_static/js/theme.js"></script>
 <link rel="index" title="Index" href="../../genindex.html" />
 <link rel="search" title="Search" href="../../search.html" />
-<link rel="next" title="Train an RNN langugage model" href="librispeech/lm-training.html" />
+<link rel="next" title="Train an RNN language model" href="librispeech/lm-training.html" />
 <link rel="prev" title="Zipformer Transducer" href="../Streaming-ASR/librispeech/zipformer_transducer.html" />
 </head>
@@ -55,7 +55,7 @@
 <li class="toctree-l2"><a class="reference internal" href="../Non-streaming-ASR/index.html">Non Streaming ASR</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../Streaming-ASR/index.html">Streaming ASR</a></li>
 <li class="toctree-l2 current"><a class="current reference internal" href="#">RNN-LM</a><ul>
-<li class="toctree-l3"><a class="reference internal" href="librispeech/lm-training.html">Train an RNN langugage model</a></li>
+<li class="toctree-l3"><a class="reference internal" href="librispeech/lm-training.html">Train an RNN language model</a></li>
 </ul>
 </li>
 </ul>
@@ -98,7 +98,7 @@
 <h1>RNN-LM<a class="headerlink" href="#rnn-lm" title="Permalink to this heading"></a></h1>
 <div class="toctree-wrapper compound">
 <ul>
-<li class="toctree-l1"><a class="reference internal" href="librispeech/lm-training.html">Train an RNN langugage model</a></li>
+<li class="toctree-l1"><a class="reference internal" href="librispeech/lm-training.html">Train an RNN language model</a></li>
 </ul>
 </div>
 </section>
@@ -108,7 +108,7 @@
 </div>
 <footer><div class="rst-footer-buttons" role="navigation" aria-label="Footer">
 <a href="../Streaming-ASR/librispeech/zipformer_transducer.html" class="btn btn-neutral float-left" title="Zipformer Transducer" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left" aria-hidden="true"></span> Previous</a>
-<a href="librispeech/lm-training.html" class="btn btn-neutral float-right" title="Train an RNN langugage model" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right" aria-hidden="true"></span></a>
+<a href="librispeech/lm-training.html" class="btn btn-neutral float-right" title="Train an RNN language model" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right" aria-hidden="true"></span></a>
 </div>
 <hr/>

View File

@@ -4,7 +4,7 @@
 <meta charset="utf-8" /><meta name="generator" content="Docutils 0.18.1: http://docutils.sourceforge.net/" />
 <meta name="viewport" content="width=device-width, initial-scale=1.0" />
-<title>Train an RNN langugage model &mdash; icefall 0.1 documentation</title>
+<title>Train an RNN language model &mdash; icefall 0.1 documentation</title>
 <link rel="stylesheet" href="../../../_static/pygments.css" type="text/css" />
 <link rel="stylesheet" href="../../../_static/css/theme.css" type="text/css" />
 <!--[if lt IE 9]>
@@ -55,7 +55,7 @@
 <li class="toctree-l2"><a class="reference internal" href="../../Non-streaming-ASR/index.html">Non Streaming ASR</a></li>
 <li class="toctree-l2"><a class="reference internal" href="../../Streaming-ASR/index.html">Streaming ASR</a></li>
 <li class="toctree-l2 current"><a class="reference internal" href="../index.html">RNN-LM</a><ul class="current">
-<li class="toctree-l3 current"><a class="current reference internal" href="#">Train an RNN langugage model</a></li>
+<li class="toctree-l3 current"><a class="current reference internal" href="#">Train an RNN language model</a></li>
 </ul>
 </li>
 </ul>
@@ -85,7 +85,7 @@
 <li><a href="../../../index.html" class="icon icon-home" aria-label="Home"></a></li>
 <li class="breadcrumb-item"><a href="../../index.html">Recipes</a></li>
 <li class="breadcrumb-item"><a href="../index.html">RNN-LM</a></li>
-<li class="breadcrumb-item active">Train an RNN langugage model</li>
+<li class="breadcrumb-item active">Train an RNN language model</li>
 <li class="wy-breadcrumbs-aside">
 <a href="https://github.com/k2-fsa/icefall/blob/master/docs/source/recipes/RNN-LM/librispeech/lm-training.rst" class="fa fa-github"> Edit on GitHub</a>
 </li>
@@ -95,8 +95,8 @@
 <div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
 <div itemprop="articleBody">
-<section id="train-an-rnn-langugage-model">
-<span id="train-nnlm"></span><h1>Train an RNN langugage model<a class="headerlink" href="#train-an-rnn-langugage-model" title="Permalink to this heading"></a></h1>
+<section id="train-an-rnn-language-model">
+<span id="train-nnlm"></span><h1>Train an RNN language model<a class="headerlink" href="#train-an-rnn-language-model" title="Permalink to this heading"></a></h1>
 <p>If you have enough text data, you can train a neural network language model (NNLM) to improve
 the WER of your E2E ASR system. This tutorial shows you how to train an RNNLM from
 scratch.</p>

View File

@@ -111,7 +111,7 @@ Currently, only speech recognition recipes are provided.</p>
 </ul>
 </li>
 <li class="toctree-l1"><a class="reference internal" href="RNN-LM/index.html">RNN-LM</a><ul>
-<li class="toctree-l2"><a class="reference internal" href="RNN-LM/librispeech/lm-training.html">Train an RNN langugage model</a></li>
+<li class="toctree-l2"><a class="reference internal" href="RNN-LM/librispeech/lm-training.html">Train an RNN language model</a></li>
 </ul>
 </li>
 </ul>

File diff suppressed because one or more lines are too long