Mirror of https://github.com/k2-fsa/icefall.git (synced 2025-08-09 01:52:41 +00:00)
deploy: 3fb99400cf2c691f5c666fecd1415340820364a6
This commit is contained in:
parent c1ef2bb4d0
commit 830033a735
@@ -3,7 +3,7 @@ How to create a recipe

 .. HINT::

-  Please read :ref:`follow the code style` to adjust your code sytle.
+  Please read :ref:`follow the code style` to adjust your code style.

 .. CAUTION::

@@ -32,7 +32,7 @@ In icefall, we implement the streaming conformer the way just like what `WeNet <
 .. HINT::
   If you want to modify a non-streaming conformer recipe to support both streaming and non-streaming, please refer
   to `this pull request <https://github.com/k2-fsa/icefall/pull/454>`_. After adding the code needed by streaming training,
-  you have to re-train it with the extra arguments metioned in the docs above to get a streaming model.
+  you have to re-train it with the extra arguments mentioned in the docs above to get a streaming model.


 Streaming Emformer
@@ -584,7 +584,7 @@ The following shows two examples (for the two types of checkpoints):

 - ``beam_search`` : It implements Algorithm 1 in https://arxiv.org/pdf/1211.3711.pdf and
   `espnet/nets/beam_search_transducer.py <https://github.com/espnet/espnet/blob/master/espnet/nets/beam_search_transducer.py#L247>`_
-  is used as a reference. Basicly, it keeps topk states for each frame, and expands the kept states with their own contexts to
+  is used as a reference. Basically, it keeps topk states for each frame, and expands the kept states with their own contexts to
   next frame.

 - ``modified_beam_search`` : It implements the same algorithm as ``beam_search`` above, but it
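Note: the line fixed in this hunk describes the core of ``beam_search``: keep the top-k hypotheses (states) for each frame and expand each kept hypothesis with its own label context on the next frame. Below is a schematic Python sketch of that idea only; it is not icefall's or ESPnet's implementation, ``score_next_tokens`` is a hypothetical stand-in for one prediction-network + joiner step, and, like ``modified_beam_search``, it emits at most one symbol per frame:

    from dataclasses import dataclass, field
    from typing import List, Tuple


    @dataclass
    class Hypothesis:
        tokens: List[int] = field(default_factory=list)  # label context decoded so far
        log_prob: float = 0.0                            # accumulated log probability


    def score_next_tokens(hyp: Hypothesis, frame) -> List[Tuple[int, float]]:
        # Hypothetical scorer: return (token, log_prob) pairs for this frame given
        # the hypothesis' own context; a real transducer would run the prediction
        # network and joiner here.
        raise NotImplementedError


    def beam_search_sketch(frames, beam: int = 4, blank: int = 0) -> List[int]:
        hyps = [Hypothesis()]
        for frame in frames:
            expanded = []
            for hyp in hyps:                       # expand every kept hypothesis
                for token, lp in score_next_tokens(hyp, frame):
                    tokens = hyp.tokens if token == blank else hyp.tokens + [token]
                    expanded.append(Hypothesis(tokens, hyp.log_prob + lp))
            expanded.sort(key=lambda h: h.log_prob, reverse=True)
            hyps = expanded[:beam]                 # keep only the top-k states per frame
        return max(hyps, key=lambda h: h.log_prob).tokens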
@@ -648,7 +648,7 @@ command to extract ``model.state_dict()``.
 .. caution::

   ``--streaming-model`` and ``--causal-convolution`` require to be True to export
-  a streaming mdoel.
+  a streaming model.

 It will generate a file ``./pruned_transducer_stateless4/exp/pretrained.pt``.

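Note: the export step touched by this hunk writes ``model.state_dict()`` to ``./pruned_transducer_stateless4/exp/pretrained.pt``. A minimal sketch of loading it back follows; it assumes the weights sit under a ``"model"`` key (falling back to a bare state_dict otherwise), and ``build_model()`` is a placeholder for the recipe's own model constructor:

    import torch
    import torch.nn as nn


    def build_model() -> nn.Module:
        # Placeholder: construct the same transducer architecture used for training
        # (see the recipe's train.py); shown only to keep the sketch self-contained.
        raise NotImplementedError


    model = build_model()
    checkpoint = torch.load(
        "./pruned_transducer_stateless4/exp/pretrained.pt", map_location="cpu"
    )
    state_dict = checkpoint.get("model", checkpoint)  # the key name is an assumption
    model.load_state_dict(state_dict)
    model.eval()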
@@ -697,7 +697,7 @@ Export model using ``torch.jit.script()``
 .. caution::

   ``--streaming-model`` and ``--causal-convolution`` require to be True to export
-  a streaming mdoel.
+  a streaming model.

 It will generate a file ``cpu_jit.pt`` in the given ``exp_dir``. You can later
 load it by ``torch.jit.load("cpu_jit.pt")``.
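Note: loading the ``torch.jit.script()`` export is exactly what the doc states; how you then call the loaded module (for example its encoder) depends on the recipe's scripted interface, so only the load step is sketched here:

    import torch

    model = torch.jit.load("cpu_jit.pt")  # the file produced in exp_dir by the export
    model.eval()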
@@ -101,7 +101,7 @@
 <h1>How to create a recipe<a class="headerlink" href="#how-to-create-a-recipe" title="Permalink to this heading"></a></h1>
 <div class="admonition hint">
 <p class="admonition-title">Hint</p>
-<p>Please read <a class="reference internal" href="code-style.html#follow-the-code-style"><span class="std std-ref">Follow the code style</span></a> to adjust your code sytle.</p>
+<p>Please read <a class="reference internal" href="code-style.html#follow-the-code-style"><span class="std std-ref">Follow the code style</span></a> to adjust your code style.</p>
 </div>
 <div class="admonition caution">
 <p class="admonition-title">Caution</p>
@@ -133,7 +133,7 @@ See <a class="reference internal" href="librispeech/pruned_transducer_stateless.
 <p class="admonition-title">Hint</p>
 <p>If you want to modify a non-streaming conformer recipe to support both streaming and non-streaming, please refer
 to <a class="reference external" href="https://github.com/k2-fsa/icefall/pull/454">this pull request</a>. After adding the code needed by streaming training,
-you have to re-train it with the extra arguments metioned in the docs above to get a streaming model.</p>
+you have to re-train it with the extra arguments mentioned in the docs above to get a streaming model.</p>
 </div>
 </section>
 <section id="streaming-emformer">
@@ -652,7 +652,7 @@ can try decoding with <code class="docutils literal notranslate"><span class="pr
 of each frame as the decoding result.</p></li>
 <li><p><code class="docutils literal notranslate"><span class="pre">beam_search</span></code> : It implements Algorithm 1 in <a class="reference external" href="https://arxiv.org/pdf/1211.3711.pdf">https://arxiv.org/pdf/1211.3711.pdf</a> and
 <a class="reference external" href="https://github.com/espnet/espnet/blob/master/espnet/nets/beam_search_transducer.py#L247">espnet/nets/beam_search_transducer.py</a>
-is used as a reference. Basicly, it keeps topk states for each frame, and expands the kept states with their own contexts to
+is used as a reference. Basically, it keeps topk states for each frame, and expands the kept states with their own contexts to
 next frame.</p></li>
 <li><p><code class="docutils literal notranslate"><span class="pre">modified_beam_search</span></code> : It implements the same algorithm as <code class="docutils literal notranslate"><span class="pre">beam_search</span></code> above, but it
 runs in batch mode with <code class="docutils literal notranslate"><span class="pre">--max-sym-per-frame=1</span></code> being hardcoded.</p></li>
@@ -726,7 +726,7 @@ command to extract <code class="docutils literal notranslate"><span class="pre">
 <div class="admonition caution">
 <p class="admonition-title">Caution</p>
 <p><code class="docutils literal notranslate"><span class="pre">--streaming-model</span></code> and <code class="docutils literal notranslate"><span class="pre">--causal-convolution</span></code> require to be True to export
-a streaming mdoel.</p>
+a streaming model.</p>
 </div>
 <p>It will generate a file <code class="docutils literal notranslate"><span class="pre">./pruned_transducer_stateless4/exp/pretrained.pt</span></code>.</p>
 <div class="admonition hint">
@@ -768,7 +768,7 @@ can run:</p>
 <div class="admonition caution">
 <p class="admonition-title">Caution</p>
 <p><code class="docutils literal notranslate"><span class="pre">--streaming-model</span></code> and <code class="docutils literal notranslate"><span class="pre">--causal-convolution</span></code> require to be True to export
-a streaming mdoel.</p>
+a streaming model.</p>
 </div>
 <p>It will generate a file <code class="docutils literal notranslate"><span class="pre">cpu_jit.pt</span></code> in the given <code class="docutils literal notranslate"><span class="pre">exp_dir</span></code>. You can later
 load it by <code class="docutils literal notranslate"><span class="pre">torch.jit.load("cpu_jit.pt")</span></code>.</p>
File diff suppressed because one or more lines are too long