deploy: 5c8e9628cc39b9fb1e471d53df9aec06b2602b97

This commit is contained in:
csukuangfj 2023-01-13 07:47:20 +00:00
parent a172087ed0
commit dcfc7c6419
14 changed files with 142 additions and 64 deletions

View File

@ -65,3 +65,43 @@ The fix is:
pip uninstall setuptools
pip install setuptools==58.0.4
ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory
--------------------------------------------------------------------------------------------
If you are using ``conda`` and encounter the following issue:
.. code-block::
Traceback (most recent call last):
File "/k2-dev/yangyifan/anaconda3/envs/icefall/lib/python3.10/site-packages/k2-1.23.3.dev20230112+cuda11.6.torch1.13.1-py3.10-linux-x86_64.egg/k2/__init__.py", line 24, in <module>
from _k2 import DeterminizeWeightPushingType
ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/k2-dev/yangyifan/icefall/egs/librispeech/ASR/./pruned_transducer_stateless7_ctc_bs/decode.py", line 104, in <module>
import k2
File "/k2-dev/yangyifan/anaconda3/envs/icefall/lib/python3.10/site-packages/k2-1.23.3.dev20230112+cuda11.6.torch1.13.1-py3.10-linux-x86_64.egg/k2/__init__.py", line 30, in <module>
raise ImportError(
ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory
Note: If you're using anaconda and importing k2 on MacOS,
you can probably fix this by setting the environment variable:
export DYLD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages:$DYLD_LIBRARY_PATH
Please first try to find where ``libpython3.10.so.1.0`` is located.
For instance,
.. code-block:: bash
cd $CONDA_PREFIX/lib
find . -name "libpython*"
If you are able to find it inside ``$CONDA_PREFIX/lib``, please set the
following environment variable:
.. code-block:: bash
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH
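After setting the environment variable, you can verify the fix with a quick import check. This is a minimal sanity check; it only confirms that the shared library can now be found:
.. code-block:: bash
python3 -c "import k2; print(k2.__file__)"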

View File

@ -1,16 +1,16 @@
Distillation with HuBERT
========================
This totorial shows you how to perform knowledge distillation in ``icefall``
with the `LibriSpeech <https://www.openslr.org/12>`_ dataset. The distillation method
used here is called "Multi Vector Quantization Knowledge Distillation" (MVQ-KD).
This tutorial shows you how to perform knowledge distillation in `icefall`_
with the `LibriSpeech`_ dataset. The distillation method
used here is called "Multi Vector Quantization Knowledge Distillation" (MVQ-KD).
Please have a look at our paper `Predicting Multi-Codebook Vector Quantization Indexes for Knowledge Distillation <https://arxiv.org/abs/2211.00508>`_
for more details about MVQ-KD.
.. note::
This tutorial is based on recipe
`pruned_transducer_stateless4 <https://github.com/k2-fsa/icefall/tree/master/egs/librispeech/ASR/pruned_transducer_stateless4>`_.
Currently, we only implement MVQ-KD in this recipe. However, MVQ-KD is theoretically applicable to all recipes
with only minor changes needed. Feel free to try out MVQ-KD in different recipes. If you
encounter any problems, please open an issue at `icefall <https://github.com/k2-fsa/icefall/issues>`_.
@ -18,7 +18,7 @@ for more details about MVQ-KD.
.. note::
We assume you have read the page :ref:`install icefall` and have set up
the environment for ``icefall``.
the environment for `icefall`_.
.. HINT::
@ -27,13 +27,13 @@ for more details about MVQ-KD.
Data preparation
----------------
We first prepare necessary training data for ``LibriSpeech``.
This is the same as in `Pruned_transducer_statelessX <./pruned_transducer_stateless.rst>`_.
We first prepare the necessary training data for `LibriSpeech`_.
This is the same as in :ref:`non_streaming_librispeech_pruned_transducer_stateless`.
.. hint::
The data preparation is the same as for other recipes on the LibriSpeech dataset;
if you have finished this step, you can skip to ``Codebook index preparation`` directly.
if you have finished this step, you can skip to :ref:`codebook_index_preparation` directly.
.. code-block:: bash
@ -61,8 +61,8 @@ For example,
.. HINT::
If you have pre-downloaded the `LibriSpeech <https://www.openslr.org/12>`_
dataset and the `musan <http://www.openslr.org/17/>`_ dataset, say,
If you have pre-downloaded the `LibriSpeech`_
dataset and the `musan`_ dataset, say,
they are saved in ``/tmp/LibriSpeech`` and ``/tmp/musan``, you can modify
the ``dl_dir`` variable in ``./prepare.sh`` to point to ``/tmp`` so that
``./prepare.sh`` won't re-download them.
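For instance, the change amounts to a one-line edit near the top of ``./prepare.sh`` (a sketch; ``dl_dir`` is the variable mentioned above):
.. code-block:: bash
dl_dir=/tmp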
@ -84,24 +84,27 @@ We provide the following YouTube video showing how to run ``./prepare.sh``.
.. youtube:: ofEIoJL-mGM
.. _codebook_index_preparation:
Codebook index preparation
--------------------------
Here, we prepare the necessary data for MVQ-KD. This requires the generation
of codebook indexes (please read our `paper <https://arxiv.org/abs/2211.00508>`_
if you are interested in details). In this tutorial, we use the pre-computed
codebook indexes for convenience. The only thing you need to do is to
run ``./distillation_with_hubert.sh``.
if you are interested in details). In this tutorial, we use the pre-computed
codebook indexes for convenience. The only thing you need to do is to
run `./distillation_with_hubert.sh <https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/distillation_with_hubert.sh>`_.
.. note::
There are 5 stages in total; the first and second stages will be automatically skipped
when you choose to download the codebook indexes prepared by `icefall`_.
Of course, you can extract and compute the codebook indexes by yourself. This
requires you to download a HuBERT-XL model, and the extraction of the codebook
indexes can take a while.
As usual, you can control the stages you want to run by specifying the following
two options:
- ``--stage``
@ -115,7 +118,7 @@ For example,
$ ./distillation_with_hubert.sh --stage 0 --stop-stage 0 # run only stage 0
$ ./distillation_with_hubert.sh --stage 2 --stop-stage 4 # run from stage 2 to stage 4
Here are a few options in ``./distillation_with_hubert.sh``
Here are a few options in `./distillation_with_hubert.sh <https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/distillation_with_hubert.sh>`_
you need to know before you proceed.
- ``--full_libri`` If True, use full 960h data. Otherwise only ``train-clean-100`` will be used
@ -126,14 +129,14 @@ Since we are using the pre-computed codebook indexes, we set
``use_extracted_codebook=True``. If you want to do full `LibriSpeech`_
experiments, please set ``full_libri=True``.
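For instance, the two variables referenced above can be set near the top of the script as follows (a sketch showing only these two lines; the values match this tutorial's setup):
.. code-block:: bash
use_extracted_codebook=True  # download the pre-computed codebook indexes
full_libri=False             # set to True for full 960h experiments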
The following command downloads the pre-computed codebook indexes
and prepares MVQ-augmented training manifests.
.. code-block:: bash
$ ./distillation_with_hubert.sh --stage 2 --stop-stage 2 # run only stage 2
Please see the
following screenshot for the output of an example execution.
.. figure:: ./images/distillation_codebook.png
@ -146,12 +149,12 @@ following screenshot for the output of an example execution.
.. hint::
The codebook indexes we prepared for you in this tutorial
are extracted from the 36-th layer of a fine-tuned HuBERT-XL model
with 8 codebooks. If you want to try other configurations, please
set ``use_extracted_codebook=False`` and set ``embedding_layer`` and
``num_codebooks`` by yourself.
Now, you should see the following files under the direcory ``./data/vq_fbank_layer36_cb8``.
Now, you should see the following files under the directory ``./data/vq_fbank_layer36_cb8``.
.. figure:: ./images/distillation_directory.png
:width: 800
@ -165,7 +168,7 @@ Voila! You are ready to perform knowledge distillation training now!
Training
--------
To perform training, please run stage 3 by executing the following command.
.. code-block:: bash
@ -176,7 +179,7 @@ Here is the code snippet for training:
.. code-block:: bash
WORLD_SIZE=$(echo ${CUDA_VISIBLE_DEVICES} | awk '{n=split($1, _, ","); print n}')
./pruned_transducer_stateless6/train.py \
--manifest-dir ./data/vq_fbank_layer36_cb8 \
--master-port 12359 \
@ -191,6 +194,7 @@ Here is the code snippet for training:
There are a few arguments in the above training command
that you should pay attention to (a combined sketch is given after the list).
- ``--enable-distillation`` If True, knowledge distillation training is enabled.
- ``--codebook-loss-scale`` The scale of the knowledge distillation loss.
- ``--manifest-dir`` The path to the MVQ-augmented manifest.
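Putting these together, a training command with distillation enabled might look like the following sketch (other required options are omitted, and the ``--codebook-loss-scale`` value is illustrative rather than a tuned recommendation):
.. code-block:: bash
./pruned_transducer_stateless6/train.py \
--manifest-dir ./data/vq_fbank_layer36_cb8 \
--enable-distillation True \
--codebook-loss-scale 0.1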
@ -204,7 +208,7 @@ the following command.
.. code-block:: bash
export CUDA_VISIBLE_DEVICES=0
./pruned_transducer_stateless6/decode.py \
--decoding-method "modified_beam_search" \
--epoch 30 \
@ -217,4 +221,3 @@ You should get similar results as `here <https://github.com/k2-fsa/icefall/blob/
That's all! Feel free to experiment with your own setups and report your results.
If you encounter any problems during training, please open up an issue `here <https://github.com/k2-fsa/icefall/issues>`_.

View File

@ -1,3 +1,5 @@
.. _non_streaming_librispeech_pruned_transducer_stateless:
Pruned transducer statelessX
============================

View File

@ -45,6 +45,7 @@
<li class="toctree-l1 current"><a class="current reference internal" href="#">Frequently Asked Questions (FAQs)</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#oserror-libtorch-hip-so-cannot-open-shared-object-file-no-such-file-or-directory">OSError: libtorch_hip.so: cannot open shared object file: no such file or directory</a></li>
<li class="toctree-l2"><a class="reference internal" href="#attributeerror-module-distutils-has-no-attribute-version">AttributeError: module distutils has no attribute version</a></li>
<li class="toctree-l2"><a class="reference internal" href="#importerror-libpython3-10-so-1-0-cannot-open-shared-object-file-no-such-file-or-directory">ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="model-export/index.html">Model export</a></li>
@ -136,6 +137,39 @@ pip<span class="w"> </span>install<span class="w"> </span><span class="nv">setup
</pre></div>
</div>
</section>
<section id="importerror-libpython3-10-so-1-0-cannot-open-shared-object-file-no-such-file-or-directory">
<h2>ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory<a class="headerlink" href="#importerror-libpython3-10-so-1-0-cannot-open-shared-object-file-no-such-file-or-directory" title="Permalink to this heading"></a></h2>
<p>If you are using <code class="docutils literal notranslate"><span class="pre">conda</span></code> and encounter the following issue:</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>Traceback (most recent call last):
File &quot;/k2-dev/yangyifan/anaconda3/envs/icefall/lib/python3.10/site-packages/k2-1.23.3.dev20230112+cuda11.6.torch1.13.1-py3.10-linux-x86_64.egg/k2/__init__.py&quot;, line 24, in &lt;module&gt;
from _k2 import DeterminizeWeightPushingType
ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File &quot;/k2-dev/yangyifan/icefall/egs/librispeech/ASR/./pruned_transducer_stateless7_ctc_bs/decode.py&quot;, line 104, in &lt;module&gt;
import k2
File &quot;/k2-dev/yangyifan/anaconda3/envs/icefall/lib/python3.10/site-packages/k2-1.23.3.dev20230112+cuda11.6.torch1.13.1-py3.10-linux-x86_64.egg/k2/__init__.py&quot;, line 30, in &lt;module&gt;
raise ImportError(
ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory
Note: If you&#39;re using anaconda and importing k2 on MacOS,
you can probably fix this by setting the environment variable:
export DYLD_LIBRARY_PATH=$CONDA_PREFIX/lib/python3.10/site-packages:$DYLD_LIBRARY_PATH
</pre></div>
</div>
<p>Please first try to find where <code class="docutils literal notranslate"><span class="pre">libpython3.10.so.1.0</span></code> is located.</p>
<p>For instance,</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">cd</span><span class="w"> </span><span class="nv">$CONDA_PREFIX</span>/lib
find<span class="w"> </span>.<span class="w"> </span>-name<span class="w"> </span><span class="s2">&quot;libpython*&quot;</span>
</pre></div>
</div>
<p>If you are able to find it inside <code class="docutils literal notranslate"><span class="pre">$CONDA_PREFIX/lib</span></code>, please set the
following environment variable:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">export</span><span class="w"> </span><span class="nv">LD_LIBRARY_PATH</span><span class="o">=</span><span class="nv">$CONDA_PREFIX</span>/lib:<span class="nv">$LD_LIBRARY_PATH</span>
</pre></div>
</div>
</section>
</section>

View File

@ -97,6 +97,7 @@ speech recognition recipes using <a class="reference external" href="https://git
<li class="toctree-l1"><a class="reference internal" href="faqs.html">Frequently Asked Questions (FAQs)</a><ul>
<li class="toctree-l2"><a class="reference internal" href="faqs.html#oserror-libtorch-hip-so-cannot-open-shared-object-file-no-such-file-or-directory">OSError: libtorch_hip.so: cannot open shared object file: no such file or directory</a></li>
<li class="toctree-l2"><a class="reference internal" href="faqs.html#attributeerror-module-distutils-has-no-attribute-version">AttributeError: module distutils has no attribute version</a></li>
<li class="toctree-l2"><a class="reference internal" href="faqs.html#importerror-libpython3-10-so-1-0-cannot-open-shared-object-file-no-such-file-or-directory">ImportError: libpython3.10.so.1.0: cannot open shared object file: No such file or directory</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="model-export/index.html">Model export</a><ul>

Binary file not shown.

View File

@ -332,10 +332,10 @@ $<span class="w"> </span>tensorboard<span class="w"> </span>dev<span class="w">
<p>Note there is a URL in the above output, click it and you will see
the following screenshot:</p>
<blockquote>
<div><figure class="align-center" id="id3">
<div><figure class="align-center" id="id4">
<a class="reference external image-reference" href="https://tensorboard.dev/experiment/WE1DocDqRRCOSAgmGyClhg/"><img alt="TensorBoard screenshot" src="../../../_images/aishell-conformer-ctc-tensorboard-log.jpg" style="width: 600px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 2 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id3" title="Permalink to this image"></a></p>
<p><span class="caption-number">Fig. 2 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id4" title="Permalink to this image"></a></p>
</figcaption>
</figure>
</div></blockquote>

View File

@ -328,10 +328,10 @@ $<span class="w"> </span>tensorboard<span class="w"> </span>dev<span class="w">
<p>Note there is a URL in the above output, click it and you will see
the following screenshot:</p>
<blockquote>
<div><figure class="align-center" id="id2">
<div><figure class="align-center" id="id3">
<a class="reference external image-reference" href="https://tensorboard.dev/experiment/LJI9MWUORLOw3jkdhxwk8A/"><img alt="TensorBoard screenshot" src="../../../_images/aishell-tdnn-lstm-ctc-tensorboard-log.jpg" style="width: 600px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 1 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id2" title="Permalink to this image"></a></p>
<p><span class="caption-number">Fig. 1 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id3" title="Permalink to this image"></a></p>
</figcaption>
</figure>
</div></blockquote>

View File

@ -340,10 +340,10 @@ $<span class="w"> </span>tensorboard<span class="w"> </span>dev<span class="w">
<p>Note there is a URL in the above output, click it and you will see
the following screenshot:</p>
<blockquote>
<div><figure class="align-center" id="id4">
<div><figure class="align-center" id="id6">
<a class="reference external image-reference" href="https://tensorboard.dev/experiment/lzGnETjwRxC3yghNMd4kPw/"><img alt="TensorBoard screenshot" src="../../../_images/librispeech-conformer-ctc-tensorboard-log.png" style="width: 600px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 4 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id4" title="Permalink to this image"></a></p>
<p><span class="caption-number">Fig. 4 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id6" title="Permalink to this image"></a></p>
</figcaption>
</figure>
</div></blockquote>

View File

@ -100,25 +100,23 @@
<section id="distillation-with-hubert">
<h1>Distillation with HuBERT<a class="headerlink" href="#distillation-with-hubert" title="Permalink to this heading"></a></h1>
<p>This totorial shows you how to perform knowledge distillation in <code class="docutils literal notranslate"><span class="pre">icefall</span></code>
<p>This tutorial shows you how to perform knowledge distillation in <a href="#id7"><span class="problematic" id="id8">`icefall`_</span></a>
with the <a class="reference external" href="https://www.openslr.org/12">LibriSpeech</a> dataset. The distillation method
used here is called “Multi Vector Quantization Knowledge Distillation” (MVQ-KD).
Please have a look at our paper <a class="reference external" href="https://arxiv.org/abs/2211.00508">Predicting Multi-Codebook Vector Quantization Indexes for Knowledge Distillation</a>
for more details about MVQ-KD.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<dl class="simple">
<dt>This tutorial is based on recipe</dt><dd><p><a class="reference external" href="https://github.com/k2-fsa/icefall/tree/master/egs/librispeech/ASR/pruned_transducer_stateless4">pruned_transducer_stateless4</a>.</p>
</dd>
</dl>
<p>Currently, we only implement MVQ-KD in this recipe. However, MVQ-KD is theoretically applicable to all recipes
<p>This tutorial is based on recipe
<a class="reference external" href="https://github.com/k2-fsa/icefall/tree/master/egs/librispeech/ASR/pruned_transducer_stateless4">pruned_transducer_stateless4</a>.
Currently, we only implement MVQ-KD in this recipe. However, MVQ-KD is theoretically applicable to all recipes
with only minor changes needed. Feel free to try out MVQ-KD in different recipes. If you
encounter any problems, please open an issue at <a class="reference external" href="https://github.com/k2-fsa/icefall/issues">icefall</a>.</p>
</div>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>We assume you have read the page <a class="reference internal" href="../../../installation/index.html#install-icefall"><span class="std std-ref">Installation</span></a> and have set up
the environment for <code class="docutils literal notranslate"><span class="pre">icefall</span></code>.</p>
the environment for <a href="#id9"><span class="problematic" id="id10">`icefall`_</span></a>.</p>
</div>
<div class="admonition hint">
<p class="admonition-title">Hint</p>
@ -126,12 +124,12 @@ the environment for <code class="docutils literal notranslate"><span class="pre"
</div>
<section id="data-preparation">
<h2>Data preparation<a class="headerlink" href="#data-preparation" title="Permalink to this heading"></a></h2>
<p>We first prepare necessary training data for <code class="docutils literal notranslate"><span class="pre">LibriSpeech</span></code>.
This is the same as in <a class="reference external" href="./pruned_transducer_stateless.rst">Pruned_transducer_statelessX</a>.</p>
<p>We first prepare the necessary training data for <a class="reference external" href="https://www.openslr.org/12">LibriSpeech</a>.
This is the same as in <a class="reference internal" href="pruned_transducer_stateless.html#non-streaming-librispeech-pruned-transducer-stateless"><span class="std std-ref">Pruned transducer statelessX</span></a>.</p>
<div class="admonition hint">
<p class="admonition-title">Hint</p>
<p>The data preparation is the same as for other recipes on the LibriSpeech dataset;
if you have finished this step, you can skip to <code class="docutils literal notranslate"><span class="pre">Codebook</span> <span class="pre">index</span> <span class="pre">preparation</span></code> directly.</p>
if you have finished this step, you can skip to <a class="reference internal" href="#codebook-index-preparation"><span class="std std-ref">Codebook index preparation</span></a> directly.</p>
</div>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>$<span class="w"> </span><span class="nb">cd</span><span class="w"> </span>egs/librispeech/ASR
$<span class="w"> </span>./prepare.sh
@ -180,16 +178,16 @@ the following YouTube channel by <a class="reference external" href="https://www
<iframe allowfullscreen="true" src="https://www.youtube.com/embed/ofEIoJL-mGM" style="border: 0; height: 345px; width: 560px">
</iframe></div></section>
<section id="codebook-index-preparation">
<h2>Codebook index preparation<a class="headerlink" href="#codebook-index-preparation" title="Permalink to this heading"></a></h2>
<span id="id1"></span><h2>Codebook index preparation<a class="headerlink" href="#codebook-index-preparation" title="Permalink to this heading"></a></h2>
<p>Here, we prepare the necessary data for MVQ-KD. This requires the generation
of codebook indexes (please read our <a class="reference external" href="https://arxiv.org/abs/2211.00508">paper</a>
if you are interested in details). In this tutorial, we use the pre-computed
codebook indexes for convenience. The only thing you need to do is to
run <code class="docutils literal notranslate"><span class="pre">./distillation_with_hubert.sh</span></code>.</p>
run <a class="reference external" href="https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/distillation_with_hubert.sh">./distillation_with_hubert.sh</a>.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>There are 5 stages in total, the first and second stage will be automatically skipped
when choosing to downloaded codebook indexes prepared by <a class="reference external" href="https://github.com/k2-fsa/icefall/issues">icefall</a>.
when choosing to downloaded codebook indexes prepared by <a href="#id11"><span class="problematic" id="id12">`icefall`_</span></a>.
Of course, you can extract and compute the codebook indexes by yourself. This
will require you downloading a HuBERT-XL model and it can take a while for
the extraction of codebook indexes.</p>
@ -208,7 +206,7 @@ $<span class="w"> </span>./distillation_with_hubert.sh<span class="w"> </span>--
$<span class="w"> </span>./distillation_with_hubert.sh<span class="w"> </span>--stage<span class="w"> </span><span class="m">2</span><span class="w"> </span>--stop-stage<span class="w"> </span><span class="m">4</span><span class="w"> </span><span class="c1"># run from stage 2 to stage 4</span>
</pre></div>
</div>
<p>Here are a few options in <code class="docutils literal notranslate"><span class="pre">./distillation_with_hubert.sh</span></code>
<p>Here are a few options in <a class="reference external" href="https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/distillation_with_hubert.sh">./distillation_with_hubert.sh</a>
you need to know before you proceed.</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">--full_libri</span></code> If True, use full 960h data. Otherwise only <code class="docutils literal notranslate"><span class="pre">train-clean-100</span></code> will be used</p></li>
@ -225,10 +223,10 @@ and prepares MVQ-augmented training manifests.</p>
</div>
<p>Please see the
following screenshot for the output of an example execution.</p>
<figure class="align-center" id="id3">
<figure class="align-center" id="id5">
<a class="reference internal image-reference" href="../../../_images/distillation_codebook.png"><img alt="Downloading codebook indexes and preparing training manifest." src="../../../_images/distillation_codebook.png" style="width: 800px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 6 </span><span class="caption-text">Downloading codebook indexes and preparing training manifest.</span><a class="headerlink" href="#id3" title="Permalink to this image"></a></p>
<p><span class="caption-number">Fig. 6 </span><span class="caption-text">Downloading codebook indexes and preparing training manifest.</span><a class="headerlink" href="#id5" title="Permalink to this image"></a></p>
</figcaption>
</figure>
<div class="admonition hint">
@ -239,11 +237,11 @@ with 8 codebooks. If you want to try other configurations, please
set <code class="docutils literal notranslate"><span class="pre">use_extracted_codebook=False</span></code> and set <code class="docutils literal notranslate"><span class="pre">embedding_layer</span></code> and
<code class="docutils literal notranslate"><span class="pre">num_codebooks</span></code> by yourself.</p>
</div>
<p>Now, you should see the following files under the direcory <code class="docutils literal notranslate"><span class="pre">./data/vq_fbank_layer36_cb8</span></code>.</p>
<figure class="align-center" id="id4">
<p>Now, you should see the following files under the directory <code class="docutils literal notranslate"><span class="pre">./data/vq_fbank_layer36_cb8</span></code>.</p>
<figure class="align-center" id="id6">
<a class="reference internal image-reference" href="../../../_images/distillation_directory.png"><img alt="MVQ-augmented training manifests" src="../../../_images/distillation_directory.png" style="width: 800px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 7 </span><span class="caption-text">MVQ-augmented training manifests.</span><a class="headerlink" href="#id4" title="Permalink to this image"></a></p>
<p><span class="caption-number">Fig. 7 </span><span class="caption-text">MVQ-augmented training manifests.</span><a class="headerlink" href="#id6" title="Permalink to this image"></a></p>
</figcaption>
</figure>
<p>Voila! You are ready to perform knowledge distillation training now!</p>

View File

@ -99,7 +99,7 @@
<div itemprop="articleBody">
<section id="pruned-transducer-statelessx">
<h1>Pruned transducer statelessX<a class="headerlink" href="#pruned-transducer-statelessx" title="Permalink to this heading"></a></h1>
<span id="non-streaming-librispeech-pruned-transducer-stateless"></span><h1>Pruned transducer statelessX<a class="headerlink" href="#pruned-transducer-statelessx" title="Permalink to this heading"></a></h1>
<p>This tutorial shows you how to run a conformer transducer model
with the <a class="reference external" href="https://www.openslr.org/12">LibriSpeech</a> dataset.</p>
<div class="admonition note">
@ -378,10 +378,10 @@ $<span class="w"> </span>tensorboard<span class="w"> </span>dev<span class="w">
<p>Note there is a URL in the above output. Click it and you will see
the following screenshot:</p>
<blockquote>
<div><figure class="align-center" id="id7">
<div><figure class="align-center" id="id9">
<a class="reference external image-reference" href="https://tensorboard.dev/experiment/QOGSPBgsR8KzcRMmie9JGw/"><img alt="TensorBoard screenshot" src="../../../_images/librispeech-pruned-transducer-tensorboard-log.jpg" style="width: 600px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 5 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id7" title="Permalink to this image"></a></p>
<p><span class="caption-number">Fig. 5 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id9" title="Permalink to this image"></a></p>
</figcaption>
</figure>
</div></blockquote>

View File

@ -377,10 +377,10 @@ $<span class="w"> </span>tensorboard<span class="w"> </span>dev<span class="w">
<p>Note there is a URL in the above output. Click it and you will see
the following screenshot:</p>
<blockquote>
<div><figure class="align-center" id="id3">
<div><figure class="align-center" id="id5">
<a class="reference external image-reference" href="https://tensorboard.dev/experiment/lzGnETjwRxC3yghNMd4kPw/"><img alt="TensorBoard screenshot" src="../../../_images/librispeech-lstm-transducer-tensorboard-log.png" style="width: 600px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 10 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id3" title="Permalink to this image"></a></p>
<p><span class="caption-number">Fig. 10 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id5" title="Permalink to this image"></a></p>
</figcaption>
</figure>
</div></blockquote>

View File

@ -397,10 +397,10 @@ $<span class="w"> </span>tensorboard<span class="w"> </span>dev<span class="w">
<p>Note there is a URL in the above output. Click it and you will see
the following screenshot:</p>
<blockquote>
<div><figure class="align-center" id="id7">
<div><figure class="align-center" id="id10">
<a class="reference external image-reference" href="https://tensorboard.dev/experiment/97VKXf80Ru61CnP2ALWZZg/"><img alt="TensorBoard screenshot" src="../../../_images/streaming-librispeech-pruned-transducer-tensorboard-log.jpg" style="width: 600px;" /></a>
<figcaption>
<p><span class="caption-number">Fig. 9 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id7" title="Permalink to this image"></a></p>
<p><span class="caption-number">Fig. 9 </span><span class="caption-text">TensorBoard screenshot.</span><a class="headerlink" href="#id10" title="Permalink to this image"></a></p>
</figcaption>
</figure>
</div></blockquote>

File diff suppressed because one or more lines are too long