Mirror of https://github.com/k2-fsa/icefall.git (synced 2025-08-08 09:32:20 +00:00)
Fix doc typos for onnx export (#891)
commit e916027bfe
parent 35e5a2475c
@@ -86,4 +86,6 @@ rst_epilog = """
 .. _ncnn: https://github.com/tencent/ncnn
 .. _LibriSpeech: https://www.openslr.org/12
 .. _musan: http://www.openslr.org/17/
+.. _ONNX: https://github.com/onnx/onnx
+.. _onnxruntime: https://github.com/microsoft/onnxruntime
 """
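For context on the hunk above: ``rst_epilog`` is a standard Sphinx ``conf.py`` setting whose string is appended to the end of every reStructuredText source file, which is why defining the two new link targets there makes ``` `ONNX`_ ``` and ``` `onnxruntime`_ ``` resolvable from any page of the docs. A minimal sketch of the resulting setting (mirroring the hunk; the surrounding ``conf.py`` context is assumed, not shown in the diff):

```python
# Sketch of the Sphinx ``rst_epilog`` value after this commit: Sphinx appends
# this string to every .rst page, so the link targets below work everywhere.
rst_epilog = """
.. _ncnn: https://github.com/tencent/ncnn
.. _LibriSpeech: https://www.openslr.org/12
.. _musan: http://www.openslr.org/17/
.. _ONNX: https://github.com/onnx/onnx
.. _onnxruntime: https://github.com/microsoft/onnxruntime
"""

# Each ".. _name: url" line is an rST hyperlink target; a page can then
# reference it as `name`_ without redefining the URL locally.
```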
|
@@ -1,20 +1,21 @@
 Export to ONNX
 ==============
 
-In this section, we describe how to export the following models to ONNX.
+In this section, we describe how to export models to `ONNX`_.
 
 In each recipe, there is a file called ``export-onnx.py``, which is used
-to export trained models to ONNX.
+to export trained models to `ONNX`_.
 
 There is also a file named ``onnx_pretrained.py``, which you can use
-the exported ONNX model in Python to decode sound files.
+the exported `ONNX`_ model in Python with `onnxruntime`_ to decode sound files.
 
 Example
 =======
 
 In the following, we demonstrate how to export a streaming Zipformer pre-trained
-model from `<python3 ./python-api-examples/speech-recognition-from-microphone.py>`_
-to ONNX.
+model from
+`<https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless7-2022-11-11>`_
+to `ONNX`_.
 
 Download the pre-trained model
 ------------------------------