Fix doc typos for onnx export (#891)

Fangjun Kuang 2023-02-09 10:33:40 +08:00 committed by GitHub
parent 35e5a2475c
commit e916027bfe
2 changed files with 8 additions and 5 deletions


@@ -86,4 +86,6 @@ rst_epilog = """
 .. _ncnn: https://github.com/tencent/ncnn
 .. _LibriSpeech: https://www.openslr.org/12
 .. _musan: http://www.openslr.org/17/
+.. _ONNX: https://github.com/onnx/onnx
+.. _onnxruntime: https://github.com/microsoft/onnxruntime
 """


@@ -1,20 +1,21 @@
 Export to ONNX
 ==============
-In this section, we describe how to export the following models to ONNX.
+In this section, we describe how to export models to `ONNX`_.
 In each recipe, there is a file called ``export-onnx.py``, which is used
-to export trained models to ONNX.
+to export trained models to `ONNX`_.
 There is also a file named ``onnx_pretrained.py``, which you can use
-the exported ONNX model in Python to decode sound files.
+the exported `ONNX`_ model in Python with `onnxruntime`_ to decode sound files.
 Example
 =======
 In the following, we demonstrate how to export a streaming Zipformer pre-trained
-model from `<python3 ./python-api-examples/speech-recognition-from-microphone.py>`_
-to ONNX.
+model from
+`<https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless7-2022-11-11>`_
+to `ONNX`_.
 Download the pre-trained model
 ------------------------------