Fangjun Kuang | 34e40a86b3 | Fix exporting decoder model to onnx (#1264) | 2023-09-22 09:57:15 +08:00
    * Use torch.jit.script() to export the decoder model
      See also https://github.com/k2-fsa/sherpa-onnx/issues/327

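The fix above replaces tracing with scripting when exporting the decoder. Below is a minimal, hypothetical sketch of that pattern with a recent PyTorch (a toy embedding-plus-convolution decoder, not the actual icefall model): script the module with torch.jit.script() so its Python control flow is kept, then pass the resulting ScriptModule to torch.onnx.export().

import torch
import torch.nn as nn


class Decoder(nn.Module):
    """Toy stateless transducer decoder: an embedding followed by a
    depthwise 1-D convolution over a fixed left context."""

    def __init__(self, vocab_size: int = 500, embed_dim: int = 512,
                 context_size: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, embed_dim,
                              kernel_size=context_size, groups=embed_dim)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (N, context_size) of token IDs
        embed = self.embedding(y)                # (N, context_size, embed_dim)
        out = self.conv(embed.permute(0, 2, 1))  # (N, embed_dim, 1)
        return out.permute(0, 2, 1)              # (N, 1, embed_dim)


decoder = Decoder().eval()

# torch.jit.script() preserves Python control flow that torch.jit.trace()
# would silently specialize to the example input, which is why the commit
# switches the decoder export to scripting.
scripted = torch.jit.script(decoder)

y = torch.zeros(1, 2, dtype=torch.int64)
torch.onnx.export(
    scripted,
    (y,),
    "decoder.onnx",
    opset_version=13,
    input_names=["y"],
    output_names=["decoder_out"],
    dynamic_axes={"y": {0: "N"}, "decoder_out": {0: "N"}},
)
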
zr_jin | a81396b482 | Use tokens.txt to replace bpe.model (#1162) | 2023-08-12 16:53:59 +08:00

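With the change above, decoding needs only a plain tokens.txt symbol table instead of the SentencePiece bpe.model. A minimal sketch, assuming the usual "<token> <id>" one-pair-per-line format; the file path and helper names are made up for illustration.

from typing import Dict, List


def load_tokens(filename: str) -> Dict[int, str]:
    """Read tokens.txt into an id -> token map."""
    id2token: Dict[int, str] = {}
    with open(filename, encoding="utf-8") as f:
        for line in f:
            token, idx = line.split()
            id2token[int(idx)] = token
    return id2token


def ids_to_text(ids: List[int], id2token: Dict[int, str]) -> str:
    """Map token IDs back to text; '▁' marks a word boundary in BPE pieces."""
    text = "".join(id2token[i] for i in ids)
    return text.replace("▁", " ").strip()


# Hypothetical usage:
# id2token = load_tokens("data/lang_bpe_500/tokens.txt")
# print(ids_to_text(hyp_token_ids, id2token))
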
Fangjun Kuang | a632b24c35 | Export int8 quantized models for non-streaming Zipformer. (#977) | 2023-03-31 22:46:19 +08:00
    * Export int8 quantized models for non-streaming Zipformer.
    * Delete export-onnx.py
    * Export int8 models for other folders

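A minimal sketch of one common way to produce such int8 models: dynamic weight quantization with onnxruntime, applied to already-exported float32 files. The filenames below are placeholders, not the actual exported names.

from onnxruntime.quantization import QuantType, quantize_dynamic

for name in ["encoder", "decoder", "joiner"]:
    quantize_dynamic(
        model_input=f"{name}.onnx",
        model_output=f"{name}.int8.onnx",
        weight_type=QuantType.QInt8,  # store weights as signed int8
    )
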
Fangjun Kuang | 8c3ea93fc8 | Save meta data to exported ONNX models (#968) | 2023-03-27 11:39:29 +08:00

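ONNX models can carry arbitrary key/value metadata in metadata_props. A minimal sketch of that mechanism; the keys shown are illustrative, not necessarily the ones the commit stores.

import onnx


def add_meta_data(filename: str, meta_data: dict) -> None:
    """Store each (key, value) pair in the model's metadata_props."""
    model = onnx.load(filename)
    for key, value in meta_data.items():
        entry = model.metadata_props.add()
        entry.key = key
        entry.value = str(value)
    onnx.save(model, filename)


# Hypothetical usage after torch.onnx.export():
# add_meta_data("decoder.onnx", {"model_type": "zipformer", "context_size": 2})
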
Fangjun Kuang | 7ae03f6c88 | Add onnx export support for pruned_transducer_stateless5 (#883) | 2023-02-07 17:47:08 +08:00

Fangjun Kuang | 8d3810e289 | Simplify ONNX export (#881) | 2023-02-07 15:01:59 +08:00
    * Simplify ONNX export
    * Fix ONNX CI tests