709 Commits

Author SHA1 Message Date
Fangjun Kuang
7e82f87126
Add Zipformer from Dan (#672) 2022-11-12 18:11:19 +08:00
Fangjun Kuang
e334e570d8
Filter utterances with number_tokens > number_feature_frames. (#604) 2022-11-12 07:57:58 +08:00
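Note: the filtering rule in the commit above is exactly what the title states. A minimal sketch of that criterion, assuming num_tokens and num_feature_frames are the per-utterance token and frame counts (the function name is a placeholder, not an icefall API):

```python
# Minimal sketch of the rule named in the commit title: drop any utterance
# whose token count exceeds its number of feature frames, since such an
# utterance cannot be aligned by a transducer/CTC-style model.
def keep_utterance(num_tokens: int, num_feature_frames: int) -> bool:
    return num_tokens <= num_feature_frames
```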
Zengwei Yao
32de2766d5
Refactor getting timestamps in fsa-based decoding (#660)
* refactor getting timestamps for fsa-based decoding

* fix doc

* fix bug
2022-11-05 22:36:06 +08:00
Zengwei Yao
3600ce1b5f
Apply delay penalty on transducer (#654)
* add delay penalty

* fix CI

* fix CI
2022-11-04 16:10:09 +08:00
marcoyang
2271c3d396 remove testing file 2022-11-04 12:26:38 +08:00
marcoyang
a2d7095c1c resolve conflicts 2022-11-04 11:37:42 +08:00
marcoyang
b3c61b85e3 minor fixes 2022-11-04 11:32:09 +08:00
marcoyang
bdaeaae1ae resolve conflicts 2022-11-04 11:25:10 +08:00
marcoyang
0df597291f resolve conflict with timestamp feature 2022-11-04 11:17:56 +08:00
Wei Kang
64aed2cdeb
Fix LG log file name (#657) 2022-11-03 23:12:35 +08:00
Wei Kang
163d929601
Add fast_beam_search_LG (#622)
* Add fast_beam_search_LG

* add fast_beam_search_LG to commonly used recipes

* fix ci

* fix ci

* Fix error
2022-11-03 16:29:30 +08:00
marcoyang
f45d9c4383 resolve conflicts 2022-11-03 11:12:49 +08:00
marcoyang
2a52b8c125 update docs 2022-11-03 11:10:21 +08:00
marcoyang1998
e3f218b62b
Update egs/librispeech/ASR/lstm_transducer_stateless2/decode.py
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-11-02 22:10:23 +08:00
marcoyang
fb45b95c90 minor fixes 2022-11-02 18:11:39 +08:00
marcoyang
9a01b9098d include previous added decoding method 2022-11-02 18:03:56 +08:00
marcoyang
6c8d1f9ef5 update 2022-11-02 17:48:58 +08:00
marcoyang
babcfd4b68 update author info 2022-11-02 17:27:31 +08:00
marcoyang
0a46a39e24 update decoding commands 2022-11-02 17:25:31 +08:00
marcoyang
86662f0b97 update results 2022-11-02 17:24:53 +08:00
marcoyang
63d0a52dbd support RNNLM shallow fusion in stateless5 2022-11-02 16:37:29 +08:00
marcoyang
de2f5e3e6d support RNNLM shallow fusion for LSTM transducer 2022-11-02 16:15:56 +08:00
Wei Kang
d389524d45
remove tail padding for non-streaming models (#625) 2022-11-01 11:09:56 +08:00
Zengwei Yao
03668771d7
Get timestamps during decoding (#598)
* print out timestamps during decoding

* add word-level alignments

* support to compute mean symbol delay with word-level alignments

* print variance of symbol delay

* update doc

* support to compute delay for pruned_transducer_stateless4

* fix bug

* add doc
2022-11-01 10:24:00 +08:00
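Note: the PR above reports the mean and variance of symbol delay computed from word-level alignments. A hedged sketch of that statistic, assuming delay is simply the hypothesis timestamp minus the reference timestamp for each aligned symbol (all names are placeholders):

```python
import statistics

# Hedged sketch: mean and sample variance of symbol delay, assuming
# delay = hypothesis timestamp - reference timestamp, in seconds.
# Needs at least two aligned symbols for the variance to be defined.
def symbol_delay_stats(hyp_times: list[float], ref_times: list[float]) -> tuple[float, float]:
    delays = [h - r for h, r in zip(hyp_times, ref_times)]
    return statistics.mean(delays), statistics.variance(delays)
```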
Fangjun Kuang
7f1c0e07b6
Remove onnx and onnxruntime from requirements.txt (#640)
* Remove onnx and onnxruntime from requirements.txt
2022-10-31 13:44:40 +08:00
Wei Kang
581d0361cc
Fix type hints for decode.py (#638)
* Fix type hints for decode.py

* Fix flake8
2022-10-30 16:35:30 +08:00
Nagendra Goel
6709bf1e63
Update train.py (#635)
Add the missing step to add the arguments to the parser.
2022-10-28 10:23:32 +08:00
ezerhouni
9b671e1c21
Add Shallow fusion in modified_beam_search (#630)
* Add utility for shallow fusion

* test batch size == 1 without shallow fusion

* Use shallow fusion for modified-beam-search

* Modified beam search with ngram rescoring

* Fix code according to review

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-10-21 16:44:56 +08:00
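Note: shallow fusion, as usually defined, adds a scaled external-LM log-probability to the transducer's own log-probability when scoring candidate tokens during beam search. A hedged sketch of that scoring rule (the icefall implementation differs in its details; lm_scale=0.3 is an assumption, not a value from the PR):

```python
# Hedged sketch of shallow-fusion scoring during (modified) beam search:
# the transducer's log-prob for a candidate token is combined with a scaled
# log-prob from an external neural LM.
def fused_log_prob(transducer_log_prob: float, lm_log_prob: float, lm_scale: float = 0.3) -> float:
    return transducer_log_prob + lm_scale * lm_log_prob
```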
marcoyang1998
c30b8d3a1c
fix number of parameters in RESULTS.md (#627) 2022-10-19 16:53:29 +08:00
Fangjun Kuang
d69bb826ed
Support exporting LSTM with projection to ONNX (#621)
* Support exporting LSTM with projection to ONNX

* Add missing files

* small fixes
2022-10-18 11:25:31 +08:00
Fangjun Kuang
d1f16a04bd
fix type hints for decode.py (#623) 2022-10-18 06:56:12 +08:00
Fangjun Kuang
a66e74b92f
Fix links in the doc (#619) 2022-10-14 12:23:47 +08:00
Fangjun Kuang
c39cba5191
Support exporting to ONNX for the wenetspeech recipe (#615)
* Support exporting to ONNX for the wenetspeech recipe
2022-10-13 15:17:20 +08:00
Zengwei Yao
aa58c2ee02
Modify ActivationBalancer for speed (#612)
* add a probability to apply ActivationBalancer

* minor fix

* minor fix
2022-10-13 15:14:28 +08:00
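Note: the speed-up in the PR above comes from running ActivationBalancer only on a fraction of forward passes. A rough sketch of that pattern as a generic wrapper (MaybeApply and prob=0.25 are assumptions, not icefall names or values):

```python
import random
import torch

# Rough sketch: apply a regularizing module only with some probability during
# training, and pass the input through unchanged otherwise, to save compute.
class MaybeApply(torch.nn.Module):
    def __init__(self, module: torch.nn.Module, prob: float = 0.25):
        super().__init__()
        self.module = module
        self.prob = prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training and random.random() < self.prob:
            return self.module(x)
        return x
```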
Fangjun Kuang
1c07d2fb37
Remove all-in-one for onnx export (#614)
* Remove all-in-one for onnx export

* Exit on error for CI
2022-10-12 10:34:06 +08:00
Yunusemre
f3db4ea871
exporting projection layers of joiner separately for onnx (#584)
* exporting projection layers of joiner separately for onnx
2022-10-11 18:22:28 +08:00
Zengwei Yao
f3ad32777a
Gradient filter for training lstm model (#564)
* init files

* add gradient filter module

* refactor getting median value

* add cutoff for grad filter

* delete comments

* apply gradient filter in LSTM module, to filter both input and params

* fix typing and refactor

* filter with soft mask

* rename lstm_transducer_stateless2 to lstm_transducer_stateless3

* fix typos, and update RESULTS.md

* minor fix

* fix return typing

* fix typo
2022-09-29 11:15:43 +08:00
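Note: the bullets above mention a median-based cutoff and a soft mask. The sketch below is only a rough illustration of that idea, not the icefall GradientFilter: it scales down per-sample gradients whose norm is far above the batch median (the cutoff value and all names are assumptions):

```python
import torch

# Rough illustration: down-weight gradients whose per-sample norm exceeds
# cutoff * median(norms), using a soft mask instead of a hard drop.
def soft_filter_grads(grads: torch.Tensor, cutoff: float = 10.0) -> torch.Tensor:
    norms = grads.flatten(1).norm(dim=1)                      # one norm per sample
    threshold = cutoff * norms.median()
    mask = torch.clamp(threshold / (norms + 1e-8), max=1.0)   # soft mask in (0, 1]
    return grads * mask.view(-1, *([1] * (grads.ndim - 1)))
```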
LIyong.Guo
923b60a7c6
padding zeros (#591) 2022-09-28 21:20:33 +08:00
Fangjun Kuang
099cd3a215
support exporting to ncnn format via PNNX (#571) 2022-09-20 22:52:49 +08:00
Fangjun Kuang
97b3fc53aa
Add LSTM for the multi-dataset setup. (#558)
* Add LSTM for the multi-dataset setup.

* Add results

* fix style issues

* add missing file
2022-09-16 18:40:25 +08:00
Fangjun Kuang
145c44f710
Use modified ctc topo when vocab size is > 500 (#568) 2022-09-13 10:59:27 +08:00
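Note: icefall builds its CTC topology with k2, and the commit above switches to the modified topology once the vocabulary exceeds 500 tokens. A hedged sketch of that selection (vocab_size and max_token_id are placeholders):

```python
import k2

# Hedged sketch: use the modified CTC topology for large vocabularies,
# with the > 500 threshold taken from the commit title.
vocab_size = 5000
max_token_id = vocab_size - 1
ctc_topo = k2.ctc_topo(max_token_id, modified=vocab_size > 500)
```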
Fangjun Kuang
e18fa78c3a
Check that read_manifests_if_cached returns a non-empty dict. (#555) 2022-08-28 11:50:11 +08:00
kobenaxie
235eb0746f
fix scaling converter test for decoder(predictor). (#553) 2022-08-27 17:26:21 +08:00
marcoyang1998
1e31fbcd7d
Add clamping operation in Eve optimizer for all scalar weights (#550)
to avoid unstable training in some scenarios. The clamping range is set to (-10, 2).
Note that this change may cause unexpected effects if you resume
training from a model that was trained without clamping.
2022-08-25 12:12:50 +08:00
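Note: only the (-10, 2) range comes from the commit message; the sketch below shows the general shape of clamping scalar weights after an optimizer step and is not the actual Eve code (the helper name and the 0-dim test are assumptions):

```python
import torch

# Sketch: clamp every scalar (0-dim) weight into (-10, 2) after an optimizer
# step, so that runaway scalar parameters cannot destabilize training.
def clamp_scalar_weights(model: torch.nn.Module, lo: float = -10.0, hi: float = 2.0) -> None:
    with torch.no_grad():
        for p in model.parameters():
            if p.ndim == 0:
                p.clamp_(min=lo, max=hi)
```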
Duo Ma
0967cf5b38
fixed no cut_id error in decode_dataset (#549)
* fixed import quantization is none

Signed-off-by: shanguanma <nanr9544@gmail.com>

* fixed no cut_id error in decode_dataset

Signed-off-by: shanguanma <nanr9544@gmail.com>

* fixed more than one "#"

Signed-off-by: shanguanma <nanr9544@gmail.com>

* fixed code style

Signed-off-by: shanguanma <nanr9544@gmail.com>

Signed-off-by: shanguanma <nanr9544@gmail.com>
Co-authored-by: shanguanma <nanr9544@gmail.com>
2022-08-25 10:54:21 +08:00
Yuekai Zhang
f9c3d7f92f
fix typo for export jit script (#544) 2022-08-23 17:29:42 +08:00
Duo Ma
dbd61a9db3
fixed import quantization is none (#541)
Signed-off-by: shanguanma <nanr9544@gmail.com>

Signed-off-by: shanguanma <nanr9544@gmail.com>
Co-authored-by: shanguanma <nanr9544@gmail.com>
2022-08-23 10:19:03 +08:00
Fangjun Kuang
0598291ff1
minor fixes to LSTM streaming model (#537) 2022-08-20 09:50:50 +08:00
Zengwei Yao
f2f5baf687
Use ScaledLSTM as streaming encoder (#479)
* add ScaledLSTM

* add RNNEncoderLayer and RNNEncoder classes in lstm.py

* add RNN and Conv2dSubsampling classes in lstm.py

* hardcode bidirectional=False

* link from pruned_transducer_stateless2

* link scaling.py from pruned_transducer_stateless2

* copy from pruned_transducer_stateless2

* modify decode.py pretrained.py test_model.py train.py

* copy streaming decoding files from pruned_transducer_stateless2

* modify streaming decoding files

* simplified code in ScaledLSTM

* flat weights after scaling

* pruned2 -> pruned4

* link __init__.py

* fix style

* remove add_model_arguments

* modify .flake8

* fix style

* fix scale value in scaling.py

* add random combiner for training deeper model

* add using proj_size

* add scaling converter for ScaledLSTM

* support jit trace

* add using averaged model in export.py

* modify test_model.py, test if the model can be successfully exported by jit.trace

* modify pretrained.py

* support streaming decoding

* fix model.py

* Add cut_id to recognition results

* Add cut_id to recognition results

* do not pad in Conv subsampling module; add tail padding during decoding.

* update RESULTS.md

* minor fix

* fix doc

* update README.md

* minor change, filter infinite loss

* remove the condition of raise error

* modify type hint for the return value in model.py

* minor change

* modify RESULTS.md

Co-authored-by: pkufool <wkang.pku@gmail.com>
2022-08-19 14:38:45 +08:00
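Note: the PR above (streaming ScaledLSTM encoder, proj_size, streaming decoding) boils down to an LSTM that carries its states across chunks. A rough sketch of that streaming pattern with a plain torch.nn.LSTM (all sizes are placeholders; the icefall RNNEncoder itself is more involved):

```python
import torch

# Rough sketch of streaming with an LSTM that uses a projection (proj_size):
# feed the input chunk by chunk and carry the (h, c) states across chunks so
# decoding can proceed incrementally.
lstm = torch.nn.LSTM(input_size=512, hidden_size=1024, proj_size=512, num_layers=2)
states = None
for _ in range(3):                      # three incoming chunks
    chunk = torch.randn(16, 1, 512)     # (T, N, C) for one chunk
    out, states = lstm(chunk, states)   # reuse the returned states for the next chunk
```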
marcoyang1998
c74cec59e9
propagate changes from #525 to other librispeech recipes (#531)
* propagate changes from #525 to other librispeech recipes

* refactor display_and_save_batch to utils

* fixed typo

* reformat code style
2022-08-17 17:18:15 +08:00