* initial commit for zipformer tedlium
* fix unk decoding
* add pretrained model and logs
* update for new AsrModel
* add option for choosing rnnt type
* add results with modified rnnt
* copy files
* update train.py
* small fixes
* Add decode.py
* Fix dataloader in decode.py
* add blank penalty
* Add blank penalty to other decoding methods
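For context, the blank penalty is a constant subtracted from the blank logit before the search step, discouraging blank emissions. A minimal sketch, assuming the blank token has id 0 as in these recipes:

```python
import torch

def penalize_blank(logits: torch.Tensor, blank_penalty: float) -> torch.Tensor:
    # Subtract a constant from the blank logit (id 0 assumed) before the
    # arg-max / log-softmax, making blank emissions less likely.
    if blank_penalty != 0:
        logits = logits.clone()
        logits[..., 0] -= blank_penalty
    return logits
```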
* Minor fixes
* add zipformer2 recipe
* Minor fixes
* Remove pruned7
* export and test models
* Replace bpe with tokens in export.py and pretrain.py
* Minor fixes
* Minor fixes
* Minor fixes
* Fix export
* Update results
* Fix zipformer-ctc
* Fix ci
* Fix ci
* Fix CI
* Fix CI
---------
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
* add CTC loss option in zipformer recipe
* add ctc_decode.py
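As background, the simplest CTC decoding method collapses repeated frame-level labels and removes blanks; a self-contained sketch of that greedy pass (not necessarily the exact method used in ctc_decode.py):

```python
import torch

def ctc_greedy_search(log_probs: torch.Tensor, blank_id: int = 0) -> list:
    # log_probs: (T, vocab_size) frame-level CTC log-probabilities.
    best = log_probs.argmax(dim=-1).tolist()
    hyp, prev = [], blank_id
    for token in best:
        # Keep a token only if it is not blank and not a repeat of the
        # previous frame's label (standard CTC collapsing rule).
        if token != blank_id and token != prev:
            hyp.append(token)
        prev = token
    return hyp
```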
* support CTC model export; add jit_pretrained_ctc.py and pretrained_ctc.py
* update README.md and RESULTS.md
* add CI test
* copy files from zipformer librispeech
* Add byte bpe training for aishell
* compile LG graph
* Support LG decoding
* Minor fixes
* apply black formatting
* Minor fixes
* export & fix pretrain.py
* fix black formatting issues
* Update RESULTS.md
* Fix export.py
* support transformer LM
* show number of parameters during training
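Counting parameters is a one-liner over the module's parameter tensors; a minimal sketch of the usual pattern:

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Sum the element counts of all parameter tensors in the model.
    return sum(p.numel() for p in model.parameters())
```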
* update docstring
* add testing files for perplexity (ppl) calculation
* add LM wrapper for RNN and transformer LMs
* apply lm wrapper in lm shallow fusion
* small updates
* update decode.py to support LM fusion and LODR
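Conceptually, both methods adjust each hypothesis score during beam search: shallow fusion adds a scaled external-LM log-probability, and LODR additionally subtracts a scaled low-order n-gram score that approximates the transducer's internal LM. A sketch with illustrative names and scale values:

```python
def combine_scores(
    transducer_logprob: float,
    neural_lm_logprob: float,
    bigram_logprob: float,
    lm_scale: float = 0.3,
    lodr_scale: float = 0.1,
) -> float:
    # Shallow fusion: add a scaled external-LM score.
    # LODR: subtract a scaled low-order n-gram score (internal-LM correction).
    return (
        transducer_logprob
        + lm_scale * neural_lm_logprob
        - lodr_scale * bigram_logprob
    )
```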
* add export.py
* update CI and workflow
* update decoding results
* fix CI
* remove transformer LM from CI test
* shuffled full/partial librispeech data
* fixed code style issues
* Shuffled full librispeech data off-line
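The off-line shuffle amounts to loading the cut manifest, shuffling it once, and writing it back so the training dataloader reads pre-shuffled data. A sketch using lhotse, with hypothetical paths:

```python
from lhotse import CutSet

# Hypothetical input/output paths; shuffle once and persist the result.
cuts = CutSet.from_file("data/fbank/librispeech_cuts_train.jsonl.gz")
cuts = cuts.to_eager().shuffle()
cuts.to_file("data/fbank/librispeech_cuts_train-shuf.jsonl.gz")
```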
* Fixed style, addressed comments, and removed redundant code
* Used the suggested version of black
* Propagated the changes to other folders for librispeech (except conformer_mmi and streaming_conformer_ctc)
* print out timestamps during decoding
* add word-level alignments
* support to compute mean symbol delay with word-level alignments
* print variance of symbol delay
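The delay statistics are simple first and second moments of the per-symbol differences between emission times and reference alignment times; a sketch with hypothetical inputs:

```python
import statistics
from typing import List, Tuple

def symbol_delay_stats(hyp_times: List[float], ref_times: List[float]) -> Tuple[float, float]:
    # hyp_times: emission time of each symbol during decoding (seconds).
    # ref_times: time of the same symbol in the word-level reference alignment.
    delays = [h - r for h, r in zip(hyp_times, ref_times)]
    return statistics.mean(delays), statistics.variance(delays)
```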
* update doc
* support to compute delay for pruned_transducer_stateless4
* fix bug
* add doc
* Add utility for shallow fusion
* test batch size == 1 without shallow fusion
* Use shallow fusion for modified-beam-search
* Modified beam search with ngram rescoring
* Fix code according to review
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
unstable training in some scenarios. The clamping range is set to (-10, 2).
Note that this change may have unexpected effects if you resume
training from a model that was trained without clamping.
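A minimal sketch of what such clamping might look like, assuming the scale is stored in log space (the exact parameter being clamped is not named in this log):

```python
import torch

def scaled_weight(weight: torch.Tensor, log_scale: torch.Tensor) -> torch.Tensor:
    # Clamping the log-scale to (-10, 2) bounds the effective multiplier
    # to roughly [4.5e-5, 7.4], preventing exploding or vanishing scales.
    return weight * torch.exp(torch.clamp(log_scale, min=-10.0, max=2.0))
```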
* add ScaledLSTM
* add RNNEncoderLayer and RNNEncoder classes in lstm.py
* add RNN and Conv2dSubsampling classes in lstm.py
* hardcode bidirectional=False
* link from pruned_transducer_stateless2
* link scaling.py from pruned_transducer_stateless2
* copy from pruned_transducer_stateless2
* modify decode.py, pretrained.py, test_model.py, and train.py
* copy streaming decoding files from pruned_transducer_stateless2
* modify streaming decoding files
* simplified code in ScaledLSTM
* flat weights after scaling
* pruned2 -> pruned4
* link __init__.py
* fix style
* remove add_model_arguments
* modify .flake8
* fix style
* fix scale value in scaling.py
* add random combiner for training deeper models
* add support for proj_size
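PyTorch's nn.LSTM supports projected outputs via proj_size (which must be smaller than hidden_size), shrinking the per-step output and hidden state; a sketch with illustrative sizes:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=512, hidden_size=1024, proj_size=512, num_layers=2)
x = torch.randn(100, 8, 512)   # (seq_len, batch, input_size)
out, (h, c) = lstm(x)
print(out.shape)               # (100, 8, 512): last dim is proj_size, not hidden_size
```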
* add scaling converter for ScaledLSTM
* support jit trace
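A generic torch.jit.trace export sketch; the actual script traces the real encoder/decoder modules, so the module and shapes here are stand-ins:

```python
import torch
import torch.nn as nn

# Stand-in module and input shapes (batch, time, feature).
model = nn.Sequential(nn.Linear(80, 256), nn.ReLU(), nn.Linear(256, 80)).eval()
example = torch.randn(1, 100, 80)
traced = torch.jit.trace(model, example)   # record the forward pass as a graph
traced.save("jit_trace.pt")
```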
* support using the averaged model in export.py
* modify test_model.py to check that the model can be exported via jit.trace
* modify pretrained.py
* support streaming decoding
* fix model.py
* Add cut_id to recognition results
* do not pad in Conv subsampling module; add tail padding during decoding.
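Without padding inside the Conv subsampling module, the last frames of an utterance would otherwise never be emitted, hence the tail padding at decode time; a sketch with illustrative length and pad value:

```python
import torch

def add_tail_padding(
    features: torch.Tensor, tail_frames: int = 30, pad_value: float = -23.0
) -> torch.Tensor:
    # features: (batch, time, feature_dim); pad_value approximates log(1e-10),
    # i.e. near-silence in log-mel features (illustrative choice).
    pad = features.new_full((features.size(0), tail_frames, features.size(2)), pad_value)
    return torch.cat([features, pad], dim=1)
```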
* update RESULTS.md
* minor fix
* fix doc
* update README.md
* minor change, filter infinite loss
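Filtering amounts to masking non-finite per-utterance losses before the batch reduction, so a single bad utterance cannot derail a training step; a minimal sketch:

```python
import torch

def finite_loss_sum(per_utt_loss: torch.Tensor) -> torch.Tensor:
    # per_utt_loss: (batch,) unreduced loss values; drop inf/nan entries.
    return per_utt_loss[torch.isfinite(per_utt_loss)].sum()
```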
* remove the condition that raises an error
* modify type hint for the return value in model.py
* minor change
* modify RESULTS.md
Co-authored-by: pkufool <wkang.pku@gmail.com>