* Add utility for shallow fusion
* test batch size == 1 without shallow fusion
* Use shallow fusion for modified-beam-search
* Modified beam search with ngram rescoring
* Fix code according to review
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
unstable training in some scenarios. The clamping range is set to (-10, 2).
Note that this change may have an unexpected effect if you resume
training from a model that was trained without clamping.
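A minimal sketch of the clamping idea described above, assuming a learnable log-scale parameter; the names `log_scale` and `get_scale` are illustrative, not the actual code:

```python
import torch

# Hypothetical illustration: clamp a learnable log-scale to (-10, 2) before
# exponentiating it, so the effective scale can neither explode nor vanish
# and training stays stable.
log_scale = torch.nn.Parameter(torch.zeros(1))

def get_scale() -> torch.Tensor:
    return log_scale.clamp(min=-10.0, max=2.0).exp()
```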
* add ScaledLSTM
* add RNNEncoderLayer and RNNEncoder classes in lstm.py
* add RNN and Conv2dSubsampling classes in lstm.py
* hardcode bidirectional=False
* link from pruned_transducer_stateless2
* link scaling.py from pruned_transducer_stateless2
* copy from pruned_transducer_stateless2
* modify decode.py pretrained.py test_model.py train.py
* copy streaming decoding files from pruned_transducer_stateless2
* modify streaming decoding files
* simplified code in ScaledLSTM
* flatten weights after scaling
* pruned2 -> pruned4
* link __init__.py
* fix style
* remove add_model_arguments
* modify .flake8
* fix style
* fix scale value in scaling.py
* add random combiner for training deeper models
* support using proj_size
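The `proj_size` option refers to PyTorch's LSTM projection; a small, self-contained example of its effect (the sizes below are made up):

```python
import torch

# With proj_size > 0, nn.LSTM projects the hidden state from hidden_size
# down to proj_size, so the output feature dimension becomes proj_size.
lstm = torch.nn.LSTM(input_size=512, hidden_size=1024, proj_size=512)
x = torch.randn(100, 8, 512)     # (T, N, input_size)
y, (h, c) = lstm(x)
assert y.shape == (100, 8, 512)  # output dim is proj_size, not hidden_size
```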
* add scaling converter for ScaledLSTM
* support jit trace
* support using the averaged model in export.py
* modify test_model.py to test whether the model can be successfully exported by jit.trace
* modify pretrained.py
* support streaming decoding
* fix model.py
* Add cut_id to recognition results
* do not pad in Conv subsampling module; add tail padding during decoding.
* update RESULTS.md
* minor fix
* fix doc
* update README.md
* minor change, filter infinite loss
* remove the condition for raising an error
* modify type hint for the return value in model.py
* minor change
* modify RESULTS.md
Co-authored-by: pkufool <wkang.pku@gmail.com>
* Sort results to make it more convenient to compare decoding results
* Add cut_id to recognition results
* add cut_id to results for all recipes
* Fix torch.jit.script
* Fix comments
* Minor fixes
* Fix torch.jit.tracing for PyTorch versions before v1.9.0
* WIP: Support exporting to ONNX format
* Minor fixes.
* Combine encoder/decoder/joiner into a single file.
* Revert merging three onnx models into a single one.
It's quite time-consuming to extract a sub-graph from the combined
model. For instance, it takes more than one hour to extract
the encoder model.
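For reference, a hedged sketch of the kind of sub-graph extraction that turned out to be slow; the file names and tensor names below are placeholders, only `onnx.utils.extract_model` itself is a real ONNX API:

```python
import onnx.utils

# Extract the encoder sub-graph from a hypothetical combined model.
# "all_in_one.onnx" and the tensor names are made up for illustration;
# walking the combined graph like this is what took over an hour.
onnx.utils.extract_model(
    "all_in_one.onnx",
    "encoder.onnx",
    input_names=["x", "x_lens"],
    output_names=["encoder_out", "encoder_out_lens"],
)
```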
* Update CI to test ONNX models.
* Decode with exported models.
* Fix typos.
* Add more doc.
* Remove ncnn as it is not fully tested yet.
* Fix as_strided for streaming conformer.
* add stats about duration and padding proportion
* add stats for utt_duration
* add stats for other recipes
* add stats for other 2 recipes
* modify doc
* minor change
* CTC attention model with reworked conformer encoder and reworked transformer decoder
* remove unnecessary func
* resolve flake8 conflicts
* fix typos and modify the expr of ScaledEmbedding
* use original beam size
* minor changes to the scripts
* add rnn lm decoding
* minor changes
* check whether q k v weight is None
* style correction
* update results
* upload the decoding results of rnn-lm to RESULTS.md
* Update egs/librispeech/ASR/RESULTS.md
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
* init files
* use average value as memory vector for each chunk
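A minimal sketch of the averaging idea in the entry above; the shapes and the helper name are illustrative, not the actual implementation:

```python
import torch

def chunk_memory(x: torch.Tensor, chunk_length: int) -> torch.Tensor:
    """x: (T, D) frames of one utterance; returns (num_chunks, D) memory
    vectors, where each memory vector is the mean of its chunk's frames."""
    T, D = x.shape
    num_chunks = T // chunk_length
    chunks = x[: num_chunks * chunk_length].reshape(num_chunks, chunk_length, D)
    return chunks.mean(dim=1)
```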
* change tail padding length from right_context_length to chunk_length
* correct the files, ln -> cp
* fix bug in conv_emformer_transducer_stateless2/emformer.py
* fix doc in conv_emformer_transducer_stateless/emformer.py
* refactor init states for stream
* modify .flake8
* fix bug about memory mask when memory_size==0
* add @torch.jit.export for init_states function
* update RESULTS.md
* minor change
* update README.md
* modify doc
* replace torch.div() with <<
* fix bug, >> -> <<
* use i & (i-1) to check whether it is a power of 2
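For clarity, the bit trick referred to above, shown in plain Python rather than the repository's actual code:

```python
def is_power_of_two(i: int) -> bool:
    # A positive integer has exactly one bit set iff it is a power of 2;
    # i & (i - 1) clears the lowest set bit, so the result is 0 in that case.
    return i > 0 and (i & (i - 1)) == 0

assert is_power_of_two(8) and not is_power_of_two(12)
```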
* minor fix
* fix error in RESULTS.md
* support streaming in conformer
* Add more documents
* support streaming on pruned_transducer_stateless2; add delay penalty; fixes for decode states
* Minor fixes
* streaming for pruned_transducer_stateless4
* Fix conv cache error, support async streaming decoding
* Fix style
* Add torch.jit.export
* mask the initial cache
* Cut off invalid frames of encoder_embed output
* fix relative positional encoding in streaming decoding to save computation
* Minor fixes
* Fix jit export for torch 1.6
* Minor fixes for streaming decoding
* Minor fixes on decode stream
* move model parameters to train.py
* make states in forward streaming optional
* update pretrain to support streaming model
* update results.md
* update tensorboard and pretrained models
* fix typo
* Fix tests
* remove unused arguments
* add streaming decoding ci
* Minor fix
* disable right context by default
* Add fast_beam_search_nbest.
* Fix CI errors.
* More fixes.
* Small fixes.
* Support using log_add in LG decoding with fast_beam_search.
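A toy illustration of the difference between max and log-add when merging the scores of two paths that reach the same state; the numbers are arbitrary:

```python
import math

log_p1, log_p2 = -1.2, -1.5
max_score = max(log_p1, log_p2)                                # Viterbi-style merge
log_add_score = math.log(math.exp(log_p1) + math.exp(log_p2))  # log-add merge
# log-add keeps probability mass from both paths, so it is never smaller.
assert log_add_score >= max_score
```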
* Support LG decoding in pruned_transducer_stateless
* Support LG for pruned_transducer_stateless2.
* Support LG for fast beam search.
* Minor fixes.