* first commit
* replace phonemizer with g2p
* use Conformer as text encoder
* modify training script, clean up code
* rename directory
* convert text to tokens in data preparation stage
* fix tts_datamodule.py
* support onnx export and testing the exported onnx model
* add doc
* add README.md
* fix style
* add the zipformer code, copied from the branch from_dan_scaled_adam_exp1119
* support model export with torch.jit.script
* update RESULTS.md
* support exporting streaming model with torch.jit.script
* add results of streaming models, with some minor changes
* update README.md
* add CI test
* update k2 version in requirements-ci.txt
* update pyproject.toml
* add ScaledLSTM
* add RNNEncoderLayer and RNNEncoder classes in lstm.py
* add RNN and Conv2dSubsampling classes in lstm.py
* hardcode bidirectional=False
* link files from pruned_transducer_stateless2
* link scaling.py from pruned_transducer_stateless2
* copy from pruned_transducer_stateless2
* modify decode.py, pretrained.py, test_model.py, and train.py
* copy streaming decoding files from pruned_transducer_stateless2
* modify streaming decoding files
* simplify code in ScaledLSTM
* flatten weights after scaling
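cuDNN expects an RNN's weights to live in one contiguous buffer, and rescaling the individual parameters can break that layout. A minimal sketch of the idea, with a plain `nn.LSTM` standing in for `ScaledLSTM` (the 0.5 factor is purely illustrative):

```python
import torch

lstm = torch.nn.LSTM(input_size=80, hidden_size=512)

# Rescale the weights in place (a stand-in for ScaledLSTM's
# learned scales; the 0.5 factor is made up for illustration).
with torch.no_grad():
    for name, param in lstm.named_parameters():
        if name.startswith("weight"):
            param.mul_(0.5)

# Re-flatten so cuDNN sees one contiguous weight buffer again
# and does not fall back to a slower, warning-emitting path.
lstm.flatten_parameters()
```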
* pruned2 -> pruned4
* link __init__.py
* fix style
* remove add_model_arguments
* modify .flake8
* fix style
* fix scale value in scaling.py
* add random combiner for training deeper model
* support using proj_size
* add scaling converter for ScaledLSTM
* support jit trace
* support using the averaged model in export.py
* modify test_model.py to test whether the model can be successfully exported by jit.trace
* modify pretrained.py
* support streaming decoding
* fix model.py
* Add cut_id to recognition results
* do not pad in Conv subsampling module; add tail padding during decoding.
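Without padding inside the Conv subsampling module, the last few frames of an utterance would produce no output, so the decoder appends tail padding to the features instead. A sketch of the idea; the pad length and the `LOG_EPS` value are assumptions, not the recipe's exact constants:

```python
import math

import torch

LOG_EPS = math.log(1e-10)  # "silence" in the log-fbank domain (assumption)


def add_tail_padding(feature: torch.Tensor, pad_frames: int) -> torch.Tensor:
    """Append pad_frames frames of near-zero energy to a
    (num_frames, feature_dim) feature matrix so the convolutional
    subsampling still emits outputs for the final frames."""
    tail = torch.full((pad_frames, feature.size(1)), LOG_EPS)
    return torch.cat([feature, tail], dim=0)
```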
* update RESULTS.md
* minor fix
* fix doc
* update README.md
* minor change: filter out infinite losses
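A single bad utterance can yield an infinite loss that poisons the whole batch's gradient. A minimal sketch of the filtering idea, with hypothetical per-utterance losses (the recipe's actual implementation may differ):

```python
import torch

# Hypothetical per-utterance losses; one entry has blown up.
losses = torch.tensor([12.3, float("inf"), 9.8])

# Drop non-finite entries before reducing to the batch loss.
finite = torch.isfinite(losses)
loss = losses[finite].sum()
```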
* remove the condition for raising an error
* modify type hint for the return value in model.py
* minor change
* modify RESULTS.md
Co-authored-by: pkufool <wkang.pku@gmail.com>
* CTC attention model with reworked Conformer encoder and reworked Transformer decoder
* remove unnecessary func
* resolve flake8 conflicts
* fix typos and modify the expr of ScaledEmbedding
* use original beam size
* minor changes to the scripts
* add RNN LM decoding
* minor changes
* check whether the q/k/v weights are None
* style correction
* update results
* upload the RNN-LM decoding results to RESULTS.md
* Update egs/librispeech/ASR/RESULTS.md
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
* init files
* use the average value as the memory vector for each chunk
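In sketch form, one memory vector per chunk is just the mean over the chunk's time axis. Shapes follow the usual (time, batch, d_model) layout; the sizes here are illustrative:

```python
import torch

# One chunk of encoder frames: (chunk_length, batch, d_model)
chunk = torch.randn(32, 4, 512)

# Summarize the chunk as a single memory vector per batch element.
memory = chunk.mean(dim=0, keepdim=True)  # (1, batch, d_model)
```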
* change tail padding length from right_context_length to chunk_length
* correct the files, ln -> cp
* fix bug in conv_emformer_transducer_stateless2/emformer.py
* fix doc in conv_emformer_transducer_stateless/emformer.py
* refactor init states for stream
* modify .flake8
* fix bug about memory mask when memory_size==0
* add @torch.jit.export for init_states function
* update RESULTS.md
* minor change
* update README.md
* modify doc
* replace torch.div() with <<
* fix bug, >> -> <<
* use i & (i - 1) to check whether i is a power of 2
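The trick works because a positive power of 2 has exactly one bit set, so clearing its lowest set bit with `i & (i - 1)` leaves 0:

```python
def is_power_of_2(i: int) -> bool:
    """True if i is a positive power of 2: such numbers have exactly
    one bit set, so i & (i - 1) clears it and yields 0."""
    return i > 0 and (i & (i - 1)) == 0


assert is_power_of_2(8)
assert not is_power_of_2(6)
```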
* minor fix
* fix error in RESULTS.md
* copy files from existing branch
* add rule in .flake8
* minor style fix
* fix typos
* add tail padding
* refactor, use fixed-length cache for batch decoding
* copy from streaming branch
* modify Emformer states stack and unstack for streaming decoding (to be continued)
* refactor Stream class
* rename streaming_feature_extractor.py
* refactor streaming decoding
* test states stack and unstack
* fix bugs: no_grad and num_processed_frames
* add modified_beam_search and fast_beam_search
* support torch.jit.export
* use torch.div
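Python's `//` on tensors used to emit deprecation warnings and can be awkward under `torch.jit.script`; `torch.div` with an explicit rounding mode is the script-friendly spelling. The numbers below are illustrative:

```python
import torch

frames = torch.tensor([7, 9])

# Floor division that torch.jit.script handles cleanly.
subsampled = torch.div(frames, 2, rounding_mode="floor")  # tensor([3, 4])
```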
* copy from pruned_transducer_stateless4
* modify export.py
* add author info
* delete other test functions
* minor fix
* modify doc
* fix style
* minor fix doc
* minor fix
* minor fix doc
* update RESULTS.md
* fix typo
* add info
* fix typo
* fix doc
* add test function for conv module, and minor fix.
* add copyright info
* minor change of test_emformer.py
* fix doc of stack and unstack, test case with batch_size=1
* update README.md
* First upload of model averaging code.
* minor fix
* update decode file
* update .flake8
* rename pruned_transducer_stateless3 to pruned_transducer_stateless4
* change the epoch counter to start from 1 instead of 0
* minor fix of pruned_transducer_stateless4/train.py
* refactor checkpoint.py
* minor fix: update docs and count the epoch number from 1 in pruned_transducer_stateless4/decode.py
* update author info
* add docs for the scaling in the function average_checkpoints_with_averaged_model
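The scaling being documented: if each checkpoint stores the cumulative average of the model over epochs 1..n, the average over an interval (start, end] can be recovered from two such checkpoints. A per-tensor sketch of that identity (the real function applies it across all parameters; names here are illustrative):

```python
import torch


def average_over_interval(
    avg_start: torch.Tensor, start: int,
    avg_end: torch.Tensor, end: int,
) -> torch.Tensor:
    """Given cumulative averages over epochs 1..start and 1..end,
    return the average over epochs start+1..end, using
    sum(1..end) - sum(1..start) = sum(start+1..end)."""
    return (avg_end * end - avg_start * start) / (end - start)
```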
* Copy files for editing.
* Use librispeech + gigaspeech with modified conformer.
* Support specifying number of workers for on-the-fly feature extraction.
* Feature extraction code for GigaSpeech.
* Combine XL splits lazily during training.
* Fix warnings in decoding.
* Add decoding code for GigaSpeech.
* Fix decoding the GigaSpeech dataset.
We have to use the decoder/joiner networks for the GigaSpeech dataset.
* Disable speed perturbation for the XL subset.
* Compute the Nbest oracle WER for RNN-T decoding.
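The oracle WER asks: if we could always pick the best hypothesis from the n-best list, how low would the WER be? A self-contained sketch; hypotheses and the reference are plain word strings, and the helper names are illustrative:

```python
def edit_distance(a, b):
    """Classic Levenshtein distance over word lists."""
    dp = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, wb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,          # deletion
                dp[j - 1] + 1,      # insertion
                prev + (wa != wb),  # substitution (free if words match)
            )
    return dp[-1]


def oracle_wer(nbest, ref):
    """Lowest WER achievable by always picking the best hypothesis."""
    ref_words = ref.split()
    return min(edit_distance(h.split(), ref_words) for h in nbest) / len(ref_words)
```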
* Minor fixes.
* Add results.
* Update results.
* Update CI.
* Update results.
* Fix style issues.
* Update results.
* Fix style issues.
* initial commit
* support download, data prep, and fbank
* on-the-fly feature extraction by default
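On-the-fly extraction computes fbank features inside the dataloader instead of reading precomputed ones from disk. A sketch using lhotse's input-strategy mechanism; the 80-bin config mirrors common recipes, so treat the exact settings as assumptions:

```python
from lhotse import Fbank, FbankConfig
from lhotse.dataset import K2SpeechRecognitionDataset, OnTheFlyFeatures

# Compute 80-dim fbank features on the fly in the dataloader,
# rather than loading precomputed features.
dataset = K2SpeechRecognitionDataset(
    input_strategy=OnTheFlyFeatures(Fbank(FbankConfig(num_mel_bins=80))),
)
```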
* support BPE based lang
* support HLG for BPE
* small fix
* chunked feature extraction by default
* Compute features for GigaSpeech by splitting the manifest.
* Fixes after review.
* Split manifests into 2000 pieces.
* set audio duration mismatch tolerance to 0.01
* small fix
* add conformer training recipe
* Add conformer.py without pre-commit checking
* use lazy loading and SingleCutSampler
* use DynamicBucketingSampler
* use KaldifeatFbank to compute fbank for musan
* use pretrained language model and lexicon
* use the 3-gram LM to decode and the 4-gram LM to rescore
* Add decode.py
* Update .flake8
* Delete compute_fbank_gigaspeech.py
* Use BucketingSampler for valid and test dataloader
* Update params in train.py
* Use bpe_500
* update params in decode.py
* Decrease num_paths on CUDA OOM
* Added README
* Update RESULTS
* black
* Decrease num_paths on CUDA OOM
* Decode with post-processing
* Update results
* Remove lazy_load option
* Use default `storage_type`
* Keep the original tolerance
* Use split-lazy
* black
* Update pretrained model
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>