* WIP: Support exporting to ONNX format
* Minor fixes.
* Combine encoder/decoder/joiner into a single file.
* Revert merging three onnx models into a single one.
It is quite time-consuming to extract a sub-graph from the combined model;
for instance, extracting the encoder model alone takes more than one hour.
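
As a rough illustration of the sub-graph extraction mentioned above, ONNX ships `onnx.utils.extract_model` for this purpose; the file and tensor names below are hypothetical, not the ones used in the recipe.

```python
import onnx.utils

# Hedged sketch of extracting the encoder sub-graph from a combined model.
# The paths and tensor names are hypothetical; extraction has to walk the
# whole combined graph, which is what made it slow in practice.
onnx.utils.extract_model(
    input_path="all-in-one.onnx",
    output_path="encoder.onnx",
    input_names=["x", "x_lens"],
    output_names=["encoder_out", "encoder_out_lens"],
)
```
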
* Update CI to test ONNX models.
* Decode with exported models.
* Fix typos.
* Add more doc.
* Remove ncnn as it is not fully tested yet.
* Fix as_strided for streaming conformer.
* Add pruned-rnnt5 recipe for WenetSpeech.
* Fix style issues.
* Fix style issues.
* Add streaming conformer.
* Add streaming decoding.
* Change code for fast_beam_search and export CPU jit models.
* Add modified-beam-search for streaming decoding.
* Add modified-beam-search for streaming decoding.
* Update streaming_beam_search.py.
* Add README.md and RESULTS.md.
* Update style_check.yml.
* Minor changes.
* Update export.py.
* Add example decoding commands for usage.
* Add streaming results to README.md.
* Support streaming in conformer.
* Add more documentation.
* Support streaming in pruned_transducer_stateless2; add delay penalty; fix decode states.
* Minor fixes
* Add streaming support for pruned_transducer_stateless4.
* Fix conv cache error, support async streaming decoding
* Fix style
* Fix style
* Fix style
* Add torch.jit.export
* Mask the initial cache.
* Cut off invalid frames of the encoder_embed output.
* Fix relative positional encoding in streaming decoding to save computation.
* Minor fixes
* Minor fixes
* Minor fixes
* Minor fixes
* Minor fixes
* Fix jit export for torch 1.6
* Minor fixes for streaming decoding
* Minor fixes on decode stream
* Move model parameters to train.py.
* Make the states argument optional in the streaming forward pass.
* Update the pretrained script to support the streaming model.
* Update RESULTS.md.
* Update tensorboard logs and pre-trained models.
* Fix typo.
* Fix tests
* Remove unused arguments.
* Add CI for streaming decoding.
* Minor fix
* Minor fix
* Disable right context by default.
* Add fast_beam_search_nbest.
* Fix CI errors.
* Fix CI errors.
* More fixes.
* Small fixes.
* Support using log_add in LG decoding with fast_beam_search.
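
For illustration only (not the k2/icefall code): `log_add` combines two path scores by summing the underlying probabilities in the log domain, whereas the default keeps only the better path.

```python
import torch

# Combining two log-domain path scores: max keeps the better path
# (Viterbi-style), while log_add sums the probabilities.
a = torch.tensor(-1.2)
b = torch.tensor(-0.7)
viterbi_score = torch.max(a, b)        # -0.7
log_add_score = torch.logaddexp(a, b)  # log(exp(-1.2) + exp(-0.7)) ≈ -0.23
```
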
* Support LG decoding in pruned_transducer_stateless
* Support LG for pruned_transducer_stateless2.
* Support LG for fast beam search.
* Minor fixes.
* Use jsonl for cutsets in the librispeech recipe.
* Use lazy cutset for all recipes.
* More fixes to use lazy CutSet.
* Remove force=True from logging to support Python < 3.8
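
A minimal sketch of the workaround, assuming the goal is the same behavior without `force=True` (which requires Python >= 3.8): remove any pre-existing handlers before calling `basicConfig`.

```python
import logging

# logging.basicConfig(..., force=True) needs Python >= 3.8, so on older
# versions existing handlers are removed manually before configuring.
for handler in logging.root.handlers[:]:
    logging.root.removeHandler(handler)
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
```
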
* Minor fixes.
* Fix style issues.
* Add pruned-rnnt2 recipe for the AliMeeting dataset.
* Update code for merging.
* Change LilcomHdf5Writer to ChunkedLilcomHdf5Writer.
* Update test.yml.
* Update test.yml.
* Update test.yml.
* Update the workflow YAML files.
* Update YAML files.
* Update YAML files.
* Update README.md.
* Update YAML files.
* Resolve merge conflicts.
* Resolve merge conflicts.
* Copy files for editing.
* Add random combine from #229.
* Minor fixes.
* Pass model parameters from the command line.
* Fix warnings.
* Fix warnings.
* Update readme.
* Rename to avoid conflicts.
* Update results.
* Add CI for pruned_transducer_stateless5
* Typo fixes.
* Remove random combiner.
* Update decode.py and train.py to use periodically averaged models.
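
A minimal sketch of what averaging periodically saved checkpoints can look like; the checkpoint layout (a "model" state_dict) and the function name are assumptions, not the actual scripts.

```python
import torch

def average_checkpoints(filenames):
    # Hypothetical sketch: average parameters across several checkpoints,
    # assuming each file stores a dict with a "model" state_dict.
    avg = None
    for f in filenames:
        state = torch.load(f, map_location="cpu")["model"]
        if avg is None:
            avg = {k: v.detach().clone().float() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(filenames) for k, v in avg.items()}
```
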
* Minor fixes.
* Revert to use random combiner.
* Update results.
* Minor fixes.
* Copy files for editing.
* Use librispeech + gigaspeech with modified conformer.
* Support specifying number of workers for on-the-fly feature extraction.
* Feature extraction code for GigaSpeech.
* Combine XL splits lazily during training.
* Fix warnings in decoding.
* Add decoding code for GigaSpeech.
* Fix decoding the gigaspeech dataset.
We have to use the decoder/joiner networks for the GigaSpeech dataset.
* Disable speed perturbation for the XL subset.
* Compute the Nbest oracle WER for RNN-T decoding.
* Minor fixes.
* Minor fixes.
* Add results.
* Update results.
* Update CI.
* Update results.
* Fix style issues.
* Update results.
* Fix style issues.
* Add modified beam search for pruned rnn-t.
* Fix style issues.
* Update RESULTS.md.
* Fix typos.
* Minor fixes.
* Test the pre-trained model using GitHub actions.
* Let the user install optimized_transducer on her own.
* Fix errors in GitHub CI.
* Add modified transducer for aishell.
* Minor fixes.
* Add extra data in transducer training.
The extra data is from http://www.openslr.org/62/
* Update export.py and pretrained.py
* Update CI to install pretrained models with aishell.
* Update results.
* Update results.
* Update README.
* Use symlinks to avoid copies.
* Begin to use multiple datasets.
* Finish preparing training datasets.
* Minor fixes
* Copy files.
* Finish training code.
* Display losses for gigaspeech and librispeech separately.
* Fix decode.py
* Make the probability of selecting a batch from GigaSpeech configurable.
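
A minimal sketch, under assumed names, of how a configurable sampling probability between the two corpora can be used in a training loop; this is not the actual dataloader code.

```python
import random

def next_batch(libri_iter, giga_iter, giga_prob: float = 0.5):
    # Hypothetical sketch: draw the next batch from GigaSpeech with
    # probability `giga_prob`, otherwise from LibriSpeech.
    if random.random() < giga_prob:
        return next(giga_iter), "gigaspeech"
    return next(libri_iter), "librispeech"
```
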
* Update results.
* Minor fixes.
* Disable weight decay.
* Remove input feature batchnorm.
* Replace BatchNorm in the Conformer model with LayerNorm.
* Use tanh in the joint network.
* Remove sos ID.
* Reduce the number of decoder layers from 4 to 2.
* Minor fixes.
* Fix typos.
* Begin to add RNN-T training for librispeech.
* Copy files from conformer_ctc.
Will edit them later.
* Use conformer/transformer model as encoder.
* Begin to add training script.
* Add training code.
* Remove long utterances to avoid OOM when a large max_duration is used.
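
A hedged sketch of the filtering step using lhotse's `CutSet.filter`; the manifest path and the 20-second threshold are assumptions, not the recipe's actual values.

```python
from lhotse import CutSet

# Drop overly long cuts before training to avoid OOM with a large
# --max-duration. The path and the 20.0 s threshold are hypothetical.
cuts = CutSet.from_file("cuts_train.jsonl.gz")
cuts = cuts.filter(lambda c: c.duration <= 20.0)
```
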
* Begin to add decoding script.
* Add decoding script.
* Minor fixes.
* Add beam search.
* Use LSTM layers for the encoder.
Needs more tuning.
* Use stateless decoder.
* Minor fixes to make it ready for merge.
* Fix README.
* Update RESULTS.md to include RNN-T Conformer.
* Minor fixes.
* Fix tests.
* Minor fixes.
* Minor fixes.
* Fix tests.
* Apply layer normalization to the output of each gate in LSTM.
* Apply layer normalization to the output of each gate in GRU.
* Add projection support to LayerNormLSTMCell.
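
A hedged sketch of the LSTM-cell changes described in the three items above: per-gate LayerNorm plus an optional output projection. It normalizes the gate pre-activations, which is one common placement; the referenced code may instead normalize the gate outputs.

```python
import torch
import torch.nn as nn

class LayerNormLSTMCell(nn.Module):
    # Hypothetical sketch, not the actual implementation referenced above.
    def __init__(self, input_size: int, hidden_size: int, proj_size: int = 0):
        super().__init__()
        out_size = proj_size if proj_size > 0 else hidden_size
        self.input_proj = nn.Linear(input_size, 4 * hidden_size, bias=True)
        self.hidden_proj = nn.Linear(out_size, 4 * hidden_size, bias=False)
        # One LayerNorm per gate: input, forget, cell, output.
        self.norms = nn.ModuleList([nn.LayerNorm(hidden_size) for _ in range(4)])
        self.out_proj = (
            nn.Linear(hidden_size, proj_size, bias=False) if proj_size > 0 else None
        )

    def forward(self, x, state):
        h, c = state
        gates = self.input_proj(x) + self.hidden_proj(h)
        i, f, g, o = gates.chunk(4, dim=-1)
        i = torch.sigmoid(self.norms[0](i))
        f = torch.sigmoid(self.norms[1](f))
        g = torch.tanh(self.norms[2](g))
        o = torch.sigmoid(self.norms[3](o))
        c = f * c + i * g
        h = o * torch.tanh(c)
        if self.out_proj is not None:
            h = self.out_proj(h)  # optional projection of the hidden state
        return h, c
```
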
* Add GPU tests.
* Use typeguard.check_argument_types() to validate type annotations.
* Add typeguard as a requirement.
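
A minimal example of the typeguard 2.x pattern referred to above; the function itself is just a placeholder.

```python
from typeguard import check_argument_types

def set_num_workers(num_workers: int) -> int:
    # typeguard 2.x style: raises TypeError at runtime if the call does not
    # match the annotations. The function here is only an example.
    assert check_argument_types()
    return num_workers
```
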
* Minor fixes.
* Fix CI.
* Fix CI.
* Fix test failures for torch 1.8.0
* Fix errors.
* Modify label smoothing to match the one implemented in PyTorch.
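
A hedged sketch of uniform label smoothing in the same form as `torch.nn.CrossEntropyLoss(label_smoothing=...)`; it omits details such as ignore_index handling and is not the exact code changed here.

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, smoothing: float = 0.1):
    # (1 - smoothing) * NLL of the target + smoothing * mean NLL over all
    # classes, matching PyTorch's uniform label-smoothing formulation.
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(dim=-1, index=targets.unsqueeze(-1)).squeeze(-1)
    smooth = -log_probs.mean(dim=-1)
    return ((1.0 - smoothing) * nll + smoothing * smooth).mean()
```
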
* Enable CI for torch 1.10
* Fix CI errors.
* Fix CI installation errors.
* Fix CI installation errors.
* Minor fixes.
* Minor fixes.
* Minor fixes.
* Minor fixes.
* Minor fixes.
* Fix CI errors.
* Use new APIs with k2.RaggedTensor
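
A small example of the unified k2.RaggedTensor API (k2 >= v1.7, per the installation note below); exact attributes may differ slightly across versions.

```python
import k2

r = k2.RaggedTensor([[1, 2], [3]])
print(r.values)       # the flattened values as a torch.Tensor
print(r.tot_size(1))  # total number of elements on axis 1, i.e. 3
```
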
* Fix style issues.
* Update the installation doc to state that at least k2 v1.7 is required.
* Extract framewise alignment information using CTC decoding.
* Print environment information.
Print information about k2, lhotse, PyTorch, and icefall.
* Fix CI.
* Fix CI.
* Compute framewise alignment information of the LibriSpeech dataset.
* Update comments for the time to compute alignments of train-960.
* Preserve cut IDs in the mix cut transformer.
* Minor fixes.
* Add doc about how to extract framewise alignments.
* Add CI to run pre-trained models.
* Minor fixes.
* Install kaldifeat
* Install a CPU version of PyTorch.
* Fix CI errors.
* Disable decoder layers in pretrained.py if they are not used.
* Clone pre-trained model from GitHub.
* Minor fixes.
* Minor fixes.
* Minor fixes.
* Refactor decode.py to make it more readable and more modular.
* Fix an error.
Nbest.fsa should always have token IDs as labels and
word IDs as aux_labels.
* Add nbest decoding.
* Compute edit distance with k2.
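
For reference, a pure-Python Levenshtein distance showing the quantity being computed; k2 itself computes it with Levenshtein graphs/alignments (see the k2 v1.9 items below), not with this DP loop.

```python
def edit_distance(ref, hyp):
    # Reference DP implementation, for illustration only.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # deletion
                dp[j - 1] + 1,    # insertion
                prev + (r != h),  # substitution (or match)
            )
    return dp[len(hyp)]
```
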
* Refactor nbest-oracle.
* Add rescore with nbest lists.
* Add whole-lattice rescoring.
* Add rescoring with attention decoder.
* Refactoring.
* Fixes after refactoring.
* Fix a typo.
* Minor fixes.
* Replace [] with () for shapes.
* Use k2 v1.9
* Use Levenshtein graphs/alignment from k2 v1.9
* [doc] Require k2 >= v1.9
* Minor fixes.