* Copy files for editing.
* Add random combine from #229.
* Minor fixes.
* Pass model parameters from the command line.
* Fix warnings.
* Fix warnings.
* Update readme.
* Rename to avoid conflicts.
* Update results.
* Add CI for pruned_transducer_stateless5.
* Typo fixes.
* Remove random combiner.
* Update decode.py and train.py to use periodically averaged models.
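For reference, model averaging of this kind boils down to averaging the parameter tensors of several saved checkpoints. A minimal sketch, assuming checkpoints that store the model state dict under a `"model"` key (the paths and filename pattern are illustrative, not the recipe's actual layout):

```python
from pathlib import Path

import torch


def average_checkpoints(filenames):
    """Average model parameters over several checkpoint files."""
    avg = torch.load(filenames[0], map_location="cpu")["model"]
    for f in filenames[1:]:
        state = torch.load(f, map_location="cpu")["model"]
        for k in avg:
            avg[k] += state[k]
    n = len(filenames)
    for k in avg:
        # Integer tensors (e.g. counters) get floor division.
        avg[k] = avg[k] / n if avg[k].is_floating_point() else avg[k] // n
    return avg


# Hypothetical usage: average the last 5 periodic checkpoints.
ckpts = sorted(str(p) for p in Path("exp").glob("checkpoint-*.pt"))
model_state = average_checkpoints(ckpts[-5:])
```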
* Minor fixes.
* Revert to using the random combiner.
* Update results.
* Minor fixes.
* Copy files for editing.
* Use LibriSpeech + GigaSpeech with a modified Conformer.
* Support specifying number of workers for on-the-fly feature extraction.
* Add feature extraction code for GigaSpeech.
* Combine XL splits lazily during training.
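Roughly, lazy combination with lhotse means opening each split without materializing it and chaining the resulting cut sets. A hedged sketch (the directory and file pattern are illustrative):

```python
from pathlib import Path

import lhotse
from lhotse import CutSet

# Open every XL split lazily and chain them, so no split is ever
# fully loaded into memory. Paths below are illustrative.
paths = sorted(Path("data/fbank").glob("cuts_XL_split_*.jsonl.gz"))
cuts = lhotse.combine(CutSet.from_jsonl_lazy(str(p)) for p in paths)
```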
* Fix warnings in decoding.
* Add decoding code for GigaSpeech.
* Fix decoding for the GigaSpeech dataset.
We have to use the decoder/joiner networks for the GigaSpeech dataset.
* Disable speed perturbation for the XL subset.
* Compute the Nbest oracle WER for RNN-T decoding.
* Minor fixes.
* Minor fixes.
* Add results.
* Update results.
* Update CI.
* Update results.
* Fix style issues.
* Update results.
* Fix style issues.
* Add modified beam search for pruned RNN-T.
* Fix style issues.
* Update RESULTS.md.
* Fix typos.
* Minor fixes.
* Test the pre-trained model using GitHub actions.
* Let the user install optimized_transducer on her own.
* Fix errors in GitHub CI.
* Add modified transducer for aishell.
* Minor fixes.
* Add extra data for transducer training.
The extra data is from http://www.openslr.org/62/
* Update export.py and pretrained.py.
* Update CI to install pre-trained models for aishell.
* Update results.
* Update results.
* Update README.
* Use symlinks to avoid copies.
* Begin to use multiple datasets.
* Finish preparing training datasets.
* Minor fixes.
* Copy files.
* Finish training code.
* Display losses for gigaspeech and librispeech separately.
* Fix decode.py.
* Make the probability to select a batch from GigaSpeech configurable.
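A minimal sketch of what such probabilistic selection amounts to (the generator below and its `giga_prob` argument are illustrative, not the recipe's actual dataloader code):

```python
import random


def mixed_batches(libri_dl, giga_dl, giga_prob=0.5, seed=42):
    """Yield (batch, source) pairs, drawing from GigaSpeech with
    probability giga_prob and from LibriSpeech otherwise.
    Stops when either loader is exhausted."""
    rng = random.Random(seed)
    libri_it, giga_it = iter(libri_dl), iter(giga_dl)
    while True:
        use_giga = rng.random() < giga_prob
        try:
            yield next(giga_it if use_giga else libri_it), (
                "giga" if use_giga else "libri"
            )
        except StopIteration:
            return
```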
* Update results.
* Minor fixes.
* Disable weight decay.
* Remove the input feature BatchNorm.
* Replace BatchNorm in the Conformer model with LayerNorm.
* Use tanh in the joint network.
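For context, the joint network in a transducer combines encoder and decoder outputs and applies a nonlinearity before the output projection. A sketch with tanh (shapes and names are illustrative, not necessarily the recipe's exact module):

```python
import torch
import torch.nn as nn


class Joiner(nn.Module):
    """Minimal RNN-T joiner sketch using a tanh activation."""

    def __init__(self, input_dim: int, output_dim: int):
        super().__init__()
        self.output_linear = nn.Linear(input_dim, output_dim)

    def forward(self, encoder_out: torch.Tensor, decoder_out: torch.Tensor):
        # encoder_out: (N, T, 1, C); decoder_out: (N, 1, U, C).
        # Broadcasting yields a (N, T, U, C) lattice of combinations.
        logit = torch.tanh(encoder_out + decoder_out)
        return self.output_linear(logit)  # (N, T, U, vocab_size)
```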
* Remove the SOS ID.
* Reduce the number of decoder layers from 4 to 2.
* Minor fixes.
* Fix typos.
* Begin to add RNN-T training for LibriSpeech.
* Copy files from conformer_ctc.
Will edit them later.
* Use conformer/transformer model as encoder.
* Begin to add training script.
* Add training code.
* Remove long utterances to avoid OOM when a large max_duration is used.
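With lhotse this is a one-line filter over the training cuts; a sketch, with an illustrative 20-second threshold and manifest path:

```python
from lhotse import load_manifest

cuts = load_manifest("data/fbank/cuts_train.jsonl.gz")  # illustrative path

# Very long cuts dominate a batch when max_duration is large and can
# trigger OOM, so drop everything above the chosen threshold.
cuts = cuts.filter(lambda c: c.duration <= 20.0)
```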
* Begin to add decoding script.
* Add decoding script.
* Minor fixes.
* Add beam search.
* Use LSTM layers for the encoder.
Needs more tuning.
* Use stateless decoder.
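A "stateless" decoder keeps no recurrent state: the prediction network sees only a fixed, small left context of previous tokens (an embedding plus a narrow convolution) rather than an LSTM. A sketch of the idea, with illustrative dimensions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StatelessDecoder(nn.Module):
    """RNN-T prediction network sketch with a fixed left context."""

    def __init__(self, vocab_size: int, embed_dim: int, context_size: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Depthwise conv over the last `context_size` tokens.
        self.conv = nn.Conv1d(
            embed_dim, embed_dim, kernel_size=context_size, groups=embed_dim
        )
        self.context_size = context_size

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (N, U) token IDs -> (N, U, C) decoder output.
        emb = self.embedding(y).permute(0, 2, 1)  # (N, C, U)
        emb = F.pad(emb, (self.context_size - 1, 0))  # causal left pad
        return self.conv(emb).permute(0, 2, 1)
```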
* Minor fixes to make it ready for merge.
* Fix README.
* Update RESULTS.md to include RNN-T Conformer.
* Minor fixes.
* Fix tests.
* Minor fixes.
* Minor fixes.
* Fix tests.
* Apply layer normalization to the output of each gate in LSTM.
* Apply layer normalization to the output of each gate in GRU.
* Add projection support to LayerNormLSTMCell.
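Schematically, these cells compute the usual recurrent gates but pass each gate through a LayerNorm. A sketch of one common placement (normalizing the four gate pre-activations; the actual cell, including the added projection, is more involved):

```python
import torch
import torch.nn as nn


class LayerNormLSTMCell(nn.Module):
    """LSTM cell sketch with LayerNorm on each gate's pre-activation."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.ih = nn.Linear(input_size, 4 * hidden_size)
        self.hh = nn.Linear(hidden_size, 4 * hidden_size, bias=False)
        self.norms = nn.ModuleList(nn.LayerNorm(hidden_size) for _ in range(4))

    def forward(self, x, state):
        h, c = state
        gates = (self.ih(x) + self.hh(h)).chunk(4, dim=-1)
        i, f, g, o = (norm(gate) for norm, gate in zip(self.norms, gates))
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c
```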
* Add GPU tests.
* Use typeguard.check_argument_types() to validate type annotations.
* Add typeguard as a requirement.
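Usage is a one-line assertion at the top of each annotated function; `check_argument_types()` (the typeguard 2.x API) returns True when the arguments match their annotations and raises TypeError otherwise. The function below is a made-up example:

```python
from typeguard import check_argument_types


def subsample(frames: list, factor: int = 4) -> list:
    # Free when the types match; raises TypeError when they do not.
    assert check_argument_types()
    return frames[::factor]


subsample([1, 2, 3, 4, 5, 6, 7, 8])  # OK
# subsample("oops")                  # TypeError: str is not a list
```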
* Minor fixes.
* Fix CI.
* Fix CI.
* Fix test failures for torch 1.8.0.
* Fix errors.
* Modify label smoothing to match the one implemented in PyTorch.
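PyTorch's formulation spreads the smoothing mass ε uniformly over all K classes (the target keeps probability 1 - ε + ε/K), rather than over only the K - 1 non-target classes. A sketch of a matching loss, with padding/masking omitted for brevity:

```python
import torch
import torch.nn.functional as F


def smoothed_cross_entropy(logits, targets, epsilon=0.1):
    """(1 - eps) * NLL + eps * (mean over classes of -log p),
    i.e. the eps mass is spread over *all* classes, target included,
    matching PyTorch's label-smoothing convention."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)  # sum over classes, divided by K
    return ((1 - epsilon) * nll + epsilon * uniform).mean()
```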
* Enable CI for torch 1.10.
* Fix CI errors.
* Fix CI installation errors.
* Fix CI installation errors.
* Minor fixes.
* Minor fixes.
* Minor fixes.
* Minor fixes.
* Minor fixes.
* Fix CI errors.
* Use new APIs with k2.RaggedTensor.
* Fix style issues.
* Update the installation doc to state that at least k2 v1.7 is required.
* Extract framewise alignment information using CTC decoding.
* Print environment information.
Print information about k2, lhotse, PyTorch, and icefall.
* Fix CI.
* Fix CI.
* Compute framewise alignment information of the LibriSpeech dataset.
* Update comments on the time needed to compute alignments for train-960.
* Preserve the cut ID in the cut-mixing transform.
* Minor fixes.
* Add doc about how to extract framewise alignments.
* Add CI to run pre-trained models.
* Minor fixes.
* Install kaldifeat.
* Install a CPU version of PyTorch.
* Fix CI errors.
* Disable decoder layers in pretrained.py if they are not used.
* Clone pre-trained model from GitHub.
* Minor fixes.
* Minor fixes.
* Minor fixes.
* Refactor decode.py to make it more readable and more modular.
* Fix an error.
Nbest.fsa should always have token IDs as labels and
word IDs as aux_labels.
* Add nbest decoding.
* Compute edit distance with k2.
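k2 batches this on the GPU via Levenshtein FSAs and intersection (`k2.levenshtein_graph` / `k2.levenshtein_alignment`, available from k2 v1.9; see the entries further down). As a pure-Python reference for the quantity being computed, the classic single-row DP:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance via the classic single-row DP; the k2
    version computes the same quantity by FSA intersection on GPU."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # row 0: distance from the empty ref prefix
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]  # dp[i-1][j]
            dp[j] = min(
                dp[j] + 1,                           # deletion
                dp[j - 1] + 1,                       # insertion
                prev + (ref[i - 1] != hyp[j - 1]),   # substitution/match
            )
            prev = cur
    return dp[n]


assert edit_distance("kitten", "sitting") == 3
```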
* Refactor nbest-oracle.
* Add rescore with nbest lists.
* Add whole-lattice rescoring.
* Add rescoring with attention decoder.
* Refactoring.
* Fixes after refactoring.
* Fix a typo.
* Minor fixes.
* Replace [] with () for shapes.
* Use k2 v1.9.
* Use Levenshtein graphs/alignment from k2 v1.9.
* [doc] Require k2 >= v1.9.
* Minor fixes.
* Add recipe for the yes_no dataset.
* Refactoring: Remove unused code.
* Add Colab notebook for the yesno dataset.
* Add GitHub actions to run yesno.
* Fix a typo.
* Minor fixes.
* Train more epochs for GitHub actions.
* Minor fixes.
* Minor fixes.
* Fix style issues.