* Begin to add RNN-T training for LibriSpeech.
* Copy files from conformer_ctc.
Will edit them later.
* Use a conformer/transformer model as the encoder.
* Begin to add training script.
* Add training code.
* Remove long utterances to avoid OOM when a large max_duration is used.
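As an illustration, this kind of filter can be written with lhotse's `CutSet.filter`; the 20-second threshold below is only an example, not necessarily the value used in the recipe:

```python
from lhotse import CutSet

def remove_long_utterances(cuts: CutSet, max_seconds: float = 20.0) -> CutSet:
    # Drop cuts longer than `max_seconds` so that batches built with a
    # large max_duration do not run out of GPU memory.
    return cuts.filter(lambda c: c.duration <= max_seconds)
```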
* Begin to add decoding script.
* Add decoding script.
* Minor fixes.
* Add beam search.
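The actual RNN-T beam search interleaves blank emission with time steps; the sketch below shows only the generic skeleton (keep the `beam` highest-scoring prefixes at each step), with a hypothetical `step_scores` callback standing in for the decoder and joiner:

```python
from typing import Callable, Dict, List, Tuple

def beam_search(
    step_scores: Callable[[Tuple[int, ...]], Dict[int, float]],
    eos: int,
    beam: int = 4,
    max_len: int = 100,
) -> Tuple[int, ...]:
    """step_scores(prefix) returns {token: log_prob} for the next step."""
    # Each hypothesis is (prefix, total_log_prob).
    hyps: List[Tuple[Tuple[int, ...], float]] = [((), 0.0)]
    finished: List[Tuple[Tuple[int, ...], float]] = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in hyps:
            for token, logp in step_scores(prefix).items():
                candidates.append((prefix + (token,), score + logp))
        candidates.sort(key=lambda x: x[1], reverse=True)
        hyps = []
        for prefix, score in candidates[:beam]:
            if prefix[-1] == eos:
                finished.append((prefix, score))
            else:
                hyps.append((prefix, score))
        if not hyps:
            break
    best = max(finished or hyps, key=lambda x: x[1])
    return best[0]
```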
* Use LSTM layers for the encoder.
Needs more tuning.
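A minimal sketch of such an LSTM encoder (hyper-parameters here are illustrative, not the tuned values):

```python
import torch
import torch.nn as nn

class LstmEncoder(nn.Module):
    """A plain multi-layer LSTM encoder followed by a linear projection."""

    def __init__(self, num_features: int, hidden_dim: int = 512,
                 num_layers: int = 4, output_dim: int = 512):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=num_features,
            hidden_size=hidden_dim,
            num_layers=num_layers,
            batch_first=True,
        )
        self.out = nn.Linear(hidden_dim, output_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, num_features) -> (batch, time, output_dim)
        y, _ = self.lstm(x)
        return self.out(y)
```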
* Use a stateless decoder.
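The "stateless" prediction network replaces the recurrent decoder with an embedding plus a small fixed-context convolution, so the decoder depends only on the last few labels. A sketch of the idea (the class name and the depthwise-conv detail below are illustrative, not the exact merged code):

```python
import torch
import torch.nn as nn

class StatelessDecoder(nn.Module):
    """RNN-T prediction network with no recurrence: an embedding followed
    by a causal 1-D convolution over the last `context_size` symbols."""

    def __init__(self, vocab_size: int, embed_dim: int, context_size: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Depthwise conv that sees only a fixed window of previous symbols.
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=context_size,
                              groups=embed_dim)

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, seq_len) of token IDs.
        emb = self.embedding(y).permute(0, 2, 1)               # (B, E, U)
        emb = nn.functional.pad(emb, (self.conv.kernel_size[0] - 1, 0))
        return self.conv(emb).permute(0, 2, 1)                 # (B, U, E)
```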
* Minor fixes to make it ready for merge.
* Fix README.
* Update RESULTS.md to include the RNN-T Conformer.
* Minor fixes.
* Fix tests.
* Minor fixes.
* Minor fixes.
* Fix tests.
* Apply layer normalization to the output of each gate in LSTM.
* Apply layer normalization to the output of each gate in GRU.
* Add projection support to LayerNormLSTMCell.
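A sketch of the idea behind these three commits: layer-normalize each gate inside the cell, and optionally project the hidden state in the spirit of nn.LSTM's proj_size. Below the normalization is applied to each gate's pre-activation, which is one common placement; this is not the exact code that was merged.

```python
import torch
import torch.nn as nn

class LayerNormLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int, proj_size: int = 0):
        super().__init__()
        real_hidden = proj_size if proj_size > 0 else hidden_size
        self.ih = nn.Linear(input_size, 4 * hidden_size)
        self.hh = nn.Linear(real_hidden, 4 * hidden_size)
        # One LayerNorm per gate: input, forget, cell, output.
        self.norms = nn.ModuleList(nn.LayerNorm(hidden_size) for _ in range(4))
        self.proj = nn.Linear(hidden_size, proj_size) if proj_size > 0 else None

    def forward(self, x, state):
        h, c = state  # h: (B, real_hidden), c: (B, hidden_size)
        gates = (self.ih(x) + self.hh(h)).chunk(4, dim=-1)
        i, f, g, o = (ln(gate) for ln, gate in zip(self.norms, gates))
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        if self.proj is not None:
            h = self.proj(h)
        return h, c
```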
* Add GPU tests.
* Use typeguard.check_argument_types() to validate type annotations.
* Add typeguard as a requirement.
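Usage looks like this under typeguard's pre-3.0 API (check_argument_types() was removed in typeguard 3.x); the function itself is just a made-up example:

```python
from typing import List
from typeguard import check_argument_types

def make_batches(cut_ids: List[str], batch_size: int) -> List[List[str]]:
    # Raises TypeError at call time if an argument does not match its
    # annotation; returns True otherwise.
    assert check_argument_types()
    return [cut_ids[i:i + batch_size]
            for i in range(0, len(cut_ids), batch_size)]
```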
* Minor fixes.
* Fix CI.
* Fix CI.
* Fix test failures for torch 1.8.0.
* Fix errors.
* Add MMI to AIShell.
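For context, the core of an LF-MMI loss in k2 looks roughly like the sketch below; graph construction and the handling of the denominator graph's batch dimension are elided, and all names are illustrative rather than the recipe's actual code:

```python
import k2
import torch

def lfmmi_loss(nnet_output: torch.Tensor,
               supervision_segments: torch.Tensor,
               num_graphs: k2.Fsa,
               den_graph: k2.Fsa) -> torch.Tensor:
    # nnet_output: (N, T, C) log-probs; supervision_segments: (N, 3) int32
    # CPU tensor of (sequence_index, start_frame, num_frames).
    dense_fsa_vec = k2.DenseFsaVec(nnet_output, supervision_segments)
    num_lats = k2.intersect_dense(num_graphs, dense_fsa_vec, output_beam=10.0)
    # den_graph may need to be replicated to match the batch size (elided).
    den_lats = k2.intersect_dense(den_graph, dense_fsa_vec, output_beam=10.0)
    num = num_lats.get_tot_scores(log_semiring=True, use_double_scores=True)
    den = den_lats.get_tot_scores(log_semiring=True, use_double_scores=True)
    # MMI maximizes num - den, so the loss is its negation.
    return -(num - den).sum()
```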
* Fix the MMI decoding graph.
* Export the model.
* Fix a typo.
* Fix code style.
* Fix a typo.
* Fix data preparation to use only the train text selected by utterance ID.
* Use a faster way to compute the intersection of the train set and aishell_transcript_v0.8.txt.
* Update the AIShell results.
* Update.
* Fix a typo.
* Log the IP and hostname of the machine.
We are using multiple machines to do various experiments. It makes
life easier to know which experiment is running on which machine
if we also log its IP and hostname.
* Modify label smoothing to match the one implemented in PyTorch.
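Concretely, matching PyTorch means computing cross entropy against the target distribution (1 - eps) * one_hot + eps / C * uniform, which is what torch.nn.CrossEntropyLoss(label_smoothing=eps) computes from PyTorch 1.10 on. A self-contained sketch (ignoring padding/ignore_index handling):

```python
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits: torch.Tensor, target: torch.Tensor,
                           smoothing: float = 0.1) -> torch.Tensor:
    log_probs = F.log_softmax(logits, dim=-1)
    # Negative log-likelihood of the true class.
    nll = -log_probs.gather(dim=-1, index=target.unsqueeze(-1)).squeeze(-1)
    # Cross entropy against the uniform distribution over all C classes.
    uniform = -log_probs.mean(dim=-1)
    loss = (1.0 - smoothing) * nll + smoothing * uniform
    return loss.mean()
```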
* Enable CI for torch 1.10.
* Fix CI errors.
* Fix CI installation errors.
* Fix CI installation errors.
* Minor fixes.
* Minor fixes.
* Minor fixes.
* Minor fixes.
* Minor fixes.
* Fix CI errors.
* Update RESULTS using vocab size 500 and att rate 0.8.
* Update README.
* Refactoring.
Since the FSAs in an Nbest object are linear in structure, we can
sum the scores along a path to compute its total score.
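In code, this amounts to a total score in the tropical semiring; a sketch assuming the paths are stored as a k2 FsaVec (as in `Nbest.fsa`):

```python
import torch
import k2

def nbest_total_scores(path_fsas: k2.Fsa) -> torch.Tensor:
    # Each FSA in the vector is one linear path, so its tropical-semiring
    # total score is simply the sum of its arc scores.
    return path_fsas.get_tot_scores(use_double_scores=True,
                                    log_semiring=False)
```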
* Update documentation.
* Change default vocab size from 5000 to 500.
* Add a note about the CUDA OOM error.
Some users consider this kind of OOM during decoding an error,
but it is not. This pull request clarifies that.
* Fix style issues.
* Add a Dockerfile for some users.
Ubuntu18.04-pytorch1.7.1-cuda11.0-cudnn8-python3.8
* Add a document describing how to use the Dockerfile.
It gives the steps needed to use the Dockerfile.
* Use new APIs with k2.RaggedTensor
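A few basics of the new API, for reference; the attribute spellings below are assumed from a recent k2 (at least v1.7, per the installation note that follows):

```python
import k2

# The old interface used separate classes such as k2.RaggedInt;
# the new API unifies them in k2.RaggedTensor.
r = k2.RaggedTensor([[1, 2, 3], [], [4]])
print(r.dim0)         # 3 sublists
print(r.values)       # tensor([1, 2, 3, 4]) (int32 by default)
print(r.tot_size(1))  # 4 elements in total
```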
* Fix style issues.
* Update the installation doc to say that at least k2 v1.7 is required.
* Extract framewise alignment information using CTC decoding.
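The gist: take the one-best path through a CTC decoding lattice; because the best path has one arc per frame, its labels are the framewise alignment. A sketch using k2, with the lattice construction omitted:

```python
import torch
import k2

def framewise_alignment(lattice: k2.Fsa) -> torch.Tensor:
    # One-best decoding: the best path has one arc per frame, so its
    # labels give the framewise alignment (0 is the CTC blank; the
    # trailing -1 labels on final arcs must be stripped per utterance).
    best_path = k2.shortest_path(lattice, use_double_scores=True)
    return best_path.labels
```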
* Print environment information.
Print information about k2, lhotse, PyTorch, and icefall.
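icefall collects this through a helper in icefall/utils.py; the sketch below only illustrates the idea, querying installed package versions via importlib.metadata (the exact fields icefall logs differ):

```python
import socket
import importlib.metadata

import torch

def get_env_info() -> dict:
    # Versions of the main dependencies plus the machine identity,
    # so each experiment log records where it ran.
    hostname = socket.gethostname()
    return {
        "torch-version": torch.__version__,
        "k2-version": importlib.metadata.version("k2"),
        "lhotse-version": importlib.metadata.version("lhotse"),
        "hostname": hostname,
        "IP address": socket.gethostbyname(hostname),
    }
```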
* Fix CI.
* Fix CI.
* Compute framewise alignment information of the LibriSpeech dataset.
* Update comments on the time needed to compute alignments for train-960.
* Preserve the cut ID in the cut-mixing transform.
* Minor fixes.
* Add doc about how to extract framewise alignments.
* Add CI to run pre-trained models.
* Minor fixes.
* Install kaldifeat
* Install a CPU version of PyTorch.
* Fix CI errors.
* Disable decoder layers in pretrained.py if they are not used.
* Clone pre-trained model from GitHub.
* Minor fixes.
* Minor fixes.
* Minor fixes.