27 Commits

Author SHA1 Message Date
Fangjun Kuang
6dc2e04462
Update results. (#340)
* Update results.

* Typo fixes.
2022-04-29 15:49:45 +08:00
Fangjun Kuang
ac84220de9
Modified conformer with multi datasets (#312)
* Copy files for editing.

* Use librispeech + gigaspeech with modified conformer.

* Support specifying number of workers for on-the-fly feature extraction.

* Feature extraction code for GigaSpeech.

* Combine XL splits lazily during training.

* Fix warnings in decoding.

* Add decoding code for GigaSpeech.

* Fix decoding for the GigaSpeech dataset.

We have to use the decoder/joiner networks for the GigaSpeech dataset.

* Disable speed perturb for the XL subset.

* Compute the Nbest oracle WER for RNN-T decoding.

* Minor fixes.

* Minor fixes.

* Add results.

* Update results.

* Update CI.

* Update results.

* Fix style issues.

* Update results.

* Fix style issues.
2022-04-29 15:40:30 +08:00
Mingshuang Luo
118e195004
Update results for tedlium3 pruned RNN-T (#307)
* Update README.md
2022-04-11 22:19:26 +08:00
Mingshuang Luo
8cb727e24a
Tedlium3 pruned transducer stateless (#261)
* update tedlium3 pruned transducer stateless code

* update README.md

* update README.md

* add fast beam search for decoding

* make a change to RESULTS.md

* make a change to RESULTS.md

* apply a fix

* make some changes to pruned RNN-T
2022-04-11 17:08:53 +08:00
Fangjun Kuang
bb7f6ed6b7
Add modified beam search for pruned rnn-t. (#248)
* Add modified beam search for pruned rnn-t.

* Fix style issues.

* Update RESULTS.md.

* Fix typos.

* Minor fixes.

* Test the pre-trained model using GitHub actions.

* Let the user install optimized_transducer on her own.

* Fix errors in GitHub CI.
2022-03-12 16:16:55 +08:00
Fangjun Kuang
50d2281524
Add modified transducer loss for AIShell dataset (#219)
* Add modified transducer for aishell.

* Minor fixes.

* Add extra data in transducer training.

The extra data is from http://www.openslr.org/62/

* Update export.py and pretrained.py

* Update CI to install pretrained models with aishell.

* Update results.

* Update results.

* Update README.

* Use symlinks to avoid copies.
2022-03-02 16:02:38 +08:00
Fangjun Kuang
05cb297858
Update result for full libri + GigaSpeech using transducer_stateless. (#231) 2022-03-01 17:01:46 +08:00
Fangjun Kuang
72f838dee1
Update results for transducer_stateless after training for more epochs. (#207) 2022-03-01 16:35:02 +08:00
PF Luo
ac7c2d84bc
minor fix for aishell recipe (#223)
* just remove unnecessary torch.sum

* minor fixes for aishell
2022-02-23 08:33:20 +08:00
PF Luo
277cc3f9bf
update aishell-1 recipe with k2.rnnt_loss (#215)
* update aishell-1 recipe with k2.rnnt_loss

* fix flake8 style

* typo

* add pretrained model link to RESULTS.md
2022-02-19 15:56:39 +08:00
Fangjun Kuang
a8150021e0
Use modified transducer loss in training. (#179)
* Use modified transducer loss in training.

* Minor fix.

* Add modified beam search.

* Add modified beam search.

* Minor fixes.

* Fix typo.

* Update RESULTS.

* Fix a typo.

* Minor fixes.
2022-02-07 18:37:36 +08:00
Fangjun Kuang
f94ff19bfe
Refactor beam search and update results. (#177) 2022-01-18 16:40:19 +08:00
Fangjun Kuang
4c1b3665ee
Use optimized_transducer to compute transducer loss. (#162)
* WIP: Use optimized_transducer to compute transducer loss.

* Minor fixes.

* Fix decoding.

* Fix decoding.

* Add RESULTS.

* Update RESULTS.

* Update CI.

* Fix sampling rate for yesno recipe.
2022-01-10 11:54:58 +08:00
pingfengluo
ea8af0ee9a
add transducer_stateless with char unit to AIShell (#164) 2022-01-01 18:32:08 +08:00
Fangjun Kuang
14c93add50
Remove batchnorm, weight decay, and SOS from transducer conformer encoder (#155)
* Remove batchnorm, weight decay, and SOS.

* Make --context-size configurable.

* Update results.
2021-12-27 16:01:10 +08:00
Fangjun Kuang
5b6699a835
Minor fixes to the RNN-T Conformer model (#152)
* Disable weight decay.

* Remove input feature batchnorm.

* Replace BatchNorm in the Conformer model with LayerNorm.

* Use tanh in the joint network.

* Remove sos ID.

* Reduce the number of decoder layers from 4 to 2.

* Minor fixes.

* Fix typos.
2021-12-23 13:54:25 +08:00
Fangjun Kuang
fb6a57e9e0
Increase the size of the context in the RNN-T decoder. (#153) 2021-12-23 07:55:02 +08:00
Fangjun Kuang
1d44da845b
RNN-T Conformer training for LibriSpeech (#143)
* Begin to add RNN-T training for librispeech.

* Copy files from conformer_ctc.

Will edit it.

* Use conformer/transformer model as encoder.

* Begin to add training script.

* Add training code.

* Remove long utterances to avoid OOM when a large max_duration is used.

* Begin to add decoding script.

* Add decoding script.

* Minor fixes.

* Add beam search.

* Use LSTM layers for the encoder.

Needs more tuning.

* Use stateless decoder.

* Minor fixes to make it ready for merge.

* Fix README.

* Update RESULTS.md to include RNN-T Conformer.

* Minor fixes.

* Fix tests.

* Minor fixes.

* Minor fixes.

* Fix tests.
2021-12-18 07:42:51 +08:00
Wei Kang
4151cca147
Add torch script support for Aishell and update documents (#124)
* Add aishell recipe

* Remove unnecessary code and update docs

* adapt to k2 v1.7, add docs and results

* Update conformer ctc model

* Update docs, pretrained.py & results

* Fix code style

* Fix code style

* Fix code style

* Minor fix

* Minor fix

* Fix pretrained.py

* Update pretrained model & corresponding docs

* Export torch script model for Aishell

* Add C++ deployment docs

* Minor fixes

* Fix unit test

* Update Readme
2021-11-19 16:37:05 +08:00
Mingshuang Luo
2e0f255ada
Add timit recipe (including the code scripts and the docs) for icefall (#114)
* add timit recipe for icefall

* add shared file

* update the docs for timit recipe

* Delete shared

* update the timit recipe and check style

* Update model.py

* Make some changes

* Update model.py

* Update model.py

* Add README.md and RESULTS.md

* Update RESULTS.md

* Update README.md

* update the docs for timit recipe
2021-11-17 11:23:45 +08:00
Fangjun Kuang
21096e99d8
Update result for the librispeech recipe using vocab size 500 and att rate 0.8 (#113)
* Update RESULTS using vocab size 500, att rate 0.8

* Update README.

* Refactoring.

Since FSAs in an Nbest object are linear in structure, we can
sum the arc scores along each path to compute its total score.

* Update documentation.

* Change default vocab size from 5000 to 500.
2021-11-10 14:32:52 +08:00
Fangjun Kuang
beb54ddb61
Support torch script. (#65)
* WIP: Support torchscript.

* Minor fixes.

* Fix style issues.

* Add documentation about how to deploy a trained model.
2021-10-12 14:55:05 +08:00
Fangjun Kuang
96e7f5c7ea
Release v0.1 (#26) 2021-08-24 21:30:30 +08:00
Fangjun Kuang
6c2c9b9d74
Add recipe for the yes_no dataset. (#16)
* Add recipe for the yes_no dataset.

* Refactoring: Remove unused code.

* Add Colab notebook for the yesno dataset.

* Add GitHub actions to run yesno.

* Fix a typo.

* Minor fixes.

* Train more epochs for GitHub actions.

* Minor fixes.

* Minor fixes.

* Fix style issues.
2021-08-23 11:36:29 +08:00
Fangjun Kuang
0b656e4e1c
Add a link to Colab. (#14)
It demonstrates the usage of pre-trained models.
2021-08-20 15:43:25 +08:00
Fangjun Kuang
12a2fd023e
Add doc about installation and usage (#7)
* Add readme.

* Add TOC.

* fix typos

* Minor fixes after review.
2021-08-12 12:44:04 +08:00
Fangjun Kuang
0d16431766 First commit. 2021-07-15 17:35:54 +08:00