7 Commits

Author        SHA1        Message                              Date
Guanbo Wang   f293d4ade3  Update results                       2022-05-12 00:57:02 -04:00
Guanbo Wang   239524d384  black                                2022-05-03 22:26:46 +00:00
Guanbo Wang   c08c6ae0ec  gigaspeech decode                    2022-05-03 22:15:55 +00:00
Guanbo Wang   d6390fd107  Update params                        2022-04-21 02:30:54 -04:00
Guanbo Wang   2d07df5281  flake8                               2022-04-17 02:01:21 +00:00
Guanbo Wang   713601c624  Copy RNN-T recipe from librispeech   2022-04-17 01:52:32 +00:00
Wang, Guanbo  5fe58de43c  GigaSpeech recipe (#120)             2022-04-14 16:07:22 +08:00

Full commit message of 5fe58de43c (PR #120):
* initial commit

* support download, data prep, and fbank

* on-the-fly feature extraction by default

* support BPE based lang

* support HLG for BPE

* small fix

* small fix

* chunked feature extraction by default

* Compute features for GigaSpeech by splitting the manifest.

* Fixes after review.

* Split manifests into 2000 pieces.

* set audio duration mismatch tolerance to 0.01

* small fix

* add conformer training recipe

* Add conformer.py without pre-commit checking

* lazy loading and use SingleCutSampler

* DynamicBucketingSampler

* use KaldifeatFbank to compute fbank for musan

* use pretrained language model and lexicon

* use 3gram to decode, 4gram to rescore

* Add decode.py

* Update .flake8

* Delete compute_fbank_gigaspeech.py

* Use BucketingSampler for valid and test dataloader

* Update params in train.py

* Use bpe_500

* update params in decode.py

* Decrease num_paths while CUDA OOM

* Added README

* Update RESULTS

* black

* Decrease num_paths while CUDA OOM

* Decode with post-processing

* Update results

* Remove lazy_load option

* Use default `storage_type`

* Keep the original tolerance

* Use split-lazy

* black

* Update pretrained model

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>