## Results
### GigaSpeech BPE training results (Conformer-CTC)
#### 2022-04-06
The best WER for GigaSpeech, as of 2022-04-06, is shown below.

Results using HLG decoding + n-gram LM rescoring + attention decoder rescoring:

|     | Dev   | Test  |
|-----|-------|-------|
| WER | 10.47 | 10.58 |
Scale values used in n-gram LM rescoring and attention rescoring for the best WERs are:

| ngram_lm_scale | attention_scale |
|----------------|-----------------|
| 0.5            | 1.3             |
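
Roughly speaking, these two scales weight the n-gram LM score and the attention-decoder score when they are added to the acoustic score of each candidate path during n-best rescoring. The sketch below only illustrates that combination; the function and variable names are illustrative, not icefall's actual API:

```
# A sketch of how ngram_lm_scale and attention_scale enter the per-path
# total score in n-best rescoring. Names are illustrative only, not
# icefall's actual API.
def total_score(am_score: float,
                ngram_lm_score: float,
                attention_score: float,
                ngram_lm_scale: float = 0.5,
                attention_scale: float = 1.3) -> float:
    # The path (out of --num-paths candidates) with the highest total
    # score is chosen as the final decoding output.
    return (am_score
            + ngram_lm_scale * ngram_lm_score
            + attention_scale * attention_score)
```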
To reproduce the above result, use the following commands for training:
```
cd egs/gigaspeech/ASR
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
./conformer_ctc/train.py \
  --max-duration 120 \
  --num-workers 1 \
  --world-size 8 \
  --exp-dir conformer_ctc/exp_500 \
  --lang-dir data/lang_bpe_500
```
and the following command for decoding:
```
./conformer_ctc/decode.py \
  --epoch 18 \
  --avg 6 \
  --method attention-decoder \
  --num-paths 1000 \
  --exp-dir conformer_ctc/exp_500 \
  --lang-dir data/lang_bpe_500 \
  --max-duration 20 \
  --num-workers 1
```
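
Here `--epoch 18 --avg 6` means the model used for decoding is obtained by averaging the weights of the last 6 saved epochs (epoch 13 through epoch 18). icefall has its own checkpoint-averaging utility; the sketch below only illustrates the idea of element-wise averaging, and the file layout and `"model"` key are assumptions:

```
# Illustration of the checkpoint averaging implied by --epoch 18 --avg 6:
# the model weights of epoch-13.pt .. epoch-18.pt are averaged element-wise.
# File names and the "model" key are assumptions; icefall provides its own
# averaging utility, so treat this as a sketch only.
import torch

def average_checkpoints(filenames):
    avg = torch.load(filenames[0], map_location="cpu")["model"]
    for f in filenames[1:]:
        state = torch.load(f, map_location="cpu")["model"]
        for k in avg:
            avg[k] = avg[k] + state[k]
    # Element-wise mean of the accumulated weights.
    return {k: v / len(filenames) for k, v in avg.items()}

filenames = [f"conformer_ctc/exp_500/epoch-{n}.pt" for n in range(13, 19)]
averaged_model = average_checkpoints(filenames)
```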
Results using HLG decoding + whole lattice rescoring:

|     | Dev   | Test  |
|-----|-------|-------|
| WER | 10.51 | 10.62 |
The n-gram LM scale value used in whole lattice rescoring for the best WER is:

| lm_scale |
|----------|
| 0.2      |
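
In whole lattice rescoring the decoding lattice is intersected with the 4-gram G, and `lm_scale` controls the trade-off between acoustic and LM scores. To the best of my understanding the acoustic score is down-weighted by `1/lm_scale` before the LM score is added; the sketch below only illustrates that convention and is not icefall's code:

```
# Illustrative trade-off controlled by lm_scale in whole-lattice rescoring.
# Assumes the acoustic score is divided by lm_scale before the 4-gram LM
# score is added; see icefall's decoding utilities for the real thing.
def rescored_score(am_score: float,
                   lm_score: float,
                   lm_scale: float = 0.2) -> float:
    # A small lm_scale (0.2 gave the best WER above) keeps the acoustic
    # evidence dominant relative to the 4-gram LM.
    return am_score / lm_scale + lm_score
```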
To reproduce the above result, use the training commands above, and the following command for decoding:
```
./conformer_ctc/decode.py \
  --epoch 18 \
  --avg 6 \
  --method whole-lattice-rescoring \
  --num-paths 1000 \
  --exp-dir conformer_ctc/exp_500 \
  --lang-dir data/lang_bpe_500 \
  --max-duration 20 \
  --num-workers 1
```
Note: the `whole-lattice-rescoring` method is about twice as fast as the `attention-decoder` method, with slightly worse WER.
The pretrained model is available at
<https://huggingface.co/wgb14/icefall-asr-gigaspeech-conformer-ctc>
The tensorboard log for training is available at
<https://tensorboard.dev/experiment/rz63cmJXSK2fV9GceJtZXQ/>