Update results (commit 22f011e5ab, parent f485b66d54)
@@ -15,6 +15,6 @@ ln -sfv /path/to/GigaSpeech download/GigaSpeech

## Performance Record

|     | Dev   | Test  |
|-----|-------|-------|
| WER | 10.47 | 10.58 |

See [RESULTS](/egs/gigaspeech/ASR/RESULTS.md) for details.
@@ -5,22 +5,23 @@

#### 2022-04-06

The best WERs for GigaSpeech, as of 2022-04-06, are shown below.

Results using HLG decoding + n-gram LM rescoring + attention decoder rescoring:

|     | Dev   | Test  |
|-----|-------|-------|
| WER | 10.47 | 10.58 |

Scale values used in n-gram LM rescoring and attention rescoring for the best WERs are:

| ngram_lm_scale | attention_scale |
|----------------|-----------------|
| 0.5            | 1.3             |
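These scales weight the n-gram LM scores and the attention-decoder scores against the lattice scores when ranking candidate paths during rescoring. As a rough illustration only (the names below are hypothetical, not icefall's actual API), the per-path combination looks like:

```python
# Minimal sketch of how n-best rescoring combines scores for one candidate path.
# All names are illustrative; this is not icefall's implementation.
def combined_score(am_score: float,
                   ngram_lm_score: float,
                   attention_score: float,
                   ngram_lm_scale: float = 0.5,
                   attention_scale: float = 1.3) -> float:
    """Score used to rank a candidate path; the highest-scoring path wins."""
    return (am_score
            + ngram_lm_scale * ngram_lm_score
            + attention_scale * attention_score)
```

The 0.5/1.3 pair reported above is simply the combination that gave the lowest WER among the scale pairs tried during decoding.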

To reproduce the above result, use the following commands for training:

```
cd egs/gigaspeech/ASR
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
./conformer_ctc/train.py \
```
@@ -31,12 +32,12 @@ export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"

```
  --lang-dir data/lang_bpe_500
```

and the following command for decoding:

```
./conformer_ctc/decode.py \
  --epoch 18 \
  --avg 6 \
  --method attention-decoder \
  --num-paths 1000 \
  --exp-dir conformer_ctc/exp_500 \
```
@@ -47,3 +48,29 @@ and the following command for decoding

The tensorboard log for training is available at
<https://tensorboard.dev/experiment/rz63cmJXSK2fV9GceJtZXQ/>

Results using HLG decoding + whole lattice rescoring:

|     | Dev   | Test  |
|-----|-------|-------|
| WER | 10.51 | 10.62 |

The n-gram LM scale value used for the best WERs is:

| lm_scale |
|----------|
| 0.2      |
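The best `lm_scale` is found empirically: the dev set is decoded at several candidate scales and the value giving the lowest WER is kept. A hypothetical sketch of that selection (the `compute_dev_wer` helper is a placeholder, not an icefall function):

```python
# Hypothetical selection loop for the n-gram LM scale; not part of icefall.
# compute_dev_wer(lm_scale=...) is assumed to decode the dev set at that scale
# and return the resulting WER in percent.
def pick_best_lm_scale(compute_dev_wer, scales=(0.1, 0.2, 0.3, 0.5, 0.7, 1.0)):
    wers = {scale: compute_dev_wer(lm_scale=scale) for scale in scales}
    best_scale = min(wers, key=wers.get)
    return best_scale, wers[best_scale]
```

Here that search settled on `lm_scale = 0.2`.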

To reproduce the above result, use the training commands above, and the following command for decoding:

```
./conformer_ctc/decode.py \
  --epoch 18 \
  --avg 6 \
  --method whole-lattice-rescoring \
  --num-paths 1000 \
  --exp-dir conformer_ctc/exp_500 \
  --lang-dir data/lang_bpe_500 \
  --max-duration 20 \
  --num-workers 1
```

Note: the `whole-lattice-rescoring` method is about twice as fast as the `attention-decoder` method, with slightly worse WER.