mirror of https://github.com/k2-fsa/icefall.git

Add RESULTS.md

parent d2ae1ba060
commit a33852fd7a

egs/librispeech/ASR/RESULTS.md (new file, 23 lines)

@@ -0,0 +1,23 @@
## Results

### LibriSpeech BPE training results (Conformer-CTC)

#### 2021-08-19

(Wei Kang): Result of https://github.com/k2-fsa/icefall/pull/13

TensorBoard log is available at https://tensorboard.dev/experiment/GnRzq8WWQW62dK4bklXBTg/#scalars

Pretrained model is available at https://huggingface.co/pkufool/conformer_ctc
The best decoding results (WER) are listed below. We obtained these results by averaging the models from epoch 15 to epoch 34 and decoding with the `attention-decoder` method with `num_paths` equal to 100 (a sketch of the averaging step follows the table).

||test-clean|test-other|
|--|--|--|
|WER| 2.57% | 5.94% |
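Model averaging here means averaging the parameters of the checkpoints saved at epochs 15 through 34 element-wise. Below is a minimal sketch of that step, not the exact icefall implementation; it assumes each checkpoint is a `torch.save`d dict whose `"model"` key holds the state dict, and the `exp/epoch-*.pt` paths are hypothetical:

```python
# A minimal sketch of checkpoint averaging, NOT the exact icefall code.
# Assumption: each checkpoint file stores the state dict under the
# "model" key; the file paths below are hypothetical.
import torch

def average_checkpoints(filenames):
    """Element-wise average of model parameters over checkpoint files."""
    n = len(filenames)
    avg = torch.load(filenames[0], map_location="cpu")["model"]
    for f in filenames[1:]:
        state = torch.load(f, map_location="cpu")["model"]
        for k in avg:
            avg[k] += state[k]
    for k in avg:
        if avg[k].is_floating_point():
            avg[k] /= n
        else:
            # Integer buffers (e.g. BatchNorm's num_batches_tracked)
            # need integer division.
            avg[k] //= n
    return avg

# Epochs 15 to 34 inclusive, as used for the WERs above.
state_dict = average_checkpoints([f"exp/epoch-{i}.pt" for i in range(15, 35)])
```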
To get more unique paths, we scaled `lattice.scores` by 0.5 (see https://github.com/k2-fsa/icefall/pull/10#discussion_r690951662 for more details). We then searched over `lm_scale` and `attention_scale` for the best results; the scales that produced the WERs above are listed below.

||lm_scale|attention_scale|
|--|--|--|
|test-clean|1.3|1.2|
|test-other|1.2|1.1|
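For illustration, here is a hedged sketch of the two-dimensional scale search over an n-best list. The linear combination of acoustic, LM, and attention-decoder scores, the grid range, and the toy data are all assumptions made for this example, not the exact icefall scoring code:

```python
# A hedged sketch of the (lm_scale, attention_scale) grid search, NOT the
# exact icefall rescoring code. Assumption: each unique path carries an
# acoustic score, an n-gram LM score, and an attention-decoder score that
# are combined linearly. The n-best entries below are made-up toy data.
import itertools

# (utterance id, acoustic score, LM score, attention score, hypothesis)
nbest = [
    ("utt1", -10.2, -3.1, -2.0, "hello world"),
    ("utt1", -10.5, -2.4, -1.6, "hello word"),
    ("utt2", -8.7, -4.0, -2.9, "good morning"),
]

def best_paths(nbest, lm_scale, attention_scale):
    """Pick, per utterance, the path with the highest combined score."""
    best = {}
    for utt, am, lm, attn, words in nbest:
        tot = am + lm_scale * lm + attention_scale * attn
        if utt not in best or tot > best[utt][0]:
            best[utt] = (tot, words)
    return {utt: words for utt, (_, words) in best.items()}

# Search both scales over a small grid, e.g. 0.5, 0.6, ..., 2.0.
grid = [round(0.1 * i, 1) for i in range(5, 21)]
for lm_scale, attention_scale in itertools.product(grid, grid):
    hyps = best_paths(nbest, lm_scale, attention_scale)
    # In the real experiment one would score `hyps` against the reference
    # transcripts and keep the scale pair with the lowest WER.
```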