Update results.

commit 8d2797d7cd (parent a227bd76b4)
Fangjun Kuang, 2022-04-29 14:13:44 +08:00
2 changed files with 40 additions and 0 deletions

@@ -35,6 +35,9 @@ We do provide a Colab notebook for this recipe.
### LibriSpeech
Please see <https://github.com/k2-fsa/icefall/blob/master/egs/librispeech/ASR/README.md>
for the **latest** results.
We provide 4 models for this recipe:
- [conformer CTC model][LibriSpeech_conformer_ctc]
@@ -92,6 +95,20 @@ in the decoding.
We provide a Colab notebook to run a pre-trained transducer conformer + stateless decoder model: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1CO1bXJ-2khDckZIW8zjOPHGSKLHpTDlp?usp=sharing)
#### k2 pruned RNN-T

|     | test-clean | test-other |
|-----|------------|------------|
| WER | 2.57       | 5.95       |

#### k2 pruned RNN-T + GigaSpeech

|     | test-clean | test-other |
|-----|------------|------------|
| WER | 2.19       | 4.97       |
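
The WERs above are standard word error rates in percent: the number of word substitutions, deletions, and insertions in the hypothesis divided by the number of words in the reference transcripts, i.e. WER = (S + D + I) / N. A WER of 2.57 on test-clean thus corresponds to roughly 2.6 word-level errors per 100 reference words.
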
### Aishell
We provide two models for this recipe: [conformer CTC model][Aishell_conformer_ctc]

@@ -127,6 +127,29 @@ The Nbest oracle WER is computed using the following steps:
- 5. The path with the lowest edit distance is taken as the final output and is
  used to compute the WER; see the sketch after the command below.
The command to compute the Nbest oracle WER is:
```bash
# Sweep the number of extracted paths and the scale applied to the
# lattice scores before the n-best paths are sampled.
for epoch in 27; do
  for avg in 10; do
    for num_paths in 50 100 200 400; do
      for nbest_scale in 0.5 0.8 1.0; do
        ./pruned_transducer_stateless3/decode.py \
          --epoch $epoch \
          --avg $avg \
          --exp-dir ./pruned_transducer_stateless3/exp \
          --max-duration 600 \
          --decoding-method fast_beam_search_nbest_oracle \
          --num-paths $num_paths \
          --max-states 32 \
          --beam 8 \
          --nbest-scale $nbest_scale
      done
    done
  done
done
```
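
Conceptually, the oracle selection step is simple: for each utterance, keep the n-best hypothesis with the smallest edit distance to the reference. Below is a minimal Python sketch of that step (illustrative only, not icefall's actual implementation; `edit_distance` and `oracle_hypothesis` are hypothetical helpers):

```python
def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two lists of words."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # dp[j] = distance from ref[:0] to hyp[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds dp[i-1][0]
        for j in range(1, n + 1):
            cur = dp[j]  # dp[i-1][j], saved before being overwritten
            dp[j] = min(
                dp[j] + 1,      # delete ref[i-1]
                dp[j - 1] + 1,  # insert hyp[j-1]
                prev + (ref[i - 1] != hyp[j - 1]),  # substitute or match
            )
            prev = cur
    return dp[n]


def oracle_hypothesis(ref, nbest):
    """Return the n-best entry closest to the reference transcript."""
    return min(nbest, key=lambda hyp: edit_distance(ref, hyp))
```

The oracle WER is then the sum of these per-utterance edit distances divided by the total number of words in the reference transcripts.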
### LibriSpeech BPE training results (Pruned Transducer 2)
[pruned_transducer_stateless2](./pruned_transducer_stateless2)