mirror of https://github.com/k2-fsa/icefall.git
synced 2025-09-19 05:54:20 +00:00

upload the decoding results of rnn-lm to the RESULTS

This commit is contained in:
parent 79d7c94397
commit d71e35d85e
.flake8 (3 changed lines)

@@ -10,7 +10,8 @@ per-file-ignores =
     egs/*/ASR/*/optim.py: E501,
     egs/*/ASR/*/scaling.py: E501,
     egs/librispeech/ASR/conv_emformer_transducer_stateless*/*.py: E501, E203,
-    egs/librispeech/ASR/conformer_ctc2/*py: E501
+    egs/librispeech/ASR/conformer_ctc2/*py: E501,
+    egs/librispeech/ASR/RESULTS.md: E999,
     # invalid escape sequence (cause by tex formular), W605
     icefall/utils.py: E501, W605
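The `.flake8` comment above notes that TeX formulas trigger W605 (invalid escape sequence) in `icefall/utils.py`. A minimal standalone sketch of that warning and the usual raw-string fix (illustrative strings only, not icefall code):

```python
# "\l" and "\c" are not recognized Python escape sequences, so flake8
# reports W605 "invalid escape sequence" for TeX markup in a plain string.
tex_plain = "loss = \lambda \cdot L_{prune}"   # flake8: W605

# The usual fix is a raw string. It is byte-for-byte identical here,
# because Python keeps unrecognized escapes as-is (with a warning).
tex_raw = r"loss = \lambda \cdot L_{prune}"

assert tex_plain == tex_raw
```

Suppressing W605 per file, as this commit's `.flake8` does, is the alternative when rewriting many docstrings as raw strings is not worth the churn.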
egs/librispeech/ASR/RESULTS.md

@@ -1,6 +1,6 @@
 ## Results
 
-### LibriSpeech BPE training results (Pruned Stateless Conv-Emformer RNN-T 2)
+#### LibriSpeech BPE training results (Pruned Stateless Conv-Emformer RNN-T 2)
 
 [conv_emformer_transducer_stateless2](./conv_emformer_transducer_stateless2)
 
@@ -1998,10 +1998,9 @@ avg=11
 You can find the tensorboard log at: <https://tensorboard.dev/experiment/D7NQc3xqTpyVmWi5FnWjrA>
 
 
-
 ### LibriSpeech BPE training results (Conformer-CTC 2)
 
-[conformer_ctc2](./conformer_ctc2)
+#### [conformer_ctc2](./conformer_ctc2)
 
 #### 2022-07-21
 
@@ -2037,7 +2036,7 @@ The decoding configuration for the reworked model is --epoch 30, --avg 8, --use-
 | whole-lattice-rescoring| 2.66% | 5.76%| 4.21%| 2.56%| 6.04%| 4.30%|
 | attention-decoder | 2.59% | 5.54%| 4.07%| 2.41%| 5.77%| 4.09%|
 | nbest-oracle | 1.53% | 3.47%| 2.50%| 1.69%| 4.02%| 2.86%|
-|rnn-lm | 2.37% | 4.98%| 3.68%| 2.31%| 5.35%| 3.83%|
+| rnn-lm | 2.37% | 4.98%| 3.68%| 2.31%| 5.35%| 3.83%|
 
 
 
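As a quick sanity check on the rnn-lm row this commit adds, the gain over whole-lattice rescoring can be expressed as a relative WER reduction. A small sketch, taking the first WER column of each row (column meanings assumed from the table layout):

```python
# WERs (%) from the first column of the table above.
wer_whole_lattice = 2.66
wer_rnn_lm = 2.37

# Relative WER reduction from switching to rnn-lm rescoring.
rel_reduction = (wer_whole_lattice - wer_rnn_lm) / wer_whole_lattice * 100
print(f"{rel_reduction:.1f}% relative WER reduction")  # about 10.9%
```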