## Results

### LibriSpeech BPE training results (Conformer-CTC)

#### 2021-08-19

(Wei Kang): Result of https://github.com/k2-fsa/icefall/pull/13

TensorBoard log is available at https://tensorboard.dev/experiment/GnRzq8WWQW62dK4bklXBTg/#scalars

Pretrained model is available at https://huggingface.co/pkufool/icefall_asr_librispeech_conformer_ctc

The best decoding results (WER) are listed below. We obtained them by averaging the models from epoch 15 to 34 and decoding with the attention-decoder method, with num_paths equal to 100. A sketch of the checkpoint-averaging step follows the results table.

|     | test-clean | test-other |
|-----|------------|------------|
| WER | 2.57%      | 5.94%      |
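
Model averaging here means averaging the saved checkpoints parameter by parameter; icefall provides a helper for this, but the idea fits in a few lines. Below is a minimal sketch, assuming PyTorch checkpoints named `epoch-N.pt` whose model weights live under a `model` key (both names are assumptions about the experiment directory layout):

```python
import torch

def average_checkpoints(filenames):
    """Element-wise average of the `model` state dicts of the given checkpoints."""
    avg = torch.load(filenames[0], map_location="cpu")["model"]
    for f in filenames[1:]:
        state = torch.load(f, map_location="cpu")["model"]
        for k in avg:
            avg[k] += state[k]
    n = len(filenames)
    for k in avg:
        if avg[k].is_floating_point():
            avg[k] /= n
        else:
            # Integer tensors (e.g. counters) are floor-divided to keep their dtype.
            avg[k] //= n
    return avg

# Average the checkpoints saved at epochs 15..34 (hypothetical paths).
ckpts = [f"conformer_ctc/exp/epoch-{i}.pt" for i in range(15, 35)]
averaged_state_dict = average_checkpoints(ckpts)
```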

To get more unique paths, we scaled `lattice.scores` by 0.5 (see https://github.com/k2-fsa/icefall/pull/10#discussion_r690951662 for more details). We then searched over lm_scale and attention_scale for the best results; the scales that produced the WERs above are listed below, and the search itself is sketched after the table.

|            | lm_scale | attention_scale |
|------------|----------|-----------------|
| test-clean | 1.3      | 1.2             |
| test-other | 1.2      | 1.1             |
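
The scale search is a plain grid search: decode once per (lm_scale, attention_scale) pair and keep the pair with the lowest WER. Below is a minimal sketch of that selection loop; `decode_and_score` and the candidate ranges are hypothetical placeholders, not the actual values used:

```python
import itertools

def decode_and_score(lm_scale: float, attention_scale: float) -> float:
    """Hypothetical placeholder: decode with the given scales and return the WER (%)."""
    raise NotImplementedError

# Candidate grids; the ranges actually searched are not recorded here.
lm_scales = [round(0.1 * i, 1) for i in range(5, 21)]          # 0.5 .. 2.0
attention_scales = [round(0.1 * i, 1) for i in range(5, 21)]   # 0.5 .. 2.0

best_pair = min(
    itertools.product(lm_scales, attention_scales),
    key=lambda pair: decode_and_score(*pair),
)
print("best (lm_scale, attention_scale):", best_pair)
```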

You can use the following commands to reproduce our results:

```bash
git clone https://github.com/k2-fsa/icefall
cd icefall

# The results above were obtained with commit ef233486;
# you may not need to switch to it.
# git checkout ef233486

cd egs/librispeech/ASR
./prepare.sh

export CUDA_VISIBLE_DEVICES="0,1,2,3"
python conformer_ctc/train.py --bucketing-sampler True \
                              --concatenate-cuts False \
                              --max-duration 200 \
                              --full-libri True \
                              --world-size 4

python conformer_ctc/decode.py --lattice-score-scale 0.5 \
                               --epoch 34 \
                               --avg 20 \
                               --method attention-decoder \
                               --max-duration 20 \
                               --num-paths 100
```
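
Note that `--epoch 34 --avg 20` averages the checkpoints saved at epochs 15 through 34, which is the averaging setup described above.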

### LibriSpeech training results (Tdnn-Lstm)

#### 2021-08-24

(Wei Kang): Result of the phone-based Tdnn-Lstm model.

Icefall version: caa0b9e942

Pretrained model is available at https://huggingface.co/pkufool/icefall_asr_librispeech_tdnn-lstm_ctc

The best decoding results (WER) are listed below. We obtained them by averaging the models from epoch 14 to 19 and decoding with the whole-lattice-rescoring method.

|     | test-clean | test-other |
|-----|------------|------------|
| WER | 6.59%      | 17.69%     |

We searched over lm_scale for the best results; the scales that produced the WERs above are listed below. A toy illustration of how lm_scale affects the rescored 1-best path follows the table.

|            | lm_scale |
|------------|----------|
| test-clean | 0.8      |
| test-other | 0.9      |
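
As a toy illustration (made-up numbers, not from this experiment): in whole-lattice rescoring each path carries an acoustic score and an n-gram LM score, and the rescored path score is roughly `am_score + lm_scale * lm_score`, so changing lm_scale can change which path wins:

```python
# Hypothetical path scores: (am_score, lm_score) in log-space.
paths = {
    "hyp A": (-110.0, -18.0),
    "hyp B": (-112.0, -14.0),
}

for lm_scale in (0.3, 0.8, 0.9, 1.2):
    best = max(paths, key=lambda h: paths[h][0] + lm_scale * paths[h][1])
    print(f"lm_scale={lm_scale}: best path is {best}")
```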