minor fix for aishell recipe (#223)

* just remove unnecessary torch.sum

* minor fixes for aishell
This commit is contained in:
PF Luo 2022-02-23 08:33:20 +08:00 committed by GitHub
parent 2332ba312d
commit ac7c2d84bc
3 changed files with 9 additions and 9 deletions


@@ -113,7 +113,7 @@ The best CER we currently have is:
 | | test |
 |-----|------|
-| CER | 5.4 |
+| CER | 5.05 |
 We provide a Colab notebook to run a pre-trained TransducerStateless model: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/14XaT2MhnBkK-3_RqqWq3K90Xlbin-GZC?usp=sharing)


@@ -46,12 +46,12 @@ python3 ./transducer_stateless/decode.py \
 ### Aishell training results (Transducer-stateless)
 #### 2022-02-18
-(Pingfeng Luo) : The tensorboard log for training is available at <https://tensorboard.dev/experiment/SG1KV62hRzO5YZswwMQnoQ/>
+(Pingfeng Luo) : The tensorboard log for training is available at <https://tensorboard.dev/experiment/k3QL6QMhRbCwCKYKM9po9w/>
 And pretrained model is available at <https://huggingface.co/pfluo/icefall-aishell-transducer-stateless-char-2021-12-29>
 ||test|
 |--|--|
-|CER| 5.4% |
+|CER| 5.05% |
 You can use the following commands to reproduce our results:
@@ -61,17 +61,17 @@ export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7,8"
   --bucketing-sampler True \
   --world-size 8 \
   --lang-dir data/lang_char \
-  --num-epochs 40 \
+  --num-epochs 60 \
   --start-epoch 0 \
-  --exp-dir transducer_stateless/exp_char \
-  --max-duration 160 \
+  --exp-dir transducer_stateless/exp_rnnt_k2 \
+  --max-duration 80 \
   --lr-factor 3
 ./transducer_stateless/decode.py \
-  --epoch 39 \
+  --epoch 59 \
   --avg 10 \
   --lang-dir data/lang_char \
-  --exp-dir transducer_stateless/exp_char \
+  --exp-dir transducer_stateless/exp_rnnt_k2 \
   --max-duration 100 \
   --decoding-method beam_search \
   --beam-size 4


@@ -122,4 +122,4 @@ class Transducer(nn.Module):
         loss = k2.rnnt_loss(logits, y_padded, blank_id, boundary)
-        return torch.sum(loss)
+        return loss
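The rationale for the hunk above (per the commit message, "just remove unnecessary torch.sum"): `k2.rnnt_loss` here already returns a reduced, 0-dim loss tensor, so wrapping it in `torch.sum` changes nothing. A minimal sketch in plain PyTorch (not the recipe code, and standing in for `k2.rnnt_loss` with an ordinary reduction) showing that `torch.sum` on an already-reduced tensor is a no-op:

```python
import torch

# Stand-in for a per-frame loss; .sum() yields a reduced 0-dim tensor,
# analogous to the already-reduced output of k2.rnnt_loss in this recipe.
per_frame = torch.tensor([0.5, 1.5, 2.0])
loss = per_frame.sum()

# Applying torch.sum to a 0-dim tensor returns an equal 0-dim tensor,
# so the extra call in the old code was redundant.
assert torch.sum(loss).item() == loss.item()
assert torch.sum(loss).dim() == 0
```

The simplification keeps the returned value (and gradients flowing through it) identical while dropping one dead op from the forward pass.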