# Introduction

This recipe includes several ASR models trained with WenetSpeech.

./RESULTS.md contains the latest results.
# Transducers

There are various folders in this directory whose names contain `transducer`.
The following table lists the differences among them.
|                                | Encoder             | Decoder           | Comment                    |
|--------------------------------|---------------------|-------------------|----------------------------|
| `pruned_transducer_stateless2` | Conformer (modified)| Embedding + Conv1d| Using k2 pruned RNN-T loss |
| `pruned_transducer_stateless5` | Conformer (modified)| Embedding + Conv1d| Using k2 pruned RNN-T loss |
The decoder in `transducer_stateless` is modified from the paper
*RNN-Transducer with Stateless Prediction Network*:
we place an additional Conv1d layer right after the input embedding layer.
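The idea of a stateless prediction network can be sketched in a few lines of PyTorch: instead of an RNN whose hidden state summarizes the full label history, the decoder embeds the labels and applies a causal Conv1d over a small fixed context window. This is a minimal illustrative sketch, not icefall's actual implementation; the class name, dimensions, and context size are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StatelessDecoder(nn.Module):
    """Sketch of a stateless transducer prediction network (hypothetical sizes).

    Replaces the usual RNN with: Embedding -> causal Conv1d over the
    last `context_size` label tokens. No recurrent state is kept.
    """

    def __init__(self, vocab_size: int, embed_dim: int = 512, context_size: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Depthwise-style Conv1d mixing only a short history of labels.
        self.conv = nn.Conv1d(embed_dim, embed_dim, kernel_size=context_size)
        self.context_size = context_size

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, num_labels) of token ids
        emb = self.embedding(y).permute(0, 2, 1)        # (B, E, U)
        # Left-pad so the convolution is causal (sees only past labels).
        emb = F.pad(emb, (self.context_size - 1, 0))
        out = self.conv(emb)                            # (B, E, U)
        return out.permute(0, 2, 1)                     # (B, U, E)


decoder = StatelessDecoder(vocab_size=500)
labels = torch.randint(0, 500, (4, 10))
out = decoder(labels)
print(out.shape)  # torch.Size([4, 10, 512])
```

Because the output at position `u` depends on at most `context_size` previous labels, beam search needs to track only a short label suffix instead of an RNN hidden state, which simplifies and speeds up decoding.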