2 Commits

Yifan Yang
070c77e724
Add Blank Skip to Zipformer+CTC (#730)
* init files

* add ctc as auxiliary loss and ctc_decode.py

* tune the scale of the HLG score for 1best, nbest and nbest-oracle

* rename to pruned_transducer_stateless7_ctc

* fix doc

* fix bug, recover the hlg scores

* modify ctc_decode.py, move out the hlg scale

* fix hlg_scale

* add export.py and pretrained.py, and so on

* upload files, update README.md and RESULTS.md

* add CI test

* update .gitignore

* create symlinks

* Add Blank Skip to Zipformer+CTC

* Add warmup to blank skip

* Add warmup to blank skip

* Add __init__.py

* Add parameters_names to Adam

* Add warmup to blank skip

* Modify frame_reducer

* Modify frame_reducer

* Add Blank Skip to decode.

* Add ctc_decode.py

* Add blank skip to Zipformer+CTC

* process conflict

* process conflict

* modify ctc_guild_decode_bk.py

* modify Lconv

* resolve the conflict

* Add export.py

* finish export

* fix for running black

* Add ci test

* Add ci-test

* chmod

* chmod

* fix bug for ci-test

* fix bug for ci-test

* fix bug for ci-test

* rename the dirname

* rename the dirname

* change dirname

* change dirname

* fix notes

* add pretrained.py

* add pretrained.py

* add pretrained.py

* add pretrained.py

* add pretrained.py

* add pretrained.py

* fix

* fix

* fix

* finished

* add the Copyright info and notes

Co-authored-by: Zengwei Yao <yaozengwei@outlook.com>
Co-authored-by: yifanyang <yifanyeung@yifanyangs-MacBook-Pro.local>
2022-12-21 17:41:31 +08:00
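
The blank-skip commits above center on a frame reducer: encoder frames whose CTC blank posterior is very high carry little label information, so they can be dropped before the transducer joiner. Below is a minimal sketch of that idea for a single utterance, using a hypothetical `frame_reduce` helper and threshold value; it is an illustration of the technique, not the recipe's `frame_reducer` implementation.

```python
import torch


def frame_reduce(
    encoder_out: torch.Tensor,
    ctc_log_probs: torch.Tensor,
    blank_id: int = 0,
    blank_threshold: float = 0.95,
) -> torch.Tensor:
    """Keep only frames whose CTC blank posterior is below the threshold.

    encoder_out:   (T, C) encoder frames for one utterance.
    ctc_log_probs: (T, V) log-posteriors from the CTC head.
    Returns a (T', C) tensor with T' <= T.
    """
    keep = ctc_log_probs[:, blank_id].exp() < blank_threshold
    return encoder_out[keep]


if __name__ == "__main__":
    T, C, V = 50, 8, 10
    enc = torch.randn(T, C)
    log_probs = torch.log_softmax(torch.randn(T, V), dim=-1)
    print(enc.shape, "->", frame_reduce(enc, log_probs).shape)
```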
Wang, Guanbo
5fe58de43c
GigaSpeech recipe (#120)
* initial commit

* support download, data prep, and fbank

* on-the-fly feature extraction by default

* support BPE based lang

* support HLG for BPE

* small fix

* small fix

* chunked feature extraction by default

* Compute features for GigaSpeech by splitting the manifest.

* Fixes after review.

* Split manifests into 2000 pieces.

* set audio duration mismatch tolerance to 0.01

* small fix

* add conformer training recipe

* Add conformer.py without pre-commit checking

* lazy loading and use SingleCutSampler

* DynamicBucketingSampler

* use KaldifeatFbank to compute fbank for musan

* use pretrained language model and lexicon

* use 3gram to decode, 4gram to rescore

* Add decode.py

* Update .flake8

* Delete compute_fbank_gigaspeech.py

* Use BucketingSampler for valid and test dataloader

* Update params in train.py

* Use bpe_500

* update params in decode.py

* Decrease num_paths when CUDA OOM occurs

* Added README

* Update RESULTS

* black

* Decrease num_paths when CUDA OOM occurs

* Decode with post-processing

* Update results

* Remove lazy_load option

* Use default `storage_type`

* Keep the original tolerance

* Use split-lazy

* black

* Update pretrained model

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-04-14 16:07:22 +08:00
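
Two commits in this recipe decrease num_paths when n-best decoding hits a CUDA out-of-memory error. Below is a minimal sketch of that backoff pattern, assuming a hypothetical `decode_fn(num_paths)` that stands in for the recipe's rescoring call; it illustrates the retry logic only and is not icefall's actual code.

```python
import torch


def decode_with_backoff(decode_fn, num_paths: int = 200, min_paths: int = 10):
    """Call decode_fn(num_paths), halving num_paths after each CUDA OOM."""
    while num_paths >= min_paths:
        try:
            return decode_fn(num_paths)
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise
            torch.cuda.empty_cache()  # no-op if CUDA was never initialized
            num_paths //= 2
    raise RuntimeError(f"decoding failed even with num_paths >= {min_paths}")


if __name__ == "__main__":
    # Toy decode_fn that "runs out of memory" until num_paths is small enough.
    def decode_fn(num_paths: int):
        if num_paths > 50:
            raise RuntimeError("CUDA out of memory (simulated)")
        return [f"hyp decoded with num_paths={num_paths}"]

    print(decode_with_backoff(decode_fn))
```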