36 Commits

Author | SHA1 | Message | Date
Zengwei Yao
f76afff741
Support CTC/AED option for Zipformer recipe (#1389)
* add attention-decoder loss option for zipformer recipe

* add attention-decoder-rescoring

* update export.py and pretrained_ctc.py

* update RESULTS.md
2024-07-05 20:19:18 +08:00
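The CTC/AED option trains with both objectives at once. As a rough illustration only (not the recipe's actual code; the function and weight names are hypothetical), the two losses are typically interpolated:

```python
import torch

# Hypothetical interpolation of the two objectives; the recipe exposes
# its own command-line flags for the actual weighting.
def joint_ctc_aed_loss(
    ctc_loss: torch.Tensor,
    attn_loss: torch.Tensor,
    ctc_weight: float = 0.3,  # illustrative value
) -> torch.Tensor:
    """Weighted sum of the CTC and attention-decoder losses."""
    return ctc_weight * ctc_loss + (1.0 - ctc_weight) * attn_loss
```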
Daniil
b293db4baf
Tedlium3 conformer ctc2 (#696)
* modify preparation

* small refactor

* add tedlium3 conformer_ctc2

* modify decode

* filter unk in decode

* add scaling converter

* address comments

* fix lambda function for lhotse

* add implicit manifest shuffle

* refactor ctc_greedy_search

* import model arguments from train.py

* style fix

* fix ci test and last style issues

* update RESULTS

* fix RESULTS numbers

* fix label smoothing loss

* update model parameters number in RESULTS
2022-12-13 16:13:26 +08:00
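One bullet above fixes the label smoothing loss; the fix itself is not shown in this log. For reference, a generic label-smoothed cross-entropy (all names here illustrative, not the recipe's code) looks like:

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, targets, smoothing=0.1, ignore_index=-1):
    # logits: (N, V); targets: (N,). The smoothed target puts (1 - eps)
    # on the gold label and spreads eps uniformly over the vocabulary.
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(1, targets.clamp(min=0).unsqueeze(1)).squeeze(1)
    uniform = -log_probs.mean(dim=-1)
    loss = (1.0 - smoothing) * nll + smoothing * uniform
    mask = (targets != ignore_index).float()
    return (loss * mask).sum() / mask.sum()
```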
Zengwei Yao
b25c234c51
Add Zipformer-MMI (#746)
* Minor fix to conformer-mmi

* Minor fixes

* Fix decode.py

* add training files

* train with ctc warmup

* add pruned_transducer_stateless7_mmi

* add zipformer_mmi/mmi_decode.py, using HP as decoding graph

* add mmi_decode.py

* remove pruned_transducer_stateless7_mmi

* rename zipformer_mmi/train_with_ctc.py as zipformer_mmi/train.py

* remove unused method

* rename mmi_decode.py

* add export.py pretrained.py jit_pretrained.py ...

* add RESULTS.md

* add CI test

* add docs

* add README.md

Co-authored-by: pkufool <wkang.pku@gmail.com>
2022-12-11 21:30:39 +08:00
Desh Raj
107df3b115 apply black on all files 2022-11-17 09:42:17 -05:00
Fangjun Kuang
60317120ca
Revert "Apply new Black style changes" 2022-11-17 20:19:32 +08:00
Desh Raj
cad8f6aca4 merge upstream 2022-11-16 19:50:43 -05:00
Daniil
fca796cc2c
Small code refactoring (#687) 2022-11-17 06:55:53 +08:00
Desh Raj
d110b04ad3 apply new black formatting to all files 2022-11-16 13:06:43 -05:00
Fangjun Kuang
6af5a82d8f
Convert ScaledEmbedding to nn.Embedding for inference. (#517)
* Convert ScaledEmbedding to nn.Embedding for inference.

* Fix CI style issues.
2022-08-03 15:34:55 +08:00
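The point of this conversion is that a scaled embedding can be folded into a plain nn.Embedding once training is done, so inference needs no custom module. A minimal sketch, assuming the scaled module stores a weight plus a log-scale parameter (the attribute names below are assumptions, not icefall's exact internals):

```python
import torch
import torch.nn as nn

def to_nn_embedding(scaled_emb) -> nn.Embedding:
    # Fold the learned scale into the weight matrix once, at export time.
    weight = scaled_emb.weight.detach() * scaled_emb.scale.detach().exp()
    emb = nn.Embedding(weight.size(0), weight.size(1))
    with torch.no_grad():
        emb.weight.copy_(weight)
    return emb
```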
LIyong.Guo
132132f52a
liear_fst_with_self_loops (#512) 2022-08-02 22:28:12 +08:00
ezerhouni
608473b4eb
Add RNN-LM rescoring in fast beam search (#475) 2022-07-18 16:52:17 +08:00
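Rescoring here means re-ranking hypotheses produced by fast beam search with a neural LM. A sketch of the shape of that computation, assuming a hypothetical `lm.score(tokens)` that returns a log-probability:

```python
def rescore_with_rnn_lm(hyps, base_scores, lm, lm_scale=0.3):
    # hyps: list of token-ID sequences; base_scores: their lattice scores.
    best_hyp, best_total = None, float("-inf")
    for tokens, base in zip(hyps, base_scores):
        total = base + lm_scale * lm.score(tokens)  # lm.score is assumed
        if total > best_total:
            best_hyp, best_total = tokens, total
    return best_hyp
```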
ezerhouni
0475d75d15
[Ready to be merged] Add RNN-LM to Conformer-CTC decoding (#439) 2022-06-23 19:37:03 +08:00
Fangjun Kuang
dc89b61b80
Add fast_beam_search_nbest. (#420)
* Add fast_beam_search_nbest.

* Fix CI errors.

* Fix CI errors.

* More fixes.

* Small fixes.

* Support using log_add in LG decoding with fast_beam_search.

* Support LG decoding in pruned_transducer_stateless

* Support LG for pruned_transducer_stateless2.

* Support LG for fast beam search.

* Minor fixes.
2022-06-22 00:09:25 +08:00
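The `log_add` bullet refers to how scores of paths carrying the same labels are merged: taking the max keeps only the best path (Viterbi-style), while log-add accumulates the full probability mass. Illustrated with plain torch:

```python
import torch

a = torch.tensor(-3.2)  # log-score of one path
b = torch.tensor(-3.5)  # log-score of a competing path with the same labels

print(torch.maximum(a, b))    # max merge: best single path
print(torch.logaddexp(a, b))  # log_add merge: total mass, always >= max
```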
Wang, Guanbo
5fe58de43c
GigaSpeech recipe (#120)
* initial commit

* support download, data prep, and fbank

* on-the-fly feature extraction by default

* support BPE based lang

* support HLG for BPE

* small fix

* small fix

* chunked feature extraction by default

* Compute features for GigaSpeech by splitting the manifest.

* Fixes after review.

* Split manifests into 2000 pieces.

* set audio duration mismatch tolerance to 0.01

* small fix

* add conformer training recipe

* Add conformer.py without pre-commit checking

* lazy loading and use SingleCutSampler

* DynamicBucketingSampler

* use KaldifeatFbank to compute fbank for musan

* use pretrained language model and lexicon

* use 3gram to decode, 4gram to rescore

* Add decode.py

* Update .flake8

* Delete compute_fbank_gigaspeech.py

* Use BucketingSampler for valid and test dataloader

* Update params in train.py

* Use bpe_500

* update params in decode.py

* Decrease num_paths when CUDA OOM occurs

* Added README

* Update RESULTS

* black

* Decrease num_paths when CUDA OOM occurs

* Decode with post-processing

* Update results

* Remove lazy_load option

* Use default `storage_type`

* Keep the original tolerance

* Use split-lazy

* black

* Update pretrained model

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-04-14 16:07:22 +08:00
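Several bullets above (lazy loading, DynamicBucketingSampler, split manifests) are about keeping GigaSpeech's very large manifests out of memory. A minimal sketch with lhotse, using an illustrative manifest path:

```python
from lhotse import CutSet
from lhotse.dataset import DynamicBucketingSampler

# Lazily iterate a large manifest instead of loading it into memory.
cuts = CutSet.from_jsonl_lazy("data/fbank/cuts_train.jsonl.gz")  # path is illustrative
sampler = DynamicBucketingSampler(
    cuts,
    max_duration=200.0,  # seconds of audio per batch
    shuffle=True,
)
```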
Wang, Guanbo
e8eb408760
Incremental pruning threshold (#214)
* Incremental pruning threshold

* flake8

* black

* minor fix
2022-02-16 16:59:27 +08:00
Fangjun Kuang
5b10310bd1 Handle empty lattices in attention decoder rescoring. 2021-11-11 15:42:30 +08:00
Fangjun Kuang
8d679c3e74
Fix typos. (#115) 2021-11-10 14:45:30 +08:00
Fangjun Kuang
21096e99d8
Update result for the librispeech recipe using vocab size 500 and att rate 0.8 (#113)
* Update RESULTS using vocab size 500, att rate 0.8

* Update README.

* Refactoring.

Since FSAs in an Nbest object are linear in structure, we can
simply sum the arc scores along a path to compute its total score.

* Update documentation.

* Change default vocab size from 5000 to 500.
2021-11-10 14:32:52 +08:00
Fangjun Kuang
04029871b6
Fix a bug in Nbest.compute_am_scores and Nbest.compute_lm_scores. (#111) 2021-11-09 13:44:51 +08:00
Fangjun Kuang
91cfecebf2
Remove duplicated token seq in rescoring. (#108)
* Remove duplicated token seq in rescoring.

* Use a larger range for ngram_lm_scale and attention_scale
2021-11-06 08:54:45 +08:00
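Duplicated token sequences waste compute in attention-decoder rescoring, since identical inputs get scored repeatedly. A sketch of the deduplication step (illustrative, not the PR's code):

```python
def dedup_token_seqs(token_seqs):
    # Keep only the first occurrence of each distinct token sequence.
    seen, unique = set(), []
    for seq in token_seqs:
        key = tuple(seq)
        if key not in seen:
            seen.add(key)
            unique.append(seq)
    return unique
```

The second bullet simply widens the grid searched over ngram_lm_scale and attention_scale when combining scores.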
Fangjun Kuang
12d647d899
Add a note about the CUDA OOM error. (#94)
* Add a note about the CUDA OOM error.

Some users treat this kind of OOM as an error during decoding,
but it is not. This pull request clarifies that.

* Fix style issues.
2021-10-29 12:17:56 +08:00
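The note's point is that a decoding-time OOM is recoverable: retry with fewer n-best paths instead of aborting. A sketch of that pattern, where `decode_fn` is a stand-in for the real decoding call:

```python
import torch

def decode_with_backoff(decode_fn, num_paths=200):
    while num_paths >= 10:
        try:
            return decode_fn(num_paths=num_paths)
        except RuntimeError as e:
            if "out of memory" not in str(e):
                raise
            torch.cuda.empty_cache()
            num_paths //= 2  # halve and retry
    raise RuntimeError("decoding failed even with a small num_paths")
```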
Fangjun Kuang
707d7017a7
Support pure CTC decoding requiring neither a lexicon nor an n-gram LM (#58)
* Rename lattice_score_scale to nbest_scale.

* Support pure CTC decoding requiring neither a lexicon nor an n-gram LM.

* Fix style issues.

* Fix a typo.

* Minor fixes.
2021-09-26 14:21:49 +08:00
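Pure CTC decoding needs no lexicon or LM because the best token per frame can be read off directly. A minimal greedy-search sketch (blank ID 0 by convention):

```python
import torch

def ctc_greedy_search(log_probs: torch.Tensor, blank_id: int = 0):
    # log_probs: (T, V) for one utterance.
    ids = log_probs.argmax(dim=-1).tolist()
    hyp, prev = [], blank_id
    for i in ids:
        if i != blank_id and i != prev:  # collapse repeats, drop blanks
            hyp.append(i)
        prev = i
    return hyp
```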
Fangjun Kuang
a80e58e15d
Refactor decode.py to make it more readable and more modular. (#44)
* Refactor decode.py to make it more readable and more modular.

* Fix an error.

Nbest.fsa should always have token IDs as labels and
word IDs as aux_labels.

* Add nbest decoding.

* Compute edit distance with k2.

* Refactor nbest-oracle.

* Add rescore with nbest lists.

* Add whole-lattice rescoring.

* Add rescoring with attention decoder.

* Refactoring.

* Fixes after refactoring.

* Fix a typo.

* Minor fixes.

* Replace [] with () for shapes.

* Use k2 v1.9

* Use Levenshtein graphs/alignment from k2 v1.9

* [doc] Require k2 >= v1.9

* Minor fixes.
2021-09-20 15:44:54 +08:00
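Two of the bullets above, nbest-oracle and edit distance, go together: the oracle WER picks, per utterance, the n-best path closest to the reference. icefall computes this with k2's Levenshtein graphs/alignment; the pure-Python DP below is only to show the idea:

```python
def edit_distance(a, b):
    # Classic Levenshtein DP with a rolling row.
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # deletion
                        dp[j - 1] + 1,      # insertion
                        prev + (x != y))    # substitution or match
            prev = cur
    return dp[-1]

def nbest_oracle(paths, reference):
    return min(paths, key=lambda p: edit_distance(p, reference))
```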
Fangjun Kuang
cc77cb3459
Fix decode.py to remove the correct axis. (#50)
* Fix decode.py to remove the correct axis.

* Run GitHub actions manually.
2021-09-17 16:49:03 +08:00
Wei Kang
9a6e0489c8
Update API for RaggedTensor (#45)
* Fix code style

* update k2 version in CI

* fix compile hlg
2021-09-14 16:39:56 +08:00
Fangjun Kuang
abadc71415
Use new APIs with k2.RaggedTensor (#38)
* Use new APIs with k2.RaggedTensor

* Fix style issues.

* Update the installation doc, saying it requires at least k2 v1.7

* Use k2 v1.7
2021-09-08 14:55:30 +08:00
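The switch is to k2's object-oriented ragged API, where ragged data is a first-class tensor type rather than a set of free functions. A tiny example, assuming k2 at the version the commit requires (at least v1.7):

```python
import k2

rt = k2.RaggedTensor([[1, 2, 3], [4], []])
print(rt.dim0)    # 3 sublists
print(rt.values)  # flat storage: tensor([1, 2, 3, 4])
```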
Fangjun Kuang
1bd5dcc8ac
WIP: Add doc for the LibriSpeech recipe. (#24)
* WIP: Add doc for the LibriSpeech recipe.

* Add more doc for LibriSpeech recipe.

* Add more doc for the LibriSpeech recipe.

* More doc.
2021-08-24 20:28:32 +08:00
pkufool
19c4214958
Fix code style and add copyright. (#18)
* Fix style and add copyright

* Minor fix

* Remove duplicate lines

* Reformat conformer.py by black

* Reformat code style with black.

* Fix github workflows

* Fix lhotse installation

* Install icefall requirements

* Update k2 version, remove lhotse from test workflow
2021-08-23 10:43:59 +08:00
Fangjun Kuang
9d0cc9d829
Support computing nbest oracle WER. (#10)
* Support computing nbest oracle WER.

* Add scale to all nbest based decoding/rescoring methods.

* Add script to run pretrained models.

* Use torchaudio to extract features.

* Support decoding multiple files at the same time.

Also, use kaldifeat for feature extraction.

* Support decoding with LM rescoring and attention-decoder rescoring.

* Minor fixes.

* Replace scale with lattice-score-scale.

* Add usage example with a provided pretrained model.
2021-08-20 11:53:37 +08:00
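The `scale` / `lattice-score-scale` knob mentioned above controls how peaked the path distribution is before n-best sampling: a smaller scale flattens the scores and yields more diverse paths. A plain-torch illustration:

```python
import torch

scores = torch.tensor([-1.0, -2.0, -5.0])  # log-scores of three paths
for scale in (1.0, 0.5, 0.1):
    # Smaller scale -> flatter distribution -> more diverse sampled paths.
    print(scale, torch.softmax(scores * scale, dim=0))
```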
pkufool
ef233486ae
The training script produces a WER of 2.57% on LibriSpeech test-clean (#13)
* Add grad_clip and weight-decay, small fixes to the dataloader and masking

* Add RESULTS.md
2021-08-20 10:08:08 +08:00
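The two training tweaks in this commit are standard PyTorch mechanics: weight decay is configured on the optimizer and gradients are clipped before each step. A generic sketch with illustrative values (the commit's actual optimizer and constants may differ):

```python
import torch

model = torch.nn.Linear(80, 500)
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-3, weight_decay=1e-6  # illustrative values
)

loss = model(torch.randn(4, 80)).sum()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)  # grad_clip
optimizer.step()
optimizer.zero_grad()
```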
Fangjun Kuang
5a0b9bcb23
Refactoring (#4)
* Fix an error in TDNN-LSTM training.

* WIP: Refactoring

* Refactor transformer.py

* Remove unused code.

* Minor fixes.
2021-08-04 14:53:02 +08:00
Fangjun Kuang
398ed80d7a Minor fixes to support DDP training. 2021-07-31 15:26:57 +08:00
Fangjun Kuang
bd69e4be32 Use attention decoder for rescoring. 2021-07-28 12:22:09 +08:00
Fangjun Kuang
f65854cca5 Add BPE decoding results. 2021-07-27 17:38:47 +08:00
Fangjun Kuang
4a66712406 Add LM rescoring. 2021-07-25 18:21:26 +08:00
Fangjun Kuang
6f9fe5b906 Refactor decoding code. 2021-07-24 22:23:50 +08:00