115 Commits

Author SHA1 Message Date
Daniel Povey
97a1dd40cf Change initialization value of weight in SimpleCombine from 0.0 to 0.1; ignore infinities in MetricsTracker 2022-11-03 13:46:14 +08:00
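
A minimal sketch of the two ideas in the commit above, assuming an illustrative SimpleCombine-like module and a dict-based metrics accumulator; the names and signatures here are hypothetical, not the repository's actual code.

```python
import math
import torch
import torch.nn as nn

class SimpleCombine(nn.Module):
    """Illustrative module that combines two inputs with a learnable weight."""
    def __init__(self):
        super().__init__()
        # Initialized to 0.1 instead of 0.0 so the second branch contributes
        # (and receives gradient) from the very first step.
        self.weight = nn.Parameter(torch.tensor(0.1))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return a + self.weight * b

def accumulate(tracker: dict, name: str, value: float) -> None:
    """Skip non-finite values so a single inf/nan does not poison the running sums."""
    if not math.isfinite(value):
        return
    tracker[name] = tracker.get(name, 0.0) + value
```
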
Daniel Povey
be5c687fbd Merging upstream/master 2022-10-27 21:04:48 +08:00
Daniel Povey
ad2d3c2b36 Don't print out full non-finite tensor 2022-10-22 23:03:19 +08:00
Daniel Povey
269b70122e Add hooks.py, had neglected to git add it. 2022-10-22 20:58:52 +08:00
Daniel Povey
8d1021d131 Remove comparison diagnostics, which were not that useful. 2022-10-22 13:57:00 +08:00
Daniel Povey
1d2fe8e3c2 Add more diagnostics to debug gradient scale problems 2022-10-22 12:49:29 +08:00
ezerhouni
9b671e1c21
Add Shallow fusion in modified_beam_search (#630)
* Add utility for shallow fusion

* test batch size == 1 without shallow fusion

* Use shallow fusion for modified-beam-search

* Modified beam search with ngram rescoring

* Fix code according to review

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-10-21 16:44:56 +08:00
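
The shallow fusion added in #630 above boils down to biasing beam-search expansion with an external language model. A hedged sketch of that scoring rule follows; the function name and the default lm_scale are illustrative, not the recipe's actual values.

```python
import torch

def shallow_fusion_score(
    asr_logprobs: torch.Tensor,  # (num_hyps, vocab_size) log-probs from the ASR model
    lm_logprobs: torch.Tensor,   # (num_hyps, vocab_size) log-probs from the external LM
    lm_scale: float = 0.3,       # hypothetical default; tuned per setup
) -> torch.Tensor:
    """Combine scores for beam expansion: log p_asr + lm_scale * log p_lm."""
    return asr_logprobs + lm_scale * lm_logprobs
```
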
Daniel Povey
1825336841 Fix issue with diagnostics if stats is None 2022-10-11 11:05:52 +08:00
Daniel Povey
28e5f46854 Update checkpoint.py to deal with int params 2022-10-07 17:06:38 +08:00
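
One plausible way to "deal with int params" when averaging checkpoints is to average only floating-point tensors and copy integer ones (e.g. counters) verbatim; the sketch below is an assumption about the intent, not the actual checkpoint.py change.

```python
import torch

def average_state_dicts(state_dicts):
    """Average float tensors across checkpoints; copy integer tensors from the
    last checkpoint, since averaging them is meaningless and can fail for
    integer dtypes."""
    n = len(state_dicts)
    avg = {}
    for key, value in state_dicts[-1].items():
        if torch.is_floating_point(value):
            mean = sum(sd[key].to(torch.float64) for sd in state_dicts) / n
            avg[key] = mean.to(value.dtype)
        else:
            avg[key] = value.clone()
    return avg
```
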
Daniel Povey
040592a9e3 Fix eigs call 2022-10-05 16:22:33 +08:00
Daniel Povey
76e66408c5 Some cosmetic improvements 2022-09-27 11:08:44 +08:00
Zengwei Yao
c0101185d7
consider case of empty tensor (#540) 2022-08-22 21:42:56 +08:00
marcoyang1998
c74cec59e9
propagate changes from #525 to other librispeech recipes (#531)
* propagate changes from #525 to other librispeech recipes

* refactor display_and_save_batch to utils

* fixed typo

* reformat code style
2022-08-17 17:18:15 +08:00
Wei Kang
5c17255eec
Sort results to make it more convenient to compare decoding results (#522)
* Sort results to make it more convenient to compare decoding results

* Add cut_id to recognition results

* add cut_id to results for all recipes

* Fix torch.jit.script

* Fix comments

* Minor fixes

* Fix torch.jit.tracing for PyTorch versions before v1.9.0
2022-08-12 07:12:50 +08:00
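
A small sketch of the sorting idea from #522 above: keying recognition results by cut_id and sorting before writing makes result files from different runs directly diffable. The tuple layout and file format here are illustrative.

```python
def store_results(results, path):
    """Write (cut_id, ref_words, hyp_words) triples sorted by cut_id so that
    result files from different decoding runs line up line by line."""
    results = sorted(results, key=lambda x: x[0])  # x[0] is the cut id
    with open(path, "w", encoding="utf-8") as f:
        for cut_id, ref, hyp in results:
            f.write(f"{cut_id}\tref={' '.join(ref)}\thyp={' '.join(hyp)}\n")
```
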
Zengwei Yao
a4dd273776
fix about tensorboard (#516)
* fix metricstracker

* fix style
2022-08-04 19:57:12 +08:00
Fangjun Kuang
6af5a82d8f
Convert ScaledEmbedding to nn.Embedding for inference. (#517)
* Convert ScaledEmbedding to nn.Embedding for inference.

* Fix CI style issues.
2022-08-03 15:34:55 +08:00
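
A hedged sketch of the conversion in #517 above, assuming a ScaledEmbedding that stores a weight matrix plus a learned log-scale; folding the scale into a plain nn.Embedding keeps inference and export code free of custom modules. The attribute names are assumptions.

```python
import torch
import torch.nn as nn

def scaled_embedding_to_embedding(scaled_emb) -> nn.Embedding:
    """Fold the learned scale into a standard nn.Embedding for inference.
    Assumes scaled_emb exposes .weight and a log-scale parameter .scale."""
    weight = scaled_emb.weight * scaled_emb.scale.exp()
    emb = nn.Embedding(num_embeddings=weight.size(0), embedding_dim=weight.size(1))
    emb.weight.data.copy_(weight.detach())
    return emb
```
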
LIyong.Guo
132132f52a
liear_fst_with_self_loops (#512) 2022-08-02 22:28:12 +08:00
Lucky Wong
34b4356bad
correction for getting rank id. (#507)
* Fix no attribute 'data' error.

* minor fixes

* correction for getting rank id.
2022-07-29 11:28:52 +08:00
Daniel Povey
e25ca74955 Use a measure of correlation for eigs that can be negative. 2022-07-26 13:40:57 +08:00
Daniel Povey
b9696878b4 Update diagnostics stats 2022-07-26 12:39:51 +08:00
Zengwei Yao
8203d10be7
Add stats about duration and padding proportion (#485)
* add stats about duration and padding proportion

* add  for utt_duration

* add stats for other recipes

* add stats for other 2 recipes

* modify doc

* minor change
2022-07-25 16:40:43 +08:00
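
The stats added in #485 above can be computed from the batch's feature lengths alone; a sketch follows, with an assumed 10 ms frame shift and illustrative key names.

```python
import torch

def batch_padding_stats(feature_lens: torch.Tensor, frame_shift: float = 0.01):
    """Per-batch stats: mean/max utterance duration (seconds) and the fraction
    of frames that are padding after batching to the longest utterance."""
    max_len = feature_lens.max().item()
    num_utts = feature_lens.numel()
    total_frames = feature_lens.sum().item()
    padding_proportion = 1.0 - total_frames / (num_utts * max_len)
    durations = feature_lens.float() * frame_shift
    return {
        "mean_utt_duration": durations.mean().item(),
        "max_utt_duration": durations.max().item(),
        "padding_proportion": padding_proportion,
    }
```
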
Daniel Povey
a8696b36fc
Merge pull request #483 from yaozengwei/fix_diagnostic
Fix diagnostic
2022-07-18 23:33:45 -07:00
yaozengwei
a35b28cd8d fix for case of None stats 2022-07-19 14:29:23 +08:00
ezerhouni
608473b4eb
Add RNN-LM rescoring in fast beam search (#475) 2022-07-18 16:52:17 +08:00
Daniel Povey
7e88e2a0e9 Increase debug freq; add type to diagnostics and increase precision of mean, rms 2022-07-17 06:40:16 +08:00
Fangjun Kuang
6c69c4e253
Support running icefall outside of a git tracked directory. (#470)
* Support running icefall outside of a git tracked directory.

* Minor fixes.
2022-07-08 15:03:07 +08:00
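
Running outside a git checkout (#470 above) mainly means git queries must not crash the program; a sketch of that fallback behaviour, with an assumed helper name:

```python
import subprocess

def get_git_sha1() -> str:
    """Return the current commit, or a placeholder when the code is used
    outside a git checkout (e.g. installed as a package), instead of crashing."""
    try:
        return (
            subprocess.check_output(
                ["git", "rev-parse", "HEAD"], stderr=subprocess.DEVNULL
            )
            .decode()
            .strip()
        )
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "unknown"
```
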
Fangjun Kuang
e5fdbcd480
Revert changes to setup_logger. (#468) 2022-07-08 09:15:37 +08:00
Mingshuang Luo
2cb1618c95
[Ready to merge] Pruned transducer stateless5 recipe for tal_csasr (mixed Chinese chars and English BPE) (#428)
* add pruned transducer stateless5 recipe for tal_csasr

* do some changes for merging

* change for conformer.py

* add wer and cer for Chinese and English respectively

* fix an error in conformer.py
2022-06-28 11:02:10 +08:00
Wei Kang
6e609c67a2
Using streaming conformer as transducer encoder (#380)
* support streaming in conformer

* Add more documents

* support streaming on pruned_transducer_stateless2; add delay penalty; fixes for decode states

* Minor fixes

* streaming for pruned_transducer_stateless4

* Fix conv cache error, support async streaming decoding

* Fix style

* Fix style

* Fix style

* Add torch.jit.export

* mask the initial cache

* Cutting off invalid frames of encoder_embed output

* fix relative positional encoding in streaming decoding to save computation

* Minor fixes

* Minor fixes

* Minor fixes

* Minor fixes

* Minor fixes

* Fix jit export for torch 1.6

* Minor fixes for streaming decoding

* Minor fixes on decode stream

* move model parameters to train.py

* make states in forward streaming optional

* update pretrain to support streaming model

* update results.md

* update tensorboard and pre-models

* fix typo

* Fix tests

* remove unused arguments

* add streaming decoding ci

* Minor fix

* Minor fix

* disable right context by default
2022-06-28 00:18:54 +08:00
ezerhouni
0475d75d15
[Ready to be merged] Add RNN-LM to Conformer-CTC decoding (#439) 2022-06-23 19:37:03 +08:00
Fangjun Kuang
dc89b61b80
Add fast_beam_search_nbest. (#420)
* Add fast_beam_search_nbest.

* Fix CI errors.

* Fix CI errors.

* More fixes.

* Small fixes.

* Support using log_add in LG decoding with fast_beam_search.

* Support LG decoding in pruned_transducer_stateless

* Support LG for pruned_transducer_stateless2.

* Support LG for fast beam search.

* Minor fixes.
2022-06-22 00:09:25 +08:00
Fangjun Kuang
f1abce72f8
Use jsonl for CutSet in the LibriSpeech recipe. (#397)
* Use jsonl for cutsets in the librispeech recipe.

* Use lazy cutset for all recipes.

* More fixes to use lazy CutSet.

* Remove force=True from logging to support Python < 3.8

* Minor fixes.

* Fix style issues.
2022-06-06 10:19:16 +08:00
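
Switching to jsonl manifests (#397 above) lets lhotse iterate cuts lazily instead of materializing the whole CutSet in memory; a short usage sketch (the manifest path is illustrative):

```python
from lhotse import CutSet, load_manifest_lazy

# Lazily iterate a .jsonl.gz cuts manifest; cuts are materialized on the fly.
cuts: CutSet = load_manifest_lazy("data/fbank/cuts_train.jsonl.gz")
for cut in cuts:
    print(cut.id, cut.duration)
```
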
Daniel Povey
ca09b9798f Remove decomposition code from checkpoint.py; restore double precision model_avg 2022-06-01 14:01:58 +08:00
Daniel Povey
da2ffd4d27 Do average computation in double precision 2022-05-31 14:39:21 +08:00
Daniel Povey
b2259184b5 Use single precision for model average; increase average-period to 200. 2022-05-31 14:31:46 +08:00
Daniel Povey
8d4c987e21 Update checkpoint.py to support decompose argument 2022-05-31 14:25:45 +08:00
Daniel Povey
7011956c6c Merge remote-tracking branch 'upstream/master' into cain3d_clean_merge 2022-05-31 12:17:45 +08:00
LIyong.Guo
c4ee2bc0af
[Ready to merge] stateless6: stateless4 + hubert distillation. (#387)
* a copy of stateless4 as base

* distillation with hubert

* fix typo

* example usage

* usage

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* fix comment

* add results of 100hours

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* check fairseq and quantization

* a short intro to distillation framework

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* add intro of stateless6 in README

* fix type error of dst_manifest_dir

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* make export.py call stateless6/train.py instead of stateless2/train.py

* update results by stateless6

* adjust results format

* fix typo

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-05-28 12:37:50 +08:00
Daniel Povey
8e454bcf9e Exclude size=500 dim from projection; try to use double for model average 2022-05-26 15:15:27 +08:00
Mingshuang Luo
ec5a112831
[Ready to merge] Do some coding style checks for the latest files (#379)
* style check

* do changes for .flake8

* a change for compute_fbank_yesno.py
2022-05-20 19:30:38 +08:00
Daniel Povey
5230e73e41 Small fixes 2022-05-19 12:49:00 +08:00
Daniel Povey
c0fdfabaf3 Remove memory-limit options arg 2022-05-19 11:30:56 +08:00
Daniel Povey
c2c46ea023 Update diagnostics, hopefully print more stats.
# Conflicts:
#	egs/librispeech/ASR/pruned_transducer_stateless4b/train.py
2022-05-19 11:29:31 +08:00
Fangjun Kuang
cd460f7bf1
Stringify torch.__version__ before serializing it. (#354) 2022-05-07 17:18:34 +08:00
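
For context on #354 above: in recent PyTorch releases torch.__version__ is a TorchVersion object rather than a plain str, so converting it before it is pickled into a checkpoint avoids version-dependent surprises when the checkpoint is loaded elsewhere. A sketch (the key name is illustrative):

```python
import torch

checkpoint = {"torch-version": str(torch.__version__)}
torch.save(checkpoint, "info.pt")
```
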
Zengwei Yao
20f092e709
Support decoding with averaged model when using --iter (#353)
* support decoding with averaged model when using --iter

* minor fix

* minor fix of copyright date
2022-05-07 13:09:11 +08:00
Zengwei Yao
c059ef3169
Keep model_avg on cpu (#348)
* keep model_avg on cpu

* explicitly convert model_avg to cpu

* minor fix

* remove device conversion for model_avg

* modify usage of the model device in train.py

* change model.device to next(model.parameters()).device for decoding

* assert params.start_epoch>0

* assert params.start_epoch>0, params.start_epoch
2022-05-07 10:42:34 +08:00
Zengwei Yao
00c48ec1f3
Model average (#344)
* First upload of model average codes.

* minor fix

* update decode file

* update .flake8

* rename pruned_transducer_stateless3 to pruned_transducer_stateless4

* change epoch number counter starting from 1 instead of 0

* minor fix of pruned_transducer_stateless4/train.py

* refactor the checkpoint.py

* minor fix, update docs, and modify the epoch number to count from 1 in the pruned_transducer_stateless4/decode.py

* update author info

* add docs of the scaling in function average_checkpoints_with_averaged_model
2022-05-05 21:20:04 +08:00
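
The averaging scheme in #344 above maintains a running average of the parameters during training; below is a hedged sketch of how two such running averages can be combined to average over a span of batches (this is a reading of the idea behind average_checkpoints_with_averaged_model, not its exact code):

```python
def average_between_checkpoints(avg_start, avg_end, batch_start, batch_end):
    """Given running parameter averages saved at two points in training
    (avg_k = mean of params over the first batch_k batches), recover the mean
    over batches (batch_start, batch_end] as
        (avg_end * batch_end - avg_start * batch_start) / (batch_end - batch_start).
    avg_start/avg_end are dicts of tensors keyed by parameter name."""
    span = batch_end - batch_start
    return {
        k: (avg_end[k] * batch_end - avg_start[k] * batch_start) / span
        for k in avg_end
    }
```
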
Fangjun Kuang
9aeea3e1af
Support averaging models with weight tying. (#333) 2022-04-26 13:32:03 +08:00
Wang, Guanbo
5fe58de43c
GigaSpeech recipe (#120)
* initial commit

* support download, data prep, and fbank

* on-the-fly feature extraction by default

* support BPE based lang

* support HLG for BPE

* small fix

* small fix

* chunked feature extraction by default

* Compute features for GigaSpeech by splitting the manifest.

* Fixes after review.

* Split manifests into 2000 pieces.

* set audio duration mismatch tolerance to 0.01

* small fix

* add conformer training recipe

* Add conformer.py without pre-commit checking

* lazy loading and use SingleCutSampler

* DynamicBucketingSampler

* use KaldifeatFbank to compute fbank for musan

* use pretrained language model and lexicon

* use 3gram to decode, 4gram to rescore

* Add decode.py

* Update .flake8

* Delete compute_fbank_gigaspeech.py

* Use BucketingSampler for valid and test dataloader

* Update params in train.py

* Use bpe_500

* update params in decode.py

* Decrease num_paths on CUDA OOM

* Added README

* Update RESULTS

* black

* Decrease num_paths on CUDA OOM

* Decode with post-processing

* Update results

* Remove lazy_load option

* Use default `storage_type`

* Keep the original tolerance

* Use split-lazy

* black

* Update pretrained model

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-04-14 16:07:22 +08:00
Guo Liyong
78418ac37c fix comments 2022-04-13 13:09:24 +08:00