532 Commits

Author SHA1 Message Date
yaozengwei
1b0d2f3592 modify .flake8 2022-07-17 21:22:00 +08:00
yaozengwei
c71788ee57 remove add_model_arguments 2022-07-17 21:20:39 +08:00
yaozengwei
7c00f92abb fix style 2022-07-17 21:17:45 +08:00
yaozengwei
872d2390d2 link __init__.py 2022-07-17 20:39:46 +08:00
yaozengwei
ce2d817114 pruned2 -> pruned4 2022-07-17 20:36:20 +08:00
yaozengwei
125eac8dee flatten weights after scaling 2022-07-17 20:35:29 +08:00
yaozengwei
539a9d75d4 simplified code in ScaledLSTM 2022-07-17 17:07:14 +08:00
yaozengwei
5c669b7716 modify streaming decoding files 2022-07-17 16:09:24 +08:00
yaozengwei
822cc78a9c copy streaming decoding files from pruned_transducer_stateless2 2022-07-17 15:47:43 +08:00
yaozengwei
4a0dea2aa2 modify decode.py pretrained.py test_model.py train.py 2022-07-17 15:38:53 +08:00
yaozengwei
b1be6ea475 copy from pruned_transducer_stateless2 2022-07-17 15:37:27 +08:00
yaozengwei
89bfb6b9c7 link scaling.py from pruned_transducer_stateless2 2022-07-17 15:35:59 +08:00
yaozengwei
d16b9ec15f link from pruned_transducer_stateless2 2022-07-17 15:32:54 +08:00
yaozengwei
074bd7da71 hardcode bidirectional=False 2022-07-17 15:31:25 +08:00
yaozengwei
2d53f2ef8b add RNN and Conv2dSubsampling classes in lstm.py 2022-07-17 12:59:27 +08:00
yaozengwei
7c9fcfa5c9 add RNNEncoderLayer and RNNEncoder classes in lstm.py 2022-07-16 22:50:42 +08:00
yaozengwei
9165de5f57 add ScaledLSTM 2022-07-16 22:47:05 +08:00
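
The ScaledLSTM added here presumably follows the same weight-scaling pattern as the other Scaled* modules in icefall's scaling.py, where each weight tensor is paired with a learnable log-scale applied at forward time. A minimal sketch of that pattern (shown for a linear layer rather than the actual LSTM code, and not taken from the recipe):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaledLinearSketch(nn.Module):
    """Illustrative only: a linear layer whose weight and bias carry learnable scales."""

    def __init__(self, in_features: int, out_features: int, initial_scale: float = 1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Scales are stored in log space and applied with exp() in forward,
        # so the effective scale stays positive.
        log_scale = torch.tensor(float(initial_scale)).log()
        self.weight_scale = nn.Parameter(log_scale.clone())
        self.bias_scale = nn.Parameter(log_scale.clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(
            x,
            self.weight * self.weight_scale.exp(),
            self.bias * self.bias_scale.exp(),
        )
```

A ScaledLSTM would apply the same treatment to the LSTM's weight_ih/weight_hh matrices and biases.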
Zengwei Yao
0fcdd15fec
Merge branch 'k2-fsa:master' into master 2022-07-12 15:39:04 +08:00
Zengwei Yao
ce26495238
Rand combine update result (#467)
* update RESULTS.md

* fix test code in pruned_transducer_stateless5/conformer.py

* minor fix

* delete doc

* fix style
2022-07-11 18:13:31 +08:00
Fangjun Kuang
6c69c4e253
Support running icefall outside of a git tracked directory. (#470)
* Support running icefall outside of a git tracked directory.

* Minor fixes.
2022-07-08 15:03:07 +08:00
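
Running outside a git-tracked directory means the git metadata that icefall records at startup (commit SHA, branch, date) may be unavailable, so it has to degrade gracefully. A hedged sketch of that kind of fallback (the function name is hypothetical, not the icefall utility):

```python
import subprocess
from typing import Optional


def get_git_sha_or_none() -> Optional[str]:
    """Return the current commit SHA, or None when not inside a git repository."""
    try:
        return (
            subprocess.check_output(
                ["git", "rev-parse", "HEAD"], stderr=subprocess.DEVNULL
            )
            .decode()
            .strip()
        )
    except (subprocess.CalledProcessError, FileNotFoundError):
        # Not a git checkout (or git is not installed): skip the metadata.
        return None
```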
Fangjun Kuang
e5fdbcd480
Revert changes to setup_logger. (#468) 2022-07-08 09:15:37 +08:00
Fangjun Kuang
8761452a2c
Add multi_quantization to requirements.txt (#464)
* Add multi_quantization to requirements.txt
2022-07-07 14:36:08 +08:00
Mingshuang Luo
8e0b7ea518
Move split cuts before computing features (#461) 2022-07-04 11:59:37 +08:00
Mingshuang Luo
10e8bc5b56
do a change (#460) 2022-07-03 19:35:01 +08:00
Tiance Wang
ac9fe5342b
Fix TIMIT lexicon generation bug (#456) 2022-06-30 19:13:46 +08:00
Zengwei Yao
d80f29e662
Modification about random combine (#452)
* comment some lines, random combine from 1/3 layers, on linear layers in combiner

* delete commented lines

* minor change
2022-06-30 12:23:49 +08:00
Mingshuang Luo
c10aec5656
load_manifest_lazy for asr_datamodule.py (#453) 2022-06-29 17:45:30 +08:00
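
Switching to load_manifest_lazy presumably lets asr_datamodule.py stream cut manifests from disk instead of deserializing them fully into memory. A minimal sketch assuming lhotse's API (the manifest filename is hypothetical):

```python
from pathlib import Path

from lhotse import CutSet, load_manifest_lazy


def train_cuts(manifest_dir: Path) -> CutSet:
    # With load_manifest_lazy, cuts are read on demand while iterating the
    # JSONL manifest rather than being loaded into memory up front.
    return load_manifest_lazy(manifest_dir / "cuts_train.jsonl.gz")
```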
Mingshuang Luo
29e407fd04
Code checks for pruned rnnt2 wenetspeech (#451)
* code check

* jq install
2022-06-28 18:57:53 +08:00
Mingshuang Luo
bfa8264697
code check (#450) 2022-06-28 17:32:20 +08:00
Mingshuang Luo
2cb1618c95
[Ready to merge] Pruned transducer stateless5 recipe for tal_csasr (mixed Chinese characters and English BPE) (#428)
* add pruned transducer stateless5 recipe for tal_csasr

* do some changes for merging

* change for conformer.py

* add WER and CER for Chinese and English respectively

* fix an error in conformer.py
2022-06-28 11:02:10 +08:00
Wei Kang
6e609c67a2
Using streaming conformer as transducer encoder (#380)
* support streaming in conformer

* Add more documents

* support streaming on pruned_transducer_stateless2; add delay penalty; fixes for decode states

* Minor fixes

* streaming for pruned_transducer_stateless4

* Fix conv cache error, support async streaming decoding

* Fix style

* Fix style

* Fix style

* Add torch.jit.export

* mask the initial cache

* Cutting off invalid frames of encoder_embed output

* fix relative positional encoding in streaming decoding to save computation

* Minor fixes

* Minor fixes

* Minor fixes

* Minor fixes

* Minor fixes

* Fix jit export for torch 1.6

* Minor fixes for streaming decoding

* Minor fixes on decode stream

* move model parameters to train.py

* make states in forward streaming optional

* update pretrain to support streaming model

* update results.md

* update tensorboard and pre-models

* fix typo

* Fix tests

* remove unused arguments

* add streaming decoding ci

* Minor fix

* Minor fix

* disable right context by default
2022-06-28 00:18:54 +08:00
Jun Wang
d792bdc9bc
fix typo (#445) 2022-06-25 11:00:53 +08:00
Tiance Wang
c0ea334738
fix bug of concatenating list to tuple (#444) 2022-06-24 19:31:09 +08:00
Mingshuang Luo
c391bfd100
fix errors for soft links (#443) 2022-06-24 10:40:46 +08:00
ezerhouni
0475d75d15
[Ready to be merged] Add RNN-LM to Conformer-CTC decoding (#439) 2022-06-23 19:37:03 +08:00
Fangjun Kuang
dc89b61b80
Add fast_beam_search_nbest. (#420)
* Add fast_beam_search_nbest.

* Fix CI errors.

* Fix CI errors.

* More fixes.

* Small fixes.

* Support using log_add in LG decoding with fast_beam_search.

* Support LG decoding in pruned_transducer_stateless

* Support LG for pruned_transducer_stateless2.

* Support LG for fast beam search.

* Minor fixes.
2022-06-22 00:09:25 +08:00
Fangjun Kuang
7100c33820
Add pruned RNN-T for aishell. (#436)
* Add pruned RNN-T for aishell.

* support torch script.

* Update CI.

* Minor fixes.

* Add links to sherpa.
2022-06-21 21:17:22 +08:00
Zengwei Yao
d3daeaf5cd
Upload extracted codebook indexes (#429)
* save only vq-related info to manifest

* support to join manifest files

* support using extracted codebook indexes

* fix doc

* minor fix

* add enable-distillation argument option, fix minor typos

* fix style

* fix typo
2022-06-21 19:16:59 +08:00
2xwwx2
91b2765cfd
Fix spelling mistake (#438) 2022-06-20 16:41:04 +08:00
Mingshuang Luo
998091ef52
do some changes for export.py (#437) 2022-06-20 14:57:08 +08:00
Zengwei Yao
a42d96dfe0
Fix warmup (#435)
* fix warmup when scan_pessimistic_batches_for_oom

* delete comments
2022-06-20 13:40:01 +08:00
yaozengwei
74c14f5f5d Merge remote-tracking branch 'k2-fsa/master' 2022-06-18 17:48:51 +08:00
Fangjun Kuang
ab788980c9
Fix an error introduced by supporting torchscript for torch 1.6.0 (#434) 2022-06-18 08:57:20 +08:00
Fangjun Kuang
d53f69108f
Support torch 1.6.0 (#433) 2022-06-17 22:24:47 +08:00
Wei Kang
5379c8e9fa
Disable drop_last in testing time (#427) 2022-06-16 15:43:48 +08:00
Mingshuang Luo
5c3ee8bfcd
[Ready to merge] Pruned transducer stateless5 recipe for AISHELL4 (#399)
* pruned-transducer-stateless5 recipe for aishell4

* pruned-transducer-stateless5 recipe for aishell4

* do some changes and text normalization

* do some changes

* add text normalization

* combine the training data and decode without webdataset

* update codes for merging

* Update README.md
2022-06-14 22:19:05 +08:00
yaozengwei
ec8646d0cd Merge remote-tracking branch 'k2-fsa/master' 2022-06-13 20:55:28 +08:00
Zengwei Yao
53f38c01d2
Emformer with conv module and scaling mechanism (#389)
* copy files from existing branch

* add rule in .flake8

* minor style fix

* fix typos

* add tail padding

* refactor, use fixed-length cache for batch decoding

* copy from streaming branch

* copy from streaming branch

* modify emformer states stack and unstack, streaming decoding, to be continued

* refactor Stream class

* rename streaming_feature_extractor.py

* refactor streaming decoding

* test states stack and unstack

* fix bugs, no_grad, and num_processed_frames

* add modified_beam_search, fast_beam_search

* support torch.jit.export

* use torch.div

* copy from pruned_transducer_stateless4

* modify export.py

* add author info

* delete other test functions

* minor fix

* modify doc

* fix style

* minor fix doc

* minor fix

* minor fix doc

* update RESULTS.md

* fix typo

* add info

* fix typo

* fix doc

* add test function for conv module, and minor fix.

* add copyright info

* minor change of test_emformer.py

* fix doc of stack and unstack, test case with batch_size=1

* update README.md
2022-06-13 15:09:17 +08:00
yaozengwei
2a5a70e03e Merge remote-tracking branch 'k2-fsa/master' 2022-06-13 12:52:28 +08:00
Fangjun Kuang
9f6c748b30
Add links to sherpa. (#417)
* Add links to sherpa.
2022-06-10 12:19:18 +08:00