1325 Commits

Author SHA1 Message Date
Daniel Povey
2f73434541 Reduce debug frequency 2022-07-10 06:44:50 +08:00
Daniel Povey
b3bb2dac6f Iterative, more principled way of estimating param_cov 2022-07-10 06:28:01 +08:00
Daniel Povey
d139c18f22 Max eig of Q limited to 5 times the mean 2022-07-09 14:30:03 +08:00
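A minimal sketch of what capping Q's largest eigenvalue at 5 times the mean could look like; the function name and the eigendecomposition-based reconstruction are assumptions, not the optimizer's actual code.

```python
import torch

def limit_max_eig(Q: torch.Tensor, max_ratio: float = 5.0) -> torch.Tensor:
    # Hypothetical sketch: clamp the eigenvalues of a symmetric matrix Q so
    # that none exceeds `max_ratio` times the mean eigenvalue.
    eigs, U = torch.linalg.eigh(Q)                     # Q = U diag(eigs) U^T
    eigs = torch.minimum(eigs, max_ratio * eigs.mean())  # cap the largest eigenvalues
    return (U * eigs) @ U.t()                          # rebuild Q with the capped spectrum
```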
Daniel Povey
ffeef4ede4 Remove rank-1 dims (i.e., those where size == numel()) from processing. 2022-07-09 13:36:48 +08:00
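For context, a tiny illustration of the size == numel() check mentioned above; the helper name is assumed, and skipping trivial size-1 dims as well is an extra assumption of this sketch.

```python
import torch

def dims_worth_processing(p: torch.Tensor) -> list:
    # Skip any dim whose size equals p.numel(): every other dim must then
    # have size 1, so there is nothing to decorrelate across.
    return [d for d, size in enumerate(p.shape)
            if size != p.numel() and size != 1]

print(dims_worth_processing(torch.zeros(512, 1)))    # []  -- dim 0 is "rank-1"
print(dims_worth_processing(torch.zeros(256, 384)))  # [0, 1]
```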
Daniel Povey
2fc9eb9789 Respect param_pow 2022-07-09 12:49:04 +08:00
Daniel Povey
209acaf6e4 Increase lr_update_period to 200. The update takes about 2 minutes for the entire model. 2022-07-09 11:36:54 +08:00
Daniel Povey
61cab3ab65 introduce grad_cov_period 2022-07-09 10:29:23 +08:00
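A sketch of what a grad_cov_period setting might control, with all names hypothetical: gradient-covariance statistics are refreshed only every grad_cov_period steps to keep the per-step cost down.

```python
import torch

class GradCovTracker:
    # Toy illustration, not the optimizer's actual code.
    def __init__(self, dim: int, grad_cov_period: int = 3, beta: float = 0.98):
        self.grad_cov = torch.zeros(dim, dim)
        self.grad_cov_period = grad_cov_period
        self.beta = beta
        self.step = 0

    def update(self, grad: torch.Tensor) -> None:
        if self.step % self.grad_cov_period == 0:
            # Exponential moving average of the outer product grad grad^T.
            self.grad_cov.mul_(self.beta).add_(
                torch.outer(grad, grad), alpha=1 - self.beta)
        self.step += 1
```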
Daniel Povey
35a51bc153 Reduce debug probs 2022-07-09 10:22:19 +08:00
Daniel Povey
65bc964854 Fix bug for scalar update 2022-07-09 10:14:20 +08:00
Daniel Povey
aa2237a793 Bug fix 2022-07-09 10:11:54 +08:00
Daniel Povey
50ee414486 Fix train.py for new optimizer 2022-07-09 10:09:53 +08:00
Daniel Povey
6810849058 Implement new version of learning method. Does more complete diagonalization of grads than the previous methods. 2022-07-09 10:02:17 +08:00
Daniel Povey
a9edecd32c Confirmed that symmetrizing helps because of its interaction with the regular update; meta_lr_scale=0 is still best :-( 2022-07-09 05:20:04 +08:00
Fangjun Kuang
6c69c4e253
Support running icefall outside of a git tracked directory. (#470)
* Support running icefall outside of a git tracked directory.

* Minor fixes.
2022-07-08 15:03:07 +08:00
Daniel Povey
52bfb2b018 This works better for reasons I don't understand; transposing is enough, same as symmetrizing. 2022-07-08 11:53:59 +08:00
Daniel Povey
e9ab1ddd39 Inconsequential config change 2022-07-08 11:03:16 +08:00
Daniel Povey
be6680e3ba A couple of configuration changes and comment simplification 2022-07-08 09:46:42 +08:00
Fangjun Kuang
e5fdbcd480
Revert changes to setup_logger. (#468) 2022-07-08 09:15:37 +08:00
Daniel Povey
75e872ea57 Fix bug in getting denom in proj update 2022-07-08 09:13:54 +08:00
Daniel Povey
914ac1e621 Works better with meta_lr_scale=0; must be a bug. 2022-07-08 09:07:06 +08:00
Daniel Povey
923468b8af Deal with SVD failure better. 2022-07-08 09:00:12 +08:00
Daniel Povey
97feb8a3ec Reduce meta_lr_scale; this reduces the loss @140 from 1.4 to 0.39 2022-07-08 06:33:07 +08:00
Daniel Povey
b6199a71e9 Introduce delta_scale to slow down changes to M; significantly better. 2022-07-08 06:05:31 +08:00
Daniel Povey
ceb9815f2b Increase lr_est_period 2022-07-08 05:51:18 +08:00
Daniel Povey
fb36712e6b Another bug fix, regarding Q being transposed. 2022-07-08 05:22:24 +08:00
Daniel Povey
ad2e698fc3 Cleanups 2022-07-08 04:44:21 +08:00
Daniel Povey
04d2e10b4f Version that runs 2022-07-08 04:37:46 +08:00
Fangjun Kuang
8761452a2c
Add multi_quantization to requirements.txt (#464)
* Add multi_quantization to requirements.txt
2022-07-07 14:36:08 +08:00
Daniel Povey
e6d00ee3e4 More drafts of new method, not tested. 2022-07-06 23:05:06 -07:00
Daniel Povey
26815d177f Draft of the new method. 2022-07-06 22:59:36 -07:00
Daniel Povey
e9e2a85c95 In the middle of reworking for new idea 2022-07-06 13:35:19 -07:00
Daniel Povey
41368f6b63 Change comment 2022-07-05 17:11:45 -07:00
Mingshuang Luo
8e0b7ea518
move splitting cuts before computing features (#461) 2022-07-04 11:59:37 +08:00
Mingshuang Luo
10e8bc5b56
do a change (#460) 2022-07-03 19:35:01 +08:00
Daniel Povey
2692d5f903 Closer to finished 2022-06-30 23:54:26 -07:00
Tiance Wang
ac9fe5342b
Fix TIMIT lexicon generation bug (#456) 2022-06-30 19:13:46 +08:00
Daniel Povey
d64cb1cb48 draft, not working, will edit locally 2022-06-30 15:35:26 +08:00
Zengwei Yao
d80f29e662
Modification about random combine (#452)
* comment out some lines, random combine from 1/3 of layers, on linear layers in combiner

* delete commented lines

* minor change
2022-06-30 12:23:49 +08:00
Mingshuang Luo
c10aec5656
load_manifest_lazy for asr_datamodule.py (#453) 2022-06-29 17:45:30 +08:00
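Roughly what the switch looks like on the datamodule side, assuming lhotse's load_manifest_lazy and a placeholder manifest path: the lazy variant streams cuts from the on-disk JSONL manifest instead of loading everything into memory.

```python
from lhotse import load_manifest_lazy

# Eager loading reads the whole manifest into memory:
#   cuts_train = load_manifest("data/fbank/cuts_train.jsonl.gz")
# Lazy loading iterates the manifest from disk on demand instead;
# the path here is just a placeholder, not the recipe's actual file.
cuts_train = load_manifest_lazy("data/fbank/cuts_train.jsonl.gz")
for cut in cuts_train:
    ...  # cuts are deserialized one at a time
```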
Mingshuang Luo
29e407fd04
Code checks for pruned rnnt2 wenetspeech (#451)
* code check

* jq install
2022-06-28 18:57:53 +08:00
Mingshuang Luo
bfa8264697
code check (#450) 2022-06-28 17:32:20 +08:00
Mingshuang Luo
2cb1618c95
[Ready to merge] Pruned transducer stateless5 recipe for tal_csasr (mixing Chinese chars and English BPE) (#428)
* add pruned transducer stateless5 recipe for tal_csasr

* make some changes for merging

* change for conformer.py

* add WER and CER for Chinese and English, respectively

* fix an error in conformer.py
2022-06-28 11:02:10 +08:00
Wei Kang
6e609c67a2
Using streaming conformer as transducer encoder (#380)
* support streaming in conformer

* Add more documents

* support streaming on pruned_transducer_stateless2; add delay penalty; fixes for decode states

* Minor fixes

* streaming for pruned_transducer_stateless4

* Fix conv cache error, support async streaming decoding

* Fix style

* Fix style

* Fix style

* Add torch.jit.export

* mask the initial cache

* Cutting off invalid frames of encoder_embed output

* fix relative positional encoding in streaming decoding to save computation

* Minor fixes

* Minor fixes

* Minor fixes

* Minor fixes

* Minor fixes

* Fix jit export for torch 1.6

* Minor fixes for streaming decoding

* Minor fixes on decode stream

* move model parameters to train.py

* make states in forward streaming optional

* update pretrain to support streaming model

* update results.md

* update tensorboard and pre-models

* fix typo

* Fix tests

* remove unused arguments

* add streaming decoding ci

* Minor fix

* Minor fix

* disable right context by default
2022-06-28 00:18:54 +08:00
Daniel Povey
0b811546f3 Apply reverse_cutoff with param_pow, so that it is not too strong. 2022-06-26 11:07:08 +08:00
Daniel Povey
0aa5a334d6 Fix to the reverse_cutoff formula 2022-06-25 18:24:05 +08:00
Daniel Povey
8a0277d493 Increase param_reverse_cutoff from 4 to 16 2022-06-25 18:02:00 +08:00
Daniel Povey
2ccc7ccbeb Make the application of param_reverse_cutoff conditional; it is not applied when rank is a problem 2022-06-25 17:56:04 +08:00
Jun Wang
d792bdc9bc
fix typo (#445) 2022-06-25 11:00:53 +08:00
Daniel Povey
146d7c5a93 Bug fix 2022-06-24 19:49:13 +08:00
Tiance Wang
c0ea334738
fix bug when concatenating a list to a tuple (#444) 2022-06-24 19:31:09 +08:00
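The class of error behind #444 (the actual offending line is not shown in this log): Python refuses to concatenate a list with a tuple unless one operand is converted first.

```python
nums_list = [1, 2]
nums_tuple = (3, 4)

# nums_list + nums_tuple             # TypeError: can only concatenate list (not "tuple") to list
fixed = nums_list + list(nums_tuple)  # [1, 2, 3, 4]
```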