44 Commits

SHA1        Author        Date                        Message
ddd8f9e15e  pkufool       2022-04-11 20:58:43 +08:00  Minor fixes
cc0d4ffa4f  pkufool       2022-04-11 20:58:02 +08:00  Add mixed precision support
7012fd65b5  Wei Kang      2022-04-11 16:49:54 +08:00  Support mixed precision training on the reworked model (#305)
                                                      * Add mixed precision support
                                                      * Minor fixes
                                                      * Minor fixes
                                                      * Minor fixes
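The mixed-precision commits above add fp16 training to the recipe. A minimal sketch of the usual PyTorch pattern is below; the tiny model, data, and hyperparameters are hypothetical stand-ins, not the actual icefall training loop.

```python
import torch
import torch.nn as nn

# autocast only does real fp16 work on a GPU; on CPU this degrades
# gracefully to plain fp32 so the sketch still runs anywhere.
use_fp16 = torch.cuda.is_available()
device = "cuda" if use_fp16 else "cpu"

model = nn.Linear(8, 4).to(device)            # stand-in for the transducer model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=use_fp16)

x = torch.randn(2, 8, device=device)
y = torch.randn(2, 4, device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=use_fp16):
    loss = nn.functional.mse_loss(model(x), y)
scaler.scale(loss).backward()   # scale loss to avoid fp16 gradient underflow
scaler.step(optimizer)          # unscales grads; skips the step on inf/nan
scaler.update()                 # adapt the scale factor for the next step
```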
5078332088  Daniel Povey  2022-04-11 14:58:15 +08:00  Fix adding learning rate to tensorboard
46d52dda10  Daniel Povey  2022-04-11 12:03:41 +08:00  Fix dir names
962cf868c9  Daniel Povey  2022-04-10 15:31:46 +08:00  Fix import
d1e4ae788d  Daniel Povey  2022-04-10 15:25:27 +08:00  Refactor how learning rate is set.
82d58629ea  Daniel Povey  2022-04-10 13:50:31 +08:00  Implement 2p version of learning rate schedule.
da50525ca5  Daniel Povey  2022-04-10 13:25:40 +08:00  Make lrate rule more symmetric
4d41ee0caa  Daniel Povey  2022-04-09 18:37:03 +08:00  Implement 2o schedule
db72aee1f0  Daniel Povey  2022-04-09 18:15:56 +08:00  Set 2n rule..
0f8ee68af2  Daniel Povey  2022-04-08 16:53:42 +08:00  Fix bug
f587cd527d  Daniel Povey  2022-04-08 16:24:21 +08:00  Change exponential part of lrate to be epoch based
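The commits above iterate on a learning-rate rule (the "2n"/"2o"/"2p" variants) whose exponential part is driven by the epoch count rather than the step count. The exact rules are not reproduced here; this is a generic sketch of a step-based warmup combined with epoch-based exponential decay, with all constants (warmup length, decay rate, base LR) chosen for illustration only.

```python
import math

def lr_factor(step, epoch, warmup_steps=3000, decay_epochs=6.0):
    """Hypothetical LR rule: linear warmup measured in steps, then
    exponential decay measured in epochs (constants are illustrative)."""
    warmup = min(1.0, step / warmup_steps)   # batch-count-based warmup
    decay = math.exp(-epoch / decay_epochs)  # epoch-count-based decay
    return warmup * decay

base_lr = 0.003
lr = base_lr * lr_factor(step=1500, epoch=0)  # mid-warmup: half of base_lr
```

Driving the decay by epochs makes the schedule independent of batch size and dataset length, which matters when those change between experiments.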
6ee32cf7af  Daniel Povey  2022-04-08 16:10:06 +08:00  Set new scheduler
a41e93437c  Daniel Povey  2022-04-06 12:36:58 +08:00  Change some defaults in LR-setting rule.
d1a669162c  Daniel Povey  2022-04-05 13:31:52 +08:00  Fix bug in lambda
ed8eba91e1  Daniel Povey  2022-04-05 13:24:09 +08:00  Reduce model_warm_step from 4k to 3k
c3169222ae  Daniel Povey  2022-04-05 13:23:02 +08:00  Simplified optimizer, reworked some things.
0f5957394b  Daniel Povey  2022-04-05 12:58:43 +08:00  Fix reading scheduler from optim
1548cc7462  Daniel Povey  2022-04-05 11:19:40 +08:00  Fix checkpoint-writing
234366e51c  Daniel Povey  2022-04-05 00:18:36 +08:00  Fix type of parameter
d1f2f93460  Daniel Povey  2022-04-04 22:40:18 +08:00  Some fixes..
72f4a673b1  Daniel Povey  2022-04-04 20:21:34 +08:00  First draft of new approach to learning rates + init
4929e4cf32  Daniel Povey  2022-04-04 17:09:25 +08:00  Change how warm-step is set
34500afc43  Daniel Povey  2022-04-02 20:06:43 +08:00  Various bug fixes
8be10d3d6c  Daniel Povey  2022-04-02 20:03:21 +08:00  First draft of model rework
eec597fdd5  Daniel Povey  2022-04-02 18:45:20 +08:00  Merge changes from master
709c387ce6  Daniel Povey  2022-03-30 21:40:22 +08:00  Initial refactoring to remove unnecessary vocab_size
4e453a4bf9  Daniel Povey  2022-03-29 23:41:13 +08:00  Rework conformer, remove some code.
11124b03ea  Daniel Povey  2022-03-29 20:32:14 +08:00  Refactoring and simplifying conformer and frontend
262388134d  Daniel Povey  2022-03-27 11:18:16 +08:00  Increase model_warm_step to 4k
d2ed3dfc90  Daniel Povey  2022-03-25 20:35:11 +08:00  Fix bug
4b650e9f01  Daniel Povey  2022-03-25 20:34:33 +08:00  Make warmup work by scaling layer contributions; leave residual layer-drop
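Commit 4b650e9f01 makes warmup work by scaling each layer's contribution rather than by other means. A minimal sketch of that idea, with a hypothetical helper (not the recipe's actual code), is:

```python
def warmup_combine(residual, layer_out, warmup):
    """Blend a layer's output into the residual stream. Early in training
    (warmup near 0) the layer contributes almost nothing, so the network
    behaves close to an identity map; once warmup reaches 1.0 the full
    contribution is used. Illustrative sketch only."""
    alpha = min(1.0, max(0.0, warmup))  # clamp to [0, 1]
    return residual + alpha * layer_out
```

Ramping the layer contribution up from zero keeps early gradients small and stable without needing a separate residual-dropout trick during warmup.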
aab72bc2a5  Daniel Povey  2022-03-24 13:10:54 +08:00  Add changes from master to decode.py, train.py
9a8aa1f54a  Daniel Povey  2022-03-22 15:36:20 +08:00  Change how warmup works.
4004ca81d8  Daniel Povey  2022-03-22 13:32:24 +08:00  Increase warm_step (and valid_interval)
b82a505dfc  Daniel Povey  2022-03-22 12:30:48 +08:00  Reduce initial pruned_loss scale from 0.01 to 0.0
ccbf8ba086  Daniel Povey  2022-03-21 21:12:43 +08:00  Incorporate changes from master into pruned_transducer_stateless2.
0ee2404ff0  Daniel Povey  2022-03-19 14:01:45 +08:00  Remove logging code that broke with newer Lhotse; fix bug with pruned_loss
2dfcd8f117  Daniel Povey  2022-03-18 16:38:36 +08:00  Double warm_step
cbe6b175d1  Daniel Povey  2022-03-17 16:46:59 +08:00  Reduce warmup scale on pruned loss from 0.1 to 0.01.
acc0eda5b0  Daniel Povey  2022-03-17 16:09:35 +08:00  Scale down pruned loss in warmup mode
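Several commits above tune how much the pruned transducer loss counts during warmup (initial scale 0.1 → 0.01 → 0.0). A sketch of such a piecewise schedule is below; the breakpoints and the `simple_scale` weight are assumptions mirroring numbers in the commit messages, not the recipe's exact rule.

```python
def pruned_loss_scale(warmup):
    """Illustrative schedule: ignore the pruned loss before the model
    warm-up point, weight it lightly just after, then use it fully.
    The 0.0 / 0.1 / 1.0 breakpoints are assumptions."""
    if warmup < 1.0:
        return 0.0
    elif warmup < 2.0:
        return 0.1
    return 1.0

def total_loss(simple_loss, pruned_loss, warmup, simple_scale=0.5):
    # Combine the always-on simple loss with the warmup-gated pruned loss.
    return simple_scale * simple_loss + pruned_loss_scale(warmup) * pruned_loss
```

Gating the pruned loss this way lets the encoder stabilize on the simpler loss before the harder pruned objective starts driving the gradients.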
13db33ffa2  Daniel Povey  2022-03-17 15:53:53 +08:00  Fix diagnostics-getting code
11bea4513e  Daniel Povey  2022-03-17 11:17:52 +08:00  Add remaining files in pruned_transducer_stateless2