1376 Commits

Author SHA1 Message Date
Daniel Povey
466176eeff Use penalize_abs_values_gt, not ActivationBalancer. 2022-10-22 20:18:15 +08:00
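The commit above swaps the ActivationBalancer for penalize_abs_values_gt, a direct penalty on activations whose absolute value exceeds a limit. A minimal sketch of how such a function can work, assuming an identity forward pass with the penalty injected into the backward pass (the limit/penalty defaults here are illustrative assumptions, not the repo's actual values):

```python
import torch

class PenalizeAbsValuesGt(torch.autograd.Function):
    """Identity in the forward pass; the backward pass adds an extra
    gradient term that pushes elements with |x| > limit back toward
    the interval [-limit, limit]."""

    @staticmethod
    def forward(ctx, x: torch.Tensor, limit: float, penalty: float):
        ctx.save_for_backward(x)
        ctx.limit = limit
        ctx.penalty = penalty
        return x

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        (x,) = ctx.saved_tensors
        # Nonzero only where the activation exceeds the limit; sign(x)
        # makes gradient descent move the offending values toward zero.
        mask = (x.abs() > ctx.limit).to(grad_output.dtype)
        return grad_output + ctx.penalty * x.sign() * mask, None, None


def penalize_abs_values_gt(x, limit=10.0, penalty=1.0e-04):
    return PenalizeAbsValuesGt.apply(x, limit, penalty)
```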
Daniel Povey
7a55cac346 Increase max_factor in final lm_balancer and am_balancer 2022-10-22 20:02:54 +08:00
Daniel Povey
8b3bba9b54 Reduce max_abs on am_balancer 2022-10-22 19:52:11 +08:00
Daniel Povey
1908123af9 Adding activation balancers after simple_am_prob and simple_lm_prob 2022-10-22 19:37:35 +08:00
Daniel Povey
11886dc4f6 Change base lr to 0.1; also rename it from initial lr in train.py 2022-10-22 18:22:26 +08:00
Daniel Povey
146626bb85 Renaming in optim.py; remove step() from scan_pessimistic_batches_for_oom in train.py 2022-10-22 17:44:21 +08:00
Daniel Povey
525e87a82d Add inf check hooks 2022-10-22 17:16:29 +08:00
Daniel Povey
e8066b5825 Merge branch 'scaled_adam_exp172' into scaled_adam_exp174 2022-10-22 15:44:04 +08:00
Daniel Povey
9919fb3e1b Increase grad_scale passed to the Whiten module 2022-10-22 15:32:50 +08:00
Daniel Povey
af0fc31c78 Introduce warmup schedule in optimizer 2022-10-22 15:15:43 +08:00
Daniel Povey
069125686e Fixes to logging statements. 2022-10-22 15:08:07 +08:00
Daniel Povey
1d4382555c Increase initial-lr from 0.06 to 0.075 and decrease lr-epochs from 3.5 to 3. 2022-10-22 15:04:08 +08:00
Daniel Povey
74d775014d Increase initial-lr from 0.05 to 0.06. 2022-10-22 15:02:07 +08:00
Daniel Povey
aa5f34af64 Cosmetic change 2022-10-22 15:00:15 +08:00
Daniel Povey
1ec9fe5c98 Make warmup period decrease scale on simple loss, leaving pruned loss scale constant. 2022-10-22 14:48:53 +08:00
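For context, the commit above changes how the two transducer losses are combined during warmup: the weight on the simple loss decays over the warmup period while the pruned-loss weight stays fixed. A hedged sketch of that combination (the names and constants here are illustrative assumptions, not the values used in train.py):

```python
def combine_losses(simple_loss, pruned_loss, batch_idx,
                   warmup_batches=3000.0,
                   simple_scale_start=0.5, simple_scale_end=0.1,
                   pruned_scale=1.0):
    # Ramp the simple-loss weight down over the warmup period;
    # the pruned-loss weight is left constant throughout.
    t = min(batch_idx / warmup_batches, 1.0)
    simple_scale = simple_scale_start + t * (simple_scale_end - simple_scale_start)
    return simple_scale * simple_loss + pruned_scale * pruned_loss
```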
Daniel Povey
efde3757c7 Reset optimizer state when we change loss function definition. 2022-10-22 14:30:18 +08:00
Daniel Povey
84580ec022 Configuration changes: scores limit 5->10, min_prob 0.05->0.1, cur_grad_scale more aggressive increase 2022-10-22 14:09:53 +08:00
Daniel Povey
9672dffac2 Merge branch 'scaled_adam_exp168' into scaled_adam_exp169 2022-10-22 14:05:07 +08:00
Daniel Povey
8d1021d131 Remove comparison diagnostics, which were not that useful. 2022-10-22 13:57:00 +08:00
Daniel Povey
2e93e5d3b7 Add logging 2022-10-22 13:52:51 +08:00
Daniel Povey
fd3f21f84d Changes to grad scale logging; increase grad scale more frequently if less than one. 2022-10-22 13:36:26 +08:00
Fangjun Kuang
348494888d Add kaldifst to requirements.txt (#631) 2022-10-22 13:14:44 +08:00
Daniel Povey
1d2fe8e3c2 Add more diagnostics to debug gradient scale problems 2022-10-22 12:49:29 +08:00
Daniel Povey
3298e18732 Increase limit on logit for SimpleCombiner to 25.0 2022-10-21 22:06:35 +08:00
Daniel Povey
e5fe3de17e Also apply limit on logit in SimpleCombiner 2022-10-21 21:43:45 +08:00
Daniel Povey
bdbd2cfce6 Penalize too large weights in softmax of AttentionDownsample() 2022-10-21 20:12:36 +08:00
ezerhouni
9b671e1c21 Add Shallow fusion in modified_beam_search (#630)
* Add utility for shallow fusion
* test batch size == 1 without shallow fusion
* Use shallow fusion for modified-beam-search
* Modified beam search with ngram rescoring
* Fix code according to review

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-10-21 16:44:56 +08:00
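The shallow-fusion PR above interpolates an external LM into beam search. The core scoring rule is standard: when extending a hypothesis, the LM log-probability for the proposed token is added to the ASR score with a tunable weight. A minimal sketch (the function name and lm_scale value are illustrative assumptions, not the PR's actual API):

```python
def shallow_fusion_score(asr_logprob: float, lm_logprob: float,
                         lm_scale: float = 0.3) -> float:
    # Shallow fusion: rank beam-search hypotheses by the ASR score
    # plus a scaled external-LM score for the proposed token.
    return asr_logprob + lm_scale * lm_logprob
```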
Daniel Povey
476fb9e9f3 Reduce min_prob of ActivationBalancer from 0.1 to 0.05. 2022-10-21 15:42:04 +08:00
Daniel Povey
9f68b5717c Reduce the limit on attention weights from 50 to 25. 2022-10-21 12:13:23 +08:00
Daniel Povey
c5cb52fed1 Remove the use of random_clamp in conformer.py. 2022-10-20 19:54:38 +08:00
Daniel Povey
dccff6b893 Remove use of RandomGrad 2022-10-20 19:35:11 +08:00
Daniel Povey
1018a77410 Use normal implementation of softmax. 2022-10-20 19:34:10 +08:00
Daniel Povey
6e6209419c Merge branch 'scaled_adam_exp150' into scaled_adam_exp155
# Conflicts:
#	egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py
2022-10-20 15:04:27 +08:00
Daniel Povey
4565d43d5c Add hard limit of attention weights to ±50 2022-10-20 14:28:22 +08:00
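A hard limit on pre-softmax attention scores, as in the commit above (later reduced from 50 to 25), can be imposed with a plain clamp; the repo may instead use the random_clamp utility mentioned in nearby commits, so this is only a sketch:

```python
import torch

def limit_attention_scores(scores: torch.Tensor,
                           limit: float = 50.0) -> torch.Tensor:
    # Clamp pre-softmax attention logits to [-limit, limit].  A plain
    # clamp gives zero gradient to out-of-range elements, which is one
    # motivation for softer penalties like penalize_abs_values_gt.
    return scores.clamp(min=-limit, max=limit)
```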
Daniel Povey
6601035db1 Reduce min_abs from 1.0e-04 to 5.0e-06 2022-10-20 13:53:10 +08:00
Daniel Povey
5a0914fdcf Merge branch 'scaled_adam_exp149' into scaled_adam_exp150 2022-10-20 13:31:22 +08:00
Daniel Povey
679ba2ee5e Remove debug print 2022-10-20 13:30:55 +08:00
Daniel Povey
610281eaa2 Keep just the RandomGrad changes, vs. 149. Git history may not reflect real changes. 2022-10-20 13:28:50 +08:00
Daniel Povey
d137118484 Get the randomized backprop for softmax in autocast mode working. 2022-10-20 13:23:48 +08:00
Daniel Povey
d75d646dc4 Merge branch 'scaled_adam_exp147' into scaled_adam_exp149 2022-10-20 12:59:50 +08:00
Daniel Povey
f6b8f0f631 Fix bug in backprop of random_clamp() 2022-10-20 12:49:29 +08:00
Daniel Povey
f08a869769 Merge branch 'scaled_adam_exp151' into scaled_adam_exp150 2022-10-19 19:59:07 +08:00
Daniel Povey
cc15552510 Use full precision to do softmax and store ans. 2022-10-19 19:53:53 +08:00
Daniel Povey
a4443efa95 Add RandomGrad with min_abs=1.0e-04 2022-10-19 19:46:17 +08:00
Daniel Povey
0ad4462632 Reduce min_abs from 1e-03 to 1e-04 2022-10-19 19:27:28 +08:00
Daniel Povey
ef5a27388f Merge branch 'scaled_adam_exp146' into scaled_adam_exp149 2022-10-19 19:16:27 +08:00
Daniel Povey
9c54906e63 Implement randomized backprop for softmax. 2022-10-19 19:16:03 +08:00
marcoyang1998
c30b8d3a1c fix number of parameters in RESULTS.md (#627) 2022-10-19 16:53:29 +08:00
Daniel Povey
d37c159174 Revert model.py so there are no constraints on the output. 2022-10-19 13:41:58 +08:00
Daniel Povey
45c38dec61 Remove in_balancer. 2022-10-19 12:35:17 +08:00