938 Commits

Author SHA1 Message Date
Daniel Povey  9919fb3e1b  Increase grad_scale to Whiten module  2022-10-22 15:32:50 +08:00
Daniel Povey  af0fc31c78  Introduce warmup schedule in optimizer  2022-10-22 15:15:43 +08:00
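The commit above introduces a warmup schedule in the optimizer. A minimal illustrative sketch of a linear learning-rate warmup is below; the function name, the 500-step length, and the 0.5 starting factor are assumptions for illustration, not the actual schedule in the repository's `optim.py` (the `base_lr` of 0.06 matches the neighbouring initial-lr commit):

```python
def warmup_factor(step: int, warmup_steps: int = 500) -> float:
    """Scale the base LR linearly up to 1.0 over the first
    `warmup_steps` optimizer steps (values here are illustrative)."""
    if step >= warmup_steps:
        return 1.0
    # Start at half the base LR rather than 0 so early updates still move.
    return 0.5 + 0.5 * step / warmup_steps

base_lr = 0.06  # initial-lr value from the neighbouring commit

def lr_at_step(step: int) -> float:
    return base_lr * warmup_factor(step)
```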
Daniel Povey  069125686e  Fixes to logging statements.  2022-10-22 15:08:07 +08:00
Daniel Povey  74d775014d  Increase initial-lr from 0.05 to 0.06.  2022-10-22 15:02:07 +08:00
Daniel Povey  aa5f34af64  Cosmetic change  2022-10-22 15:00:15 +08:00
Daniel Povey  1ec9fe5c98  Make warmup period decrease scale on simple loss, leaving pruned loss scale constant.  2022-10-22 14:48:53 +08:00
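Commit 1ec9fe5c98 above changes how the two RNN-T losses are weighted during warmup: the simple-loss scale decays while the pruned-loss scale stays constant. A hedged sketch of that shape is below; the function name and the 1.0/0.5 endpoint values are placeholders, not the values used in the repository's `train.py`:

```python
def loss_scales(warmup: float,
                initial_simple_scale: float = 1.0,
                final_simple_scale: float = 0.5,
                pruned_scale: float = 1.0):
    """Return (simple_scale, pruned_scale) for a warmup fraction in [0, 1].

    The simple-loss weight interpolates from its initial to its final
    value as warmup proceeds; the pruned-loss weight is constant.
    Endpoint values are illustrative only.
    """
    w = min(max(warmup, 0.0), 1.0)
    simple_scale = initial_simple_scale + w * (final_simple_scale - initial_simple_scale)
    return simple_scale, pruned_scale
```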
Daniel Povey  efde3757c7  Reset optimizer state when we change loss function definition.  2022-10-22 14:30:18 +08:00
Daniel Povey  84580ec022  Configuration changes: scores limit 5->10, min_prob 0.05->0.1, cur_grad_scale more aggressive increase  2022-10-22 14:09:53 +08:00
Daniel Povey  9672dffac2  Merge branch 'scaled_adam_exp168' into scaled_adam_exp169  2022-10-22 14:05:07 +08:00
Daniel Povey  2e93e5d3b7  Add logging  2022-10-22 13:52:51 +08:00
Daniel Povey  fd3f21f84d  Changes to grad scale logging; increase grad scale more frequently if less than one.  2022-10-22 13:36:26 +08:00
Daniel Povey  1d2fe8e3c2  Add more diagnostics to debug gradient scale problems  2022-10-22 12:49:29 +08:00
Daniel Povey  3298e18732  Increase limit on logit for SimpleCombiner to 25.0  2022-10-21 22:06:35 +08:00
Daniel Povey  e5fe3de17e  Also apply limit on logit in SimpleCombiner  2022-10-21 21:43:45 +08:00
Daniel Povey  bdbd2cfce6  Penalize too large weights in softmax of AttentionDownsample()  2022-10-21 20:12:36 +08:00
Daniel Povey  476fb9e9f3  Reduce min_prob of ActivationBalancer from 0.1 to 0.05.  2022-10-21 15:42:04 +08:00
Daniel Povey  9f68b5717c  Reduce the limit on attention weights from 50 to 25.  2022-10-21 12:13:23 +08:00
Daniel Povey  c5cb52fed1  Remove the use of random_clamp in conformer.py.  2022-10-20 19:54:38 +08:00
Daniel Povey  dccff6b893  Remove use of RandomGrad  2022-10-20 19:35:11 +08:00
Daniel Povey  1018a77410  Use normal implementation of softmax.  2022-10-20 19:34:10 +08:00
Daniel Povey  6e6209419c  Merge branch 'scaled_adam_exp150' into scaled_adam_exp155 (conflicts: egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py)  2022-10-20 15:04:27 +08:00
Daniel Povey  4565d43d5c  Add hard limit of attention weights to +- 50  2022-10-20 14:28:22 +08:00
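Commit 4565d43d5c above adds a hard limit of ±50 on attention weights (later reduced to 25 in 9f68b5717c). The idea is to clamp attention scores before the softmax so a few extreme logits cannot saturate it. A scalar sketch is below; the real change would operate on tensors (e.g. `torch.clamp`) inside `conformer.py`, and these function names are illustrative:

```python
import math

def clamp_score(score: float, limit: float = 50.0) -> float:
    """Hard-limit an attention score to [-limit, +limit]."""
    return max(-limit, min(limit, score))

def softmax(scores):
    """Numerically stable softmax over a list of floats."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Extreme logits are pulled back to +/-50 before normalization.
attn = softmax([clamp_score(s) for s in [-120.0, 3.0, 80.0]])
```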
Daniel Povey  6601035db1  Reduce min_abs from 1.0e-04 to 5.0e-06  2022-10-20 13:53:10 +08:00
Daniel Povey  5a0914fdcf  Merge branch 'scaled_adam_exp149' into scaled_adam_exp150  2022-10-20 13:31:22 +08:00
Daniel Povey  679ba2ee5e  Remove debug print  2022-10-20 13:30:55 +08:00
Daniel Povey  610281eaa2  Keep just the RandomGrad changes, vs. 149. Git history may not reflect real changes.  2022-10-20 13:28:50 +08:00
Daniel Povey  d137118484  Get the randomized backprop for softmax in autocast mode working.  2022-10-20 13:23:48 +08:00
Daniel Povey  d75d646dc4  Merge branch 'scaled_adam_exp147' into scaled_adam_exp149  2022-10-20 12:59:50 +08:00
Daniel Povey  f6b8f0f631  Fix bug in backprop of random_clamp()  2022-10-20 12:49:29 +08:00
Daniel Povey  f08a869769  Merge branch 'scaled_adam_exp151' into scaled_adam_exp150  2022-10-19 19:59:07 +08:00
Daniel Povey  cc15552510  Use full precision to do softmax and store ans.  2022-10-19 19:53:53 +08:00
Daniel Povey  a4443efa95  Add RandomGrad with min_abs=1.0e-04  2022-10-19 19:46:17 +08:00
Daniel Povey  0ad4462632  Reduce min_abs from 1e-03 to 1e-04  2022-10-19 19:27:28 +08:00
Daniel Povey  ef5a27388f  Merge branch 'scaled_adam_exp146' into scaled_adam_exp149  2022-10-19 19:16:27 +08:00
Daniel Povey  9c54906e63  Implement randomized backprop for softmax.  2022-10-19 19:16:03 +08:00
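Commit 9c54906e63 above and the nearby RandomGrad commits (a4443efa95, with `min_abs` tuned in 0ad4462632 and 6601035db1) revolve around randomizing small gradient values so they survive reduced precision. A hedged scalar sketch of the stochastic-rounding idea is below: gradients smaller than `min_abs` are randomly rounded to 0 or ±`min_abs` so they are unbiased in expectation rather than underflowing. The function name is illustrative; the repository's actual implementation is presumably tensor-based autograd code, which this does not reproduce:

```python
import random

def random_round_grad(g: float, min_abs: float = 1.0e-4,
                      rng: random.Random = random.Random(0)) -> float:
    """Stochastically round a gradient value smaller than `min_abs`
    to 0 or sign(g) * min_abs, keeping E[result] == g."""
    if abs(g) >= min_abs:
        return g
    # Round up to sign(g) * min_abs with probability |g| / min_abs.
    if rng.random() < abs(g) / min_abs:
        return min_abs if g >= 0 else -min_abs
    return 0.0
```

Because the rounding is unbiased, averaging many rounded gradients recovers the true small value, which is the property that makes this safe to use during backprop.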
Daniel Povey  d37c159174  Revert model.py so there are no constraints on the output.  2022-10-19 13:41:58 +08:00
Daniel Povey  45c38dec61  Remove in_balancer.  2022-10-19 12:35:17 +08:00
Daniel Povey  f4442de1c4  Add reflect=0.1 to invocations of random_clamp()  2022-10-19 12:34:26 +08:00
Daniel Povey  8e15d4312a  Add some random clamping in model.py  2022-10-19 12:19:13 +08:00
Daniel Povey  c3c655d0bd  Random clip attention scores to -5..5.  2022-10-19 11:59:24 +08:00
Daniel Povey  6b3f9e5036  Changes to avoid bug in backward hooks, affecting diagnostics.  2022-10-19 11:06:17 +08:00
Daniel Povey  b37564c9c9  Cosmetic changes  2022-10-18 12:49:14 +08:00
Daniel Povey  b988bc0e33  Increase initial-lr from 0.04 to 0.05, plus changes for diagnostics  2022-10-18 11:45:24 +08:00
Daniel Povey  2675944f01  Use half the dim for values, vs. keys and queries.  2022-10-17 22:15:06 +08:00
Daniel Povey  3f495cd197  Reduce attention_dim to 192; cherry-pick scaled_adam_exp130 which is linear_pos interacting with query  2022-10-17 22:07:03 +08:00
Daniel Povey  03fe1ed200  Make attention dims configurable, not embed_dim//2, trying 256.  2022-10-17 11:03:29 +08:00
Daniel Povey  325f5539f9  Simplify the dropout mask, no non-dropped-out sequences  2022-10-16 19:14:24 +08:00
Daniel Povey  ae0067c384  Change LR schedule to start off higher  2022-10-16 11:45:33 +08:00
Daniel Povey  29d4e8ec6d  Replace MaxEig with Whiten with limit=5.0, and move it to end of ConformerEncoderLayer  2022-10-16 11:36:12 +08:00
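Commit 29d4e8ec6d above replaces MaxEig with a Whiten module with `limit=5.0`: the constraint fires when the activation covariance has one dominant eigenvalue, i.e. the features are far from "white". A toy sketch of one plausible whiteness measure is below, shown for a diagonal covariance where the eigenvalues are just the per-dimension variances; the function name and the max/mean ratio are illustrative and not necessarily the exact metric in the repository:

```python
def whitening_metric(variances) -> float:
    """Ratio of the largest eigenvalue to the mean eigenvalue of a
    diagonal covariance (eigenvalues == per-dim variances here).
    1.0 means perfectly white; large values mean one dominant direction."""
    mean = sum(variances) / len(variances)
    return max(variances) / mean

limit = 5.0  # value from the commit message
# Only penalize when the metric exceeds the limit.
needs_penalty = whitening_metric([1.0, 1.0, 12.0, 2.0]) > limit
```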
Daniel Povey  ef4650bc8e  Revert whitening_limit from 1.1 to 2.2.  2022-10-16 11:31:08 +08:00