Daniel Povey
95aaa4a8d2
Store only half precision output for softmax.
2022-10-23 21:24:46 +08:00
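A minimal sketch of the idea in this commit, assuming a custom autograd function (the class name and dim handling are illustrative, not icefall's actual code): the softmax output is saved for backward in float16 only, roughly halving its activation memory, while the value returned to the rest of the network stays in the working dtype.

```python
import torch

class SoftmaxHalfMemory(torch.autograd.Function):
    """Softmax that stores only a float16 copy of its output for backward."""

    @staticmethod
    def forward(ctx, x: torch.Tensor, dim: int) -> torch.Tensor:
        y = x.softmax(dim=dim)
        ctx.dim = dim
        # Keep only a half-precision copy; this is what saves the memory.
        ctx.save_for_backward(y.to(torch.float16))
        return y

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        (y_half,) = ctx.saved_tensors
        y = y_half.to(grad_output.dtype)
        # Standard softmax backward: y * (g - sum(g * y) along `dim`).
        grad_x = y * (grad_output - (grad_output * y).sum(dim=ctx.dim, keepdim=True))
        return grad_x, None
```

`SoftmaxHalfMemory.apply(x, -1)` would stand in for `x.softmax(-1)` wherever the small backward-pass error from float16 rounding is acceptable.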
Daniel Povey
d3876e32c4
Make it use float16 if in AMP, but use clamp to avoid wrapping error
2022-10-23 21:13:23 +08:00
Daniel Povey
85657946bb
Try a more exact way to round to uint8 that should prevent ever wrapping around to zero
2022-10-23 20:56:26 +08:00
Daniel Povey
d6aa386552
Fix randn to rand
2022-10-23 17:19:19 +08:00
Daniel Povey
e586cc319c
Change the discretization of the sigmoid to be expectation preserving.
2022-10-23 17:11:35 +08:00
Daniel Povey
09cbc9fdab
Save some memory in the autograd of DoubleSwish.
2022-10-23 16:59:43 +08:00
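The five commits above all touch the same mechanism. A minimal sketch of the combined idea, assuming DoubleSwish is y = x * sigmoid(x - 1): the backward pass needs only the derivative, so it is stored quantized to uint8 (a quarter of float32 memory); adding uniform noise before the floor makes the quantization expectation-preserving (hence rand, not randn), and a clamp stops a value of 256 from wrapping around to 0. The range bounds D_MIN/D_MAX are illustrative assumptions, not icefall's exact constants.

```python
import torch

D_MIN, D_MAX = -0.05, 1.25  # assumed bounds on d(double_swish)/dx

class DoubleSwishSketch(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x: torch.Tensor) -> torch.Tensor:
        s = torch.sigmoid(x - 1.0)
        y = x * s
        deriv = y * (1.0 - s) + s  # equals s * (1 + x * (1 - s))
        # Scale the derivative into [0, 255], add U(0, 1) noise, then floor
        # (the uint8 cast): correct in expectation.  The clamp prevents a
        # rounded 256 from wrapping around to 0.
        d_scaled = (deriv - D_MIN) * (255.0 / (D_MAX - D_MIN)) + torch.rand_like(deriv)
        ctx.save_for_backward(d_scaled.clamp(min=0.0, max=255.0).to(torch.uint8))
        return y

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        (d_u8,) = ctx.saved_tensors
        deriv = d_u8.to(grad_output.dtype) * ((D_MAX - D_MIN) / 255.0) + D_MIN
        return grad_output * deriv
```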
Daniel Povey
40588d3d8a
Revert 179->180 change, i.e. change max_abs for deriv_balancer2 back from 50.0 to 20.0
2022-10-23 16:18:58 +08:00
Daniel Povey
5b9d166cb9
--base-lr 0.075->0.5; --lr-epochs 3->3.5
2022-10-23 13:56:25 +08:00
Daniel Povey
0406d0b059
Increase max_abs in ActivationBalancer of conv module from 20 to 50
2022-10-23 13:51:51 +08:00
Daniel Povey
9e86d1f44f
reduce initial scale in GradScaler
2022-10-23 00:14:38 +08:00
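For reference, the initial scale is a constructor argument of PyTorch's GradScaler; the value below is illustrative, not necessarily the one this commit chose.

```python
import torch

# Default init_scale is 2**16; starting smaller avoids a burst of skipped
# optimizer steps while the scaler backs off early in training.
# Typical AMP step afterwards:
#   scaler.scale(loss).backward(); scaler.step(optimizer); scaler.update()
scaler = torch.cuda.amp.GradScaler(init_scale=2.0 ** 10)
```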
Daniel Povey
b7083e7aff
Increase default max_factor for ActivationBalancer from 0.02 to 0.04; decrease max_abs in ConvolutionModule.deriv_balancer2 from 100.0 to 20.0
2022-10-23 00:09:21 +08:00
Daniel Povey
ad2d3c2b36
Don't print out full non-finite tensor
2022-10-22 23:03:19 +08:00
Daniel Povey
e0c1dc66da
Increase probs of activation balancer and make it decay slower.
2022-10-22 22:18:38 +08:00
Daniel Povey
2964628ae1
don't do penalize_abs_values_gt on simple_lm_proj and simple_am_proj; reduce --base-lr from 0.1 to 0.075
2022-10-22 21:12:58 +08:00
Daniel Povey
269b70122e
Add hooks.py, had neglected to git add it.
2022-10-22 20:58:52 +08:00
Daniel Povey
13ffd8e823
Trying to reduce grad_scale of Whiten() from 0.02 to 0.01.
2022-10-22 20:30:05 +08:00
Daniel Povey
466176eeff
Use penalize_abs_values_gt, not ActivationBalancer.
2022-10-22 20:18:15 +08:00
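A minimal sketch of the distinction, assuming penalize_abs_values_gt behaves roughly like the class below (the name and exact penalty form are illustrative): unlike ActivationBalancer, which nudges per-channel activation statistics, this is a plain identity in forward whose backward adds a fixed-size gradient pushing any value with |x| > limit back toward the allowed range.

```python
import torch

class PenalizeAbsGt(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x: torch.Tensor, limit: float, penalty: float) -> torch.Tensor:
        # Identity in forward; remember where, and in which direction,
        # the limit was exceeded.
        ctx.save_for_backward((x.abs() > limit).to(x.dtype) * x.sign())
        ctx.penalty = penalty
        return x

    @staticmethod
    def backward(ctx, grad_output: torch.Tensor):
        (direction,) = ctx.saved_tensors
        # Adding penalty * sign(x) where |x| > limit makes the optimizer
        # shrink those values, since it steps against the gradient.
        return grad_output + ctx.penalty * direction, None, None
```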
Daniel Povey
7a55cac346
Increase max_factor in final lm_balancer and am_balancer
2022-10-22 20:02:54 +08:00
Daniel Povey
8b3bba9b54
Reduce max_abs on am_balancer
2022-10-22 19:52:11 +08:00
Daniel Povey
1908123af9
Adding activation balancers after simple_am_proj and simple_lm_proj
2022-10-22 19:37:35 +08:00
Daniel Povey
11886dc4f6
Change base lr to 0.1; also rename initial-lr to base-lr in train.py
2022-10-22 18:22:26 +08:00
Daniel Povey
146626bb85
Renaming in optim.py; remove step() from scan_pessimistic_batches_for_oom in train.py
2022-10-22 17:44:21 +08:00
Daniel Povey
525e87a82d
Add inf check hooks
2022-10-22 17:16:29 +08:00
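A minimal sketch of what such hooks can look like (icefall's hooks.py is more elaborate and also checks the backward pass; the function name here reflects the spirit, not necessarily the file): a forward hook on every submodule that logs when an output stops being finite, localizing where inf/nan first appears.

```python
import logging
import torch

def register_inf_check_hooks(model: torch.nn.Module) -> None:
    for name, module in model.named_modules():
        def check(_mod, _inputs, output, _name=name):
            # Only plain tensor outputs are checked in this sketch.
            if isinstance(output, torch.Tensor):
                if not torch.isfinite(output.to(torch.float32).sum()):
                    logging.warning(f"Module {_name} produced a non-finite output")
        module.register_forward_hook(check)
```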
Daniel Povey
e8066b5825
Merge branch 'scaled_adam_exp172' into scaled_adam_exp174
2022-10-22 15:44:04 +08:00
Daniel Povey
9919fb3e1b
Increase grad_scale of Whiten module
2022-10-22 15:32:50 +08:00
Daniel Povey
af0fc31c78
Introduce warmup schedule in optimizer
2022-10-22 15:15:43 +08:00
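A minimal sketch of a warmup factor folded into the learning-rate schedule (the shape and constants are illustrative, not icefall's actual scheduler): the rate ramps from half its scheduled value to the full value over the first warmup_batches steps.

```python
def warmup_factor(step: int, warmup_batches: float = 500.0) -> float:
    # Linear ramp from 0.5 to 1.0, then flat.
    return min(1.0, 0.5 + 0.5 * step / warmup_batches)

def lr_at(step: int, scheduled_lr: float) -> float:
    return scheduled_lr * warmup_factor(step)
```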
Daniel Povey
069125686e
Fixes to logging statements.
2022-10-22 15:08:07 +08:00
Daniel Povey
1d4382555c
Increase initial-lr from 0.06 to 0.075 and decrease lr-epochs from 3.5 to 3.
2022-10-22 15:04:08 +08:00
Daniel Povey
74d775014d
Increase initial-lr from 0.05 to 0.06.
2022-10-22 15:02:07 +08:00
Daniel Povey
aa5f34af64
Cosmetic change
2022-10-22 15:00:15 +08:00
Daniel Povey
1ec9fe5c98
Make warmup period decrease scale on simple loss, leaving pruned loss scale constant.
2022-10-22 14:48:53 +08:00
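A minimal sketch of the weighting this commit describes, with illustrative constants: the weight on the simple (linear-combiner) RNN-T loss decays over the warmup period while the pruned-loss weight stays at 1.0.

```python
import torch

def combined_loss(simple_loss: torch.Tensor, pruned_loss: torch.Tensor,
                  batch_idx: int, warmup_batches: int = 3000) -> torch.Tensor:
    # Simple-loss weight ramps down from 1.0 to 0.5 (illustrative values);
    # the pruned-loss weight is constant.
    frac = min(1.0, batch_idx / warmup_batches)
    simple_scale = 1.0 - 0.5 * frac
    return simple_scale * simple_loss + pruned_loss
```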
Daniel Povey
efde3757c7
Reset optimizer state when we change loss function definition.
2022-10-22 14:30:18 +08:00
Daniel Povey
84580ec022
Configuration changes: scores limit 5->10, min_prob 0.05->0.1, cur_grad_scale more aggressive increase
2022-10-22 14:09:53 +08:00
Daniel Povey
9672dffac2
Merge branch 'scaled_adam_exp168' into scaled_adam_exp169
2022-10-22 14:05:07 +08:00
Daniel Povey
8d1021d131
Remove comparison diagnostics, which were not that useful.
2022-10-22 13:57:00 +08:00
Daniel Povey
2e93e5d3b7
Add logging
2022-10-22 13:52:51 +08:00
Daniel Povey
fd3f21f84d
Changes to grad scale logging; increase grad scale more frequently if less than one.
2022-10-22 13:36:26 +08:00
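A minimal sketch of the growth policy in the second half of this commit, with illustrative intervals: while the current grad scale sits below 1.0 (a sign of recent overflow back-offs), it is doubled on a much shorter interval so training recovers its dynamic range quickly.

```python
def maybe_grow_grad_scale(cur_grad_scale: float, batch_idx: int) -> float:
    # Illustrative intervals: grow every 400 batches normally, but every
    # 100 batches while the scale is still below 1.0.
    interval = 100 if cur_grad_scale < 1.0 else 400
    if batch_idx > 0 and batch_idx % interval == 0:
        cur_grad_scale *= 2.0
    return cur_grad_scale
```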
Fangjun Kuang
348494888d
Add kaldifst to requirements.txt (#631)
2022-10-22 13:14:44 +08:00
Daniel Povey
1d2fe8e3c2
Add more diagnostics to debug gradient scale problems
2022-10-22 12:49:29 +08:00
Daniel Povey
3298e18732
Increase limit on logit for SimpleCombiner to 25.0
2022-10-21 22:06:35 +08:00
Daniel Povey
e5fe3de17e
Also apply limit on logit in SimpleCombiner
2022-10-21 21:43:45 +08:00
Daniel Povey
bdbd2cfce6
Penalize too large weights in softmax of AttentionDownsample()
2022-10-21 20:12:36 +08:00
ezerhouni
9b671e1c21
Add shallow fusion in modified_beam_search (#630)
* Add utility for shallow fusion
* test batch size == 1 without shallow fusion
* Use shallow fusion for modified-beam-search
* Modified beam search with ngram rescoring
* Fix code according to review
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-10-21 16:44:56 +08:00
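A minimal sketch of the scoring rule behind shallow fusion (the names and the lm_scale value are illustrative): at each beam-search extension, a candidate token's acoustic log-probability is interpolated with an external LM's log-probability before hypotheses are ranked.

```python
def shallow_fusion_score(am_logprob: float, lm_logprob: float,
                         lm_scale: float = 0.3) -> float:
    # Rank beam-search hypotheses by acoustic score plus a scaled LM score.
    return am_logprob + lm_scale * lm_logprob
```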
Daniel Povey
476fb9e9f3
Reduce min_prob of ActivationBalancer from 0.1 to 0.05.
2022-10-21 15:42:04 +08:00
Daniel Povey
9f68b5717c
Reduce the limit on attention weights from 50 to 25.
2022-10-21 12:13:23 +08:00
Daniel Povey
c5cb52fed1
Remove the use of random_clamp in conformer.py.
2022-10-20 19:54:38 +08:00
Daniel Povey
dccff6b893
Remove use of RandomGrad
2022-10-20 19:35:11 +08:00
Daniel Povey
1018a77410
Use normal implementation of softmax.
2022-10-20 19:34:10 +08:00
Daniel Povey
6e6209419c
Merge branch 'scaled_adam_exp150' into scaled_adam_exp155
# Conflicts:
# egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py
2022-10-20 15:04:27 +08:00
Daniel Povey
4565d43d5c
Add hard limit of +-50 on attention weights
2022-10-20 14:28:22 +08:00
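A minimal sketch of such a hard limit (the helper name is illustrative): the pre-softmax attention scores are clamped to [-50, 50] so a few extreme logits cannot saturate the softmax. Note that clamp() zeroes the gradient outside the range; the commits above later soften this with penalize_abs_values_gt and reduce the limit to 25.

```python
import torch

def limit_attn_scores(scores: torch.Tensor, limit: float = 50.0) -> torch.Tensor:
    # Hard-clamp pre-softmax attention scores to [-limit, limit].
    return scores.clamp(min=-limit, max=limit)
```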