1816 Commits

Author  SHA1  Message  Date
Daniel Povey  ed1b4d5e5d  Refactor zipformer for more flexibility so we can change number of encoder layers.  2022-10-28 17:32:38 +08:00
Daniel Povey  e592a920b4  Merge branch 'scaled_adam_exp198b' into scaled_adam_exp202  2022-10-28 13:13:55 +08:00
Daniel Povey  a067fe8026  Fix clamping of epsilon  2022-10-28 12:50:14 +08:00
Daniel Povey  7b8a0108ea  Merge branch 'scaled_adam_exp188' into scaled_adam_exp198b  2022-10-28 12:49:36 +08:00
Daniel Povey  b9f6ba1aa2  Remove some unused variables.  2022-10-28 12:01:45 +08:00
Daniel Povey  c8abba75a9  Update decode.py by copying from pruned_transducer_stateless5 and changing directory name  2022-10-28 11:19:45 +08:00
Daniel Povey  5dfa141ca5  Rename Conformer to Zipformer  2022-10-27 22:43:46 +08:00
Daniel Povey  3f05e47447  Rename conformer.py to zipformer.py  2022-10-27 22:41:48 +08:00
Daniel Povey  be5c687fbd  Merging upstream/master  2022-10-27 21:04:48 +08:00
Daniel Povey  f8c531cd23  Increase bypass_scale min from 0.4 to 0.5  2022-10-27 14:59:05 +08:00
Daniel Povey  2c400115e4  Increase bypass_scale from 0.2 to 0.4.  2022-10-27 14:30:46 +08:00
Daniel Povey  a7fc6ae38c  Increase floor on bypass_scale from 0.1 to 0.2.  2022-10-27 14:09:34 +08:00
Daniel Povey  938510ac9f  Fix clamping of bypass scale; remove a couple unused variables.  2022-10-27 14:05:53 +08:00
Daniel Povey  bf37c7ca85  Regularize how we apply the min and max to the eps of BasicNorm  2022-10-26 12:51:20 +08:00
Daniel Povey  a0507a83a5  Change scalar_max in optim.py from 2.0 to 5.0  2022-10-25 22:58:07 +08:00
Daniel Povey  78f3cba58c  Add logging about memory used.  2022-10-25 19:19:33 +08:00
Daniel Povey  6a6df19bde  Hopefully make penalize_abs_values_gt more memory efficient.  2022-10-25 18:41:33 +08:00
Daniel Povey  dbfbd8016b  Cast to float16 in DoubleSwish forward  2022-10-25 13:16:00 +08:00
Daniel Povey  3159b09e8f  Make 20 the limit for warmup_count  2022-10-25 12:58:27 +08:00
Daniel Povey  6ebff23cb9  Reduce cutoff from 100 to 5 for estimating OOM with warmup  2022-10-25 12:53:12 +08:00
Daniel Povey  9da5526659  Changes to more accurately estimate OOM conditions  2022-10-25 12:49:18 +08:00
Daniel Povey  1e8984174b  Change to warmup schedule.  2022-10-25 12:27:00 +08:00
Daniel Povey  36cb279318  More memory efficient backprop for DoubleSwish.  2022-10-25 12:21:22 +08:00
Fangjun Kuang  499ac24ecb  Install kaldifst for GitHub actions (#632)  2022-10-24 15:07:29 +08:00
Daniel Povey  95aaa4a8d2  Store only half precision output for softmax.  2022-10-23 21:24:46 +08:00
Daniel Povey  d3876e32c4  Make it use float16 if in amp but use clamp to avoid wrapping error  2022-10-23 21:13:23 +08:00
Daniel Povey  85657946bb  Try a more exact way to round to uint8 that should prevent ever wrapping around to zero  2022-10-23 20:56:26 +08:00
Daniel Povey  d6aa386552  Fix randn to rand  2022-10-23 17:19:19 +08:00
Daniel Povey  e586cc319c  Change the discretization of the sigmoid to be expectation preserving.  2022-10-23 17:11:35 +08:00
Daniel Povey  09cbc9fdab  Save some memory in the autograd of DoubleSwish.  2022-10-23 16:59:43 +08:00
Daniel Povey  40588d3d8a  Revert 179->180 change, i.e. change max_abs for deriv_balancer2 back from 50.0 to 20.0  2022-10-23 16:18:58 +08:00
Daniel Povey  5b9d166cb9  --base-lr 0.075->0.5; --lr-epochs 3->3.5  2022-10-23 13:56:25 +08:00
Daniel Povey  0406d0b059  Increase max_abs in ActivationBalancer of conv module from 20 to 50  2022-10-23 13:51:51 +08:00
Daniel Povey  9e86d1f44f  reduce initial scale in GradScaler  2022-10-23 00:14:38 +08:00
Daniel Povey  b7083e7aff  Increase default max_factor for ActivationBalancer from 0.02 to 0.04; decrease max_abs in ConvolutionModule.deriv_balancer2 from 100.0 to 20.0  2022-10-23 00:09:21 +08:00
Daniel Povey  ad2d3c2b36  Don't print out full non-finite tensor  2022-10-22 23:03:19 +08:00
Daniel Povey  e0c1dc66da  Increase probs of activation balancer and make it decay slower.  2022-10-22 22:18:38 +08:00
Daniel Povey  2964628ae1  Don't do penalize_values_gt on simple_lm_proj and simple_am_proj; reduce --base-lr from 0.1 to 0.075  2022-10-22 21:12:58 +08:00
Daniel Povey  269b70122e  Add hooks.py, had neglected to git add it.  2022-10-22 20:58:52 +08:00
Daniel Povey  13ffd8e823  Trying to reduce grad_scale of Whiten() from 0.02 to 0.01.  2022-10-22 20:30:05 +08:00
Daniel Povey  466176eeff  Use penalize_abs_values_gt, not ActivationBalancer.  2022-10-22 20:18:15 +08:00
Daniel Povey  7a55cac346  Increase max_factor in final lm_balancer and am_balancer  2022-10-22 20:02:54 +08:00
Daniel Povey  8b3bba9b54  Reduce max_abs on am_balancer  2022-10-22 19:52:11 +08:00
Daniel Povey  1908123af9  Adding activation balancers after simple_am_proj and simple_lm_proj  2022-10-22 19:37:35 +08:00
Daniel Povey  11886dc4f6  Change base lr to 0.1, also rename from initial lr in train.py  2022-10-22 18:22:26 +08:00
Daniel Povey  146626bb85  Renaming in optim.py; remove step() from scan_pessimistic_batches_for_oom in train.py  2022-10-22 17:44:21 +08:00
Daniel Povey  525e87a82d  Add inf check hooks  2022-10-22 17:16:29 +08:00
Daniel Povey  e8066b5825  Merge branch 'scaled_adam_exp172' into scaled_adam_exp174  2022-10-22 15:44:04 +08:00
Daniel Povey  9919fb3e1b  Increase grad_scale to Whiten module  2022-10-22 15:32:50 +08:00
Daniel Povey  af0fc31c78  Introduce warmup schedule in optimizer  2022-10-22 15:15:43 +08:00