30 Commits

Author SHA1 Message Date
Daniel Povey
b988bc0e33 Increase initial-lr from 0.04 to 0.05, plus changes for diagnostics 2022-10-18 11:45:24 +08:00
Daniel Povey
3f495cd197 Reduce attention_dim to 192; cherry-pick scaled_adam_exp130, which has linear_pos interacting with the query 2022-10-17 22:07:03 +08:00
Daniel Povey
03fe1ed200 Make attention dims configurable rather than fixed at embed_dim//2; trying 256. 2022-10-17 11:03:29 +08:00
Daniel Povey
ae0067c384 Change LR schedule to start off higher 2022-10-16 11:45:33 +08:00
Daniel Povey
12323f2fbf Refactor RelPosMultiheadAttention to have a 2nd forward function and introduce more modules in the conformer encoder layer 2022-10-10 15:27:26 +08:00
Daniel Povey
314f2381e2 Don't compute validation if printing diagnostics. 2022-10-07 14:03:17 +08:00
Daniel Povey
a3179c30e7 Various fixes; finish implementing frame masking 2022-10-06 20:29:45 +08:00
Daniel Povey
e4c9786e4a Merge branch 'scaled_adam_exp27' into scaled_adam_exp69 (conflicts in egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py) 2022-10-06 18:04:48 +08:00
Daniel Povey
537c3537c0 Remove warmup 2022-10-06 12:33:43 +08:00
Daniel Povey
5fe8cb134f Remove final combination; implement layer drop that drops the final layers. 2022-10-04 22:19:44 +08:00
Daniel Povey
96e0d92fb7 Compute valid loss on batch 0. 2022-10-03 18:24:00 +08:00
Daniel Povey
d6ef1bec5f Change subsampling factor from 1 to 2 2022-09-28 21:10:13 +08:00
Daniel Povey
df795912ed Try to reproduce the baseline with the current code and 2 encoder stacks 2022-09-28 20:56:40 +08:00
Daniel Povey
01af88c2f6 Various fixes 2022-09-27 16:09:30 +08:00
Daniel Povey
d34eafa623 Closer to working... 2022-09-27 15:47:58 +08:00
Daniel Povey
5f55f80fbb Configure train.py with clipping_scale=2.0 2022-09-16 17:19:52 +08:00
Daniel Povey
3b450c2682 Bug fix in train.py: fix optimizer name 2022-09-16 14:10:42 +08:00
Daniel Povey
633cbd551a Increase lr_update_period from (200,4000) to (400,5000) 2022-07-28 14:45:45 +08:00
Daniel Povey
b0f0c6c4ab Set lr_update_period=(200,4000) in train.py 2022-07-25 04:38:12 +08:00
Daniel Povey
25cb8308d5 Add max_block_size=512 to PrAdam 2022-07-12 08:35:14 +08:00
Daniel Povey
50ee414486 Fix train.py for new optimizer 2022-07-09 10:09:53 +08:00
Daniel Povey
e996f7f371 First version of the speedup code that I am running 2022-06-18 15:17:51 +08:00
Daniel Povey
ea5cd69e3b Possibly fix bug regarding learning rate 2022-06-17 20:50:00 +08:00
Daniel Povey
c1f487e36d Move optim2.py to optim.py; use this optimizer in train.py 2022-06-13 16:05:46 +08:00
Daniel Povey
a9a172aa69 Multiply lr by 10; simplify Cain. 2022-06-04 15:48:33 +08:00
Daniel Povey
ca09b9798f Remove decomposition code from checkpoint.py; restore double precision model_avg 2022-06-01 14:01:58 +08:00
Daniel Povey
b2259184b5 Use single precision for model average; increase average-period to 200. 2022-05-31 14:31:46 +08:00
Daniel Povey
ab9eb0d52c Use decompose=True arg for model averaging 2022-05-31 14:28:53 +08:00
Daniel Povey
1651fe0d42 Merge changes from pruned_transducer_stateless4->5 2022-05-31 13:00:11 +08:00
Daniel Povey
741dcd1d6d Move pruned_transducer_stateless4 to pruned_transducer_stateless7 2022-05-31 12:45:28 +08:00
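
For reference, several of the commits above adjust training hyperparameters for this recipe (initial-lr, clipping_scale, average-period, attention_dim). The snippet below is a hypothetical sketch, not the recipe's actual train.py: it only illustrates how the values these commits settle on could be written down as argparse defaults, and the real flag names and defaults in pruned_transducer_stateless7 may differ.

```python
# Hypothetical sketch only -- NOT the actual pruned_transducer_stateless7/train.py.
# It collects the hyperparameter values mentioned in the commit messages above
# as argparse defaults; real flag names and defaults in the recipe may differ.
import argparse


def get_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Illustrative hyperparameter defaults")
    parser.add_argument("--initial-lr", type=float, default=0.05,
                        help="Initial learning rate (raised from 0.04 in b988bc0e33)")
    parser.add_argument("--clipping-scale", type=float, default=2.0,
                        help="Gradient clipping scale for the optimizer (5f55f80fbb)")
    parser.add_argument("--average-period", type=int, default=200,
                        help="Batches between model-averaging updates (b2259184b5)")
    parser.add_argument("--attention-dim", type=int, default=192,
                        help="Attention dimension in the conformer encoder (3f495cd197)")
    return parser


if __name__ == "__main__":
    # Parse an empty argument list to show the defaults; normally sys.argv is used.
    args = get_parser().parse_args([])
    print(vars(args))
```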