Daniel Povey | 5dfa141ca5 | Rename Conformer to Zipformer | 2022-10-27 22:43:46 +08:00
Daniel Povey | 3f05e47447 | Rename conformer.py to zipformer.py | 2022-10-27 22:41:48 +08:00
Daniel Povey | 78f3cba58c | Add logging about memory used. | 2022-10-25 19:19:33 +08:00
Daniel Povey | 9da5526659 | Changes to more accurately estimate OOM conditions | 2022-10-25 12:49:18 +08:00
Daniel Povey | 1e8984174b | Change to warmup schedule. | 2022-10-25 12:27:00 +08:00
Daniel Povey | 5b9d166cb9 | --base-lr 0.075->0.5; --lr-epochs 3->3.5 | 2022-10-23 13:56:25 +08:00
Daniel Povey | 9e86d1f44f | Reduce initial scale in GradScaler | 2022-10-23 00:14:38 +08:00
Daniel Povey | 2964628ae1 | Don't do penalize_values_gt on simple_lm_proj and simple_am_proj; reduce --base-lr from 0.1 to 0.075 | 2022-10-22 21:12:58 +08:00
Daniel Povey | 11886dc4f6 | Change base lr to 0.1; also rename from initial lr in train.py | 2022-10-22 18:22:26 +08:00
Daniel Povey | 146626bb85 | Renaming in optim.py; remove step() from scan_pessimistic_batches_for_oom in train.py | 2022-10-22 17:44:21 +08:00
Daniel Povey | 525e87a82d | Add inf check hooks | 2022-10-22 17:16:29 +08:00
Daniel Povey | e8066b5825 | Merge branch 'scaled_adam_exp172' into scaled_adam_exp174 | 2022-10-22 15:44:04 +08:00
Daniel Povey | 069125686e | Fixes to logging statements. | 2022-10-22 15:08:07 +08:00
Daniel Povey | 1d4382555c | Increase initial-lr from 0.06 to 0.075 and decrease lr-epochs from 3.5 to 3. | 2022-10-22 15:04:08 +08:00
Daniel Povey | 74d775014d | Increase initial-lr from 0.05 to 0.06. | 2022-10-22 15:02:07 +08:00
Daniel Povey | aa5f34af64 | Cosmetic change | 2022-10-22 15:00:15 +08:00
Daniel Povey | 1ec9fe5c98 | Make warmup period decrease scale on simple loss, leaving pruned loss scale constant. | 2022-10-22 14:48:53 +08:00
Daniel Povey | efde3757c7 | Reset optimizer state when we change loss function definition. | 2022-10-22 14:30:18 +08:00
Daniel Povey | 84580ec022 | Configuration changes: scores limit 5->10, min_prob 0.05->0.1, cur_grad_scale more aggressive increase | 2022-10-22 14:09:53 +08:00
Daniel Povey | 2e93e5d3b7 | Add logging | 2022-10-22 13:52:51 +08:00
Daniel Povey | fd3f21f84d | Changes to grad scale logging; increase grad scale more frequently if less than one. | 2022-10-22 13:36:26 +08:00
Daniel Povey | 1d2fe8e3c2 | Add more diagnostics to debug gradient scale problems | 2022-10-22 12:49:29 +08:00
Daniel Povey | b37564c9c9 | Cosmetic changes | 2022-10-18 12:49:14 +08:00
Daniel Povey | b988bc0e33 | Increase initial-lr from 0.04 to 0.05, plus changes for diagnostics | 2022-10-18 11:45:24 +08:00
Daniel Povey | 3f495cd197 | Reduce attention_dim to 192; cherry-pick scaled_adam_exp130, which is linear_pos interacting with query | 2022-10-17 22:07:03 +08:00
Daniel Povey | 03fe1ed200 | Make attention dims configurable, not embed_dim//2; trying 256. | 2022-10-17 11:03:29 +08:00
Daniel Povey | ae0067c384 | Change LR schedule to start off higher | 2022-10-16 11:45:33 +08:00
Daniel Povey | 12323f2fbf | Refactor RelPosMultiheadAttention to have a 2nd forward function and introduce more modules in conformer encoder layer | 2022-10-10 15:27:26 +08:00
Daniel Povey | 314f2381e2 | Don't compute validation if printing diagnostics. | 2022-10-07 14:03:17 +08:00
Daniel Povey | a3179c30e7 | Various fixes; finish implementing frame masking | 2022-10-06 20:29:45 +08:00
Daniel Povey | e4c9786e4a | Merge branch 'scaled_adam_exp27' into scaled_adam_exp69 (conflicts: egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py) | 2022-10-06 18:04:48 +08:00
Daniel Povey | 537c3537c0 | Remove warmup | 2022-10-06 12:33:43 +08:00
Daniel Povey | 5fe8cb134f | Remove final combination; implement layer drop that drops the final layers. | 2022-10-04 22:19:44 +08:00
Daniel Povey | 96e0d92fb7 | Compute valid loss on batch 0. | 2022-10-03 18:24:00 +08:00
Daniel Povey | d6ef1bec5f | Change subsampling factor from 1 to 2 | 2022-09-28 21:10:13 +08:00
Daniel Povey | df795912ed | Try to reproduce the baseline, but with current code and 2 encoder stacks | 2022-09-28 20:56:40 +08:00
Daniel Povey | 01af88c2f6 | Various fixes | 2022-09-27 16:09:30 +08:00
Daniel Povey | d34eafa623 | Closer to working.. | 2022-09-27 15:47:58 +08:00
Daniel Povey | 5f55f80fbb | Configure train.py with clipping_scale=2.0 | 2022-09-16 17:19:52 +08:00
Daniel Povey | 3b450c2682 | Bug fix in train.py: fix optimizer name | 2022-09-16 14:10:42 +08:00
Daniel Povey | 633cbd551a | Increase lr_update_period from (200, 4000) to (400, 5000) | 2022-07-28 14:45:45 +08:00
Daniel Povey | b0f0c6c4ab | Set lr_update_period=(200, 4000) in train.py | 2022-07-25 04:38:12 +08:00
Daniel Povey | 25cb8308d5 | Add max_block_size=512 to PrAdam | 2022-07-12 08:35:14 +08:00
Daniel Povey | 50ee414486 | Fix train.py for new optimizer | 2022-07-09 10:09:53 +08:00
Daniel Povey | e996f7f371 | First version of the speedup code that I am running | 2022-06-18 15:17:51 +08:00
Daniel Povey | ea5cd69e3b | Possibly fix bug re: learning rate | 2022-06-17 20:50:00 +08:00
Daniel Povey | c1f487e36d | Move optim2.py to optim.py; use this optimizer in train.py | 2022-06-13 16:05:46 +08:00
Daniel Povey | a9a172aa69 | Multiply lr by 10; simplify Cain. | 2022-06-04 15:48:33 +08:00
Daniel Povey | ca09b9798f | Remove decomposition code from checkpoint.py; restore double-precision model_avg | 2022-06-01 14:01:58 +08:00
Daniel Povey | b2259184b5 | Use single precision for model average; increase average-period to 200. | 2022-05-31 14:31:46 +08:00