65 Commits

Author SHA1 Message Date
Daniel Povey
d1d4be8ecc Remove debug statement 2022-11-09 14:18:23 +08:00
Daniel Povey
423f9e3026 Increase query-head-dim from 24 to 32. 2022-11-09 13:28:29 +08:00
Daniel Povey
364a4c3838 Reduce pos_dim from 384 to 128. 2022-11-09 13:27:27 +08:00
Daniel Povey
cc260711b8 Make pos_dim the same as it was in scaled_adam_exp229, although this was probably too high. 2022-11-09 13:26:18 +08:00
Daniel Povey
20e6d2a157 Rework zipformer code for clarity and extensibility 2022-11-09 12:56:07 +08:00
Daniel Povey
8b0722e626 Rework how warmup count is produced; should not affect results. 2022-10-30 14:17:41 +08:00
Daniel Povey
ff03ec88a5 Tuning change to num encoder layers, inspired by relative param importance. 2022-10-29 15:56:02 +08:00
Daniel Povey
96ea4cf1be Have 6 different encoder stacks, U-shaped network. 2022-10-28 20:36:45 +08:00
Daniel Povey
7b57a34227 Have 4 encoder stacks 2022-10-28 19:32:14 +08:00
Daniel Povey
d7d5188bd9 Refactor how the downsampling is done so that it happens later, but the 1st encoder stack still operates after a subsampling of 2. 2022-10-28 19:20:21 +08:00
Daniel Povey
0a89f51dc9 Have a 3rd encoder, at downsampling factor of 8. 2022-10-28 17:42:00 +08:00
Daniel Povey
ed1b4d5e5d Refactor zipformer for more flexibility so we can change number of encoder layers. 2022-10-28 17:32:38 +08:00
Daniel Povey
5dfa141ca5 Rename Conformer to Zipformer 2022-10-27 22:43:46 +08:00
Daniel Povey
3f05e47447 Rename conformer.py to zipformer.py 2022-10-27 22:41:48 +08:00
Daniel Povey
78f3cba58c Add logging about memory used. 2022-10-25 19:19:33 +08:00
Daniel Povey
9da5526659 Changes to more accurately estimate OOM conditions 2022-10-25 12:49:18 +08:00
Daniel Povey
1e8984174b Change to warmup schedule. 2022-10-25 12:27:00 +08:00
Daniel Povey
5b9d166cb9 --base-lr 0.075->0.5; --lr-epochs 3->3.5 2022-10-23 13:56:25 +08:00
Daniel Povey
9e86d1f44f reduce initial scale in GradScaler 2022-10-23 00:14:38 +08:00
Daniel Povey
2964628ae1 don't do penalize_values_gt on simple_lm_proj and simple_am_proj; reduce --base-lr from 0.1 to 0.075 2022-10-22 21:12:58 +08:00
Daniel Povey
11886dc4f6 Change base lr to 0.1, also rename from initial lr in train.py 2022-10-22 18:22:26 +08:00
Daniel Povey
146626bb85 Renaming in optim.py; remove step() from scan_pessimistic_batches_for_oom in train.py 2022-10-22 17:44:21 +08:00
Daniel Povey
525e87a82d Add inf check hooks 2022-10-22 17:16:29 +08:00
Daniel Povey
e8066b5825 Merge branch 'scaled_adam_exp172' into scaled_adam_exp174 2022-10-22 15:44:04 +08:00
Daniel Povey
069125686e Fixes to logging statements. 2022-10-22 15:08:07 +08:00
Daniel Povey
1d4382555c Increase initial-lr from 0.06 to 0.075 and decrease lr-epochs from 3.5 to 3. 2022-10-22 15:04:08 +08:00
Daniel Povey
74d775014d Increase initial-lr from 0.05 to 0.06. 2022-10-22 15:02:07 +08:00
Daniel Povey
aa5f34af64 Cosmetic change 2022-10-22 15:00:15 +08:00
Daniel Povey
1ec9fe5c98 Make warmup period decrease scale on simple loss, leaving pruned loss scale constant. 2022-10-22 14:48:53 +08:00
Daniel Povey
efde3757c7 Reset optimizer state when we change loss function definition. 2022-10-22 14:30:18 +08:00
Daniel Povey
84580ec022 Configuration changes: scores limit 5->10, min_prob 0.05->0.1, cur_grad_scale more aggressive increase 2022-10-22 14:09:53 +08:00
Daniel Povey
2e93e5d3b7 Add logging 2022-10-22 13:52:51 +08:00
Daniel Povey
fd3f21f84d Changes to grad scale logging; increase grad scale more frequently if less than one. 2022-10-22 13:36:26 +08:00
Daniel Povey
1d2fe8e3c2 Add more diagnostics to debug gradient scale problems 2022-10-22 12:49:29 +08:00
Daniel Povey
b37564c9c9 Cosmetic changes 2022-10-18 12:49:14 +08:00
Daniel Povey
b988bc0e33 Increase initial-lr from 0.04 to 0.05, plus changes for diagnostics 2022-10-18 11:45:24 +08:00
Daniel Povey
3f495cd197 Reduce attention_dim to 192; cherry-pick scaled_adam_exp130 which is linear_pos interacting with query 2022-10-17 22:07:03 +08:00
Daniel Povey
03fe1ed200 Make attention dims configurable, not embed_dim//2, trying 256. 2022-10-17 11:03:29 +08:00
Daniel Povey
ae0067c384 Change LR schedule to start off higher 2022-10-16 11:45:33 +08:00
Daniel Povey
12323f2fbf Refactor RelPosMultiheadAttention to have 2nd forward function and introduce more modules in conformer encoder layer 2022-10-10 15:27:26 +08:00
Daniel Povey
314f2381e2 Don't compute validation if printing diagnostics. 2022-10-07 14:03:17 +08:00
Daniel Povey
a3179c30e7 Various fixes, finish implementing frame masking 2022-10-06 20:29:45 +08:00
Daniel Povey
e4c9786e4a Merge branch 'scaled_adam_exp27' into scaled_adam_exp69 (conflicts: egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py) 2022-10-06 18:04:48 +08:00
Daniel Povey
537c3537c0 Remove warmup 2022-10-06 12:33:43 +08:00
Daniel Povey
5fe8cb134f Remove final combination; implement layer drop that drops the final layers. 2022-10-04 22:19:44 +08:00
Daniel Povey
96e0d92fb7 Compute valid loss on batch 0. 2022-10-03 18:24:00 +08:00
Daniel Povey
d6ef1bec5f Change subsampling factor from 1 to 2 2022-09-28 21:10:13 +08:00
Daniel Povey
df795912ed Try to reproduce baseline but with current code with 2 encoder stacks, as a baseline 2022-09-28 20:56:40 +08:00
Daniel Povey
01af88c2f6 Various fixes 2022-09-27 16:09:30 +08:00
Daniel Povey
d34eafa623 Closer to working. 2022-09-27 15:47:58 +08:00