Daniel Povey
|
a52ec3da28
|
Change feedforward dims: increase 1536->1792 for largest ff dim and move it one step later.
|
2022-11-20 14:24:41 +08:00 |
|
Daniel Povey
|
8a095c1cd1
|
Add SmallConvModule; decrease feedforward dims to keep about same num params.
|
2022-11-18 12:46:40 +08:00 |
|
Daniel Povey
|
e9806950f5
|
Reduce pos-dim from 96 to 48.
|
2022-11-17 23:42:39 +08:00 |
|
Daniel Povey
|
27f8497fea
|
Reduce pos_dim from 128 to 96.
|
2022-11-17 10:39:36 +08:00 |
|
Daniel Povey
|
526b5e59a6
|
Increase pos-head-dim from 2 to 4.
|
2022-11-16 11:53:55 +08:00 |
|
Daniel Povey
|
fc74ff63fb
|
Remove one feedforward module and give params to the other 2.
|
2022-11-16 11:46:05 +08:00 |
|
Daniel Povey
|
d542fa61ff
|
Double pos_dim from 64 to 128.
|
2022-11-16 11:35:25 +08:00 |
|
Daniel Povey
|
b32dec1119
|
Add printing capability
|
2022-11-14 14:16:28 +08:00 |
|
Daniel Povey
|
aa0b1a37cd
|
Change to valid interval for libri-100
|
2022-11-13 23:29:17 +08:00 |
|
Daniel Povey
|
a245d39e4c
|
Reduce pos-dim from 128 to 64.
|
2022-11-13 15:29:17 +08:00 |
|
Daniel Povey
|
603be9933b
|
Reduce pos-head-dim from 8 to 2
# Conflicts:
# egs/librispeech/ASR/pruned_transducer_stateless7/train.py
|
2022-11-12 23:22:55 +08:00 |
|
Daniel Povey
|
4988c815c9
|
Use more attention heads in slowest layer.
|
2022-11-11 22:56:14 +08:00 |
|
Daniel Povey
|
f8210e1d80
|
Reduce feedforward dim of the 4th and 5th encoder stacks.
# Conflicts:
# egs/librispeech/ASR/pruned_transducer_stateless7/train.py
|
2022-11-09 14:52:44 +08:00 |
|
Daniel Povey
|
d1d4be8ecc
|
Remove debug statement
|
2022-11-09 14:18:23 +08:00 |
|
Daniel Povey
|
423f9e3026
|
Increase query-head-dim from 24 to 32.
|
2022-11-09 13:28:29 +08:00 |
|
Daniel Povey
|
364a4c3838
|
Reduce pos_dim from 384 to 128.
|
2022-11-09 13:27:27 +08:00 |
|
Daniel Povey
|
cc260711b8
|
Make pos_dim the same as it was in scaled_adam_exp229, although this was probably too high.
|
2022-11-09 13:26:18 +08:00 |
|
Daniel Povey
|
20e6d2a157
|
Rework zipformer code for clarity and extensibility
|
2022-11-09 12:56:07 +08:00 |
|
Daniel Povey
|
8b0722e626
|
Rework how warmup count is produced; should not affect results.
|
2022-10-30 14:17:41 +08:00 |
|
Daniel Povey
|
ff03ec88a5
|
Tuning change to num encoder layers, inspired by relative param importance.
|
2022-10-29 15:56:02 +08:00 |
|
Daniel Povey
|
96ea4cf1be
|
Have 6 different encoder stacks, U-shaped network.
|
2022-10-28 20:36:45 +08:00 |
|
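A hedged sketch of the U-shaped layout this commit introduces: six encoder stacks running at progressively larger, then smaller, downsampling factors. The concrete factors below match the published Zipformer design, but mapping them to this exact commit is an assumption.

```python
# Illustrative only -- the factors are the published Zipformer defaults,
# assumed (not confirmed) to match this commit's configuration.
# The middle stack is the most downsampled, giving the "U" shape.
downsampling_factors = (1, 2, 4, 8, 4, 2)

def frame_rates(base_hz=50, factors=downsampling_factors):
    """Frame rate each encoder stack operates at, given the frame
    rate after the initial convolutional subsampling."""
    return [base_hz // f for f in factors]
```

The point of the U shape is that the cheap, heavily-downsampled stacks sit in the middle of the network, so later stacks can refine a higher-resolution representation again on the way out.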
Daniel Povey
|
7b57a34227
|
Have 4 encoder stacks
|
2022-10-28 19:32:14 +08:00 |
|
Daniel Povey
|
d7d5188bd9
|
Refactor how the downsampling is done so that it happens later, but the 1st encoder stack still operates after a subsampling of 2.
|
2022-10-28 19:20:21 +08:00 |
|
Daniel Povey
|
0a89f51dc9
|
Have a 3rd encoder, at downsampling factor of 8.
|
2022-10-28 17:42:00 +08:00 |
|
Daniel Povey
|
ed1b4d5e5d
|
Refactor zipformer for more flexibility so we can change number of encoder layers.
|
2022-10-28 17:32:38 +08:00 |
|
Daniel Povey
|
5dfa141ca5
|
Rename Conformer to Zipformer
|
2022-10-27 22:43:46 +08:00 |
|
Daniel Povey
|
3f05e47447
|
Rename conformer.py to zipformer.py
|
2022-10-27 22:41:48 +08:00 |
|
Daniel Povey
|
78f3cba58c
|
Add logging about memory used.
|
2022-10-25 19:19:33 +08:00 |
|
Daniel Povey
|
9da5526659
|
Changes to more accurately estimate OOM conditions
|
2022-10-25 12:49:18 +08:00 |
|
Daniel Povey
|
1e8984174b
|
Change to warmup schedule.
|
2022-10-25 12:27:00 +08:00 |
|
Daniel Povey
|
5b9d166cb9
|
--base-lr 0.075->0.5; --lr-epochs 3->3.5
|
2022-10-23 13:56:25 +08:00 |
|
Daniel Povey
|
9e86d1f44f
|
Reduce initial scale in GradScaler
|
2022-10-23 00:14:38 +08:00 |
|
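For context on why one would reduce GradScaler's initial scale, here is a toy dynamic loss scaler in the style of `torch.cuda.amp.GradScaler` (this is a sketch, not the icefall code; the magnitudes and growth/backoff policy are illustrative). With fp16 training, gradients are multiplied by a scale factor before the backward pass; if that scale starts too high, the first optimizer steps overflow and are skipped until the scale backs off.

```python
# Hedged sketch: a toy dynamic loss scaler illustrating what lowering
# init_scale changes.  Not the actual GradScaler implementation.
class ToyGradScaler:
    def __init__(self, init_scale=2.0 ** 16, backoff=0.5, growth=2.0):
        self.scale = init_scale
        self.backoff = backoff   # shrink factor after an overflow
        self.growth = growth     # growth factor after a clean step

    def step(self, grad_magnitude, fp16_max=65504.0):
        """Return True if the step was applied, False if skipped."""
        if grad_magnitude * self.scale > fp16_max:  # simulated inf/nan check
            self.scale *= self.backoff              # overflow: skip step, back off
            return False
        self.scale *= self.growth                   # clean step: grow the scale
        return True

# A large initial scale wastes early steps on overflows; a smaller one does not.
high = ToyGradScaler(init_scale=2.0 ** 16)
low = ToyGradScaler(init_scale=2.0 ** 8)
skipped_high = sum(not high.step(grad_magnitude=10.0) for _ in range(10))
skipped_low = sum(not low.step(grad_magnitude=10.0) for _ in range(10))
```

The actual `init_scale` value chosen in this commit is not recorded in the message; the numbers above are only to show the mechanism.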
Daniel Povey
|
2964628ae1
|
don't do penalize_values_gt on simple_lm_proj and simple_am_proj; reduce --base-lr from 0.1 to 0.075
|
2022-10-22 21:12:58 +08:00 |
|
Daniel Povey
|
11886dc4f6
|
Change base lr to 0.1, also rename from initial lr in train.py
|
2022-10-22 18:22:26 +08:00 |
|
Daniel Povey
|
146626bb85
|
Renaming in optim.py; remove step() from scan_pessimistic_batches_for_oom in train.py
|
2022-10-22 17:44:21 +08:00 |
|
Daniel Povey
|
525e87a82d
|
Add inf check hooks
|
2022-10-22 17:16:29 +08:00 |
|
Daniel Povey
|
e8066b5825
|
Merge branch 'scaled_adam_exp172' into scaled_adam_exp174
|
2022-10-22 15:44:04 +08:00 |
|
Daniel Povey
|
069125686e
|
Fixes to logging statements.
|
2022-10-22 15:08:07 +08:00 |
|
Daniel Povey
|
1d4382555c
|
Increase initial-lr from 0.06 to 0.075 and decrease lr-epochs from 3.5 to 3.
|
2022-10-22 15:04:08 +08:00 |
|
Daniel Povey
|
74d775014d
|
Increase initial-lr from 0.05 to 0.06.
|
2022-10-22 15:02:07 +08:00 |
|
Daniel Povey
|
aa5f34af64
|
Cosmetic change
|
2022-10-22 15:00:15 +08:00 |
|
Daniel Povey
|
1ec9fe5c98
|
Make warmup period decrease scale on simple loss, leaving pruned loss scale constant.
|
2022-10-22 14:48:53 +08:00 |
|
Daniel Povey
|
efde3757c7
|
Reset optimizer state when we change loss function definition.
|
2022-10-22 14:30:18 +08:00 |
|
Daniel Povey
|
84580ec022
|
Configuration changes: scores limit 5->10, min_prob 0.05->0.1, cur_grad_scale more aggressive increase
|
2022-10-22 14:09:53 +08:00 |
|
Daniel Povey
|
2e93e5d3b7
|
Add logging
|
2022-10-22 13:52:51 +08:00 |
|
Daniel Povey
|
fd3f21f84d
|
Changes to grad scale logging; increase grad scale more frequently if less than one.
|
2022-10-22 13:36:26 +08:00 |
|
Daniel Povey
|
1d2fe8e3c2
|
Add more diagnostics to debug gradient scale problems
|
2022-10-22 12:49:29 +08:00 |
|
Daniel Povey
|
b37564c9c9
|
Cosmetic changes
|
2022-10-18 12:49:14 +08:00 |
|
Daniel Povey
|
b988bc0e33
|
Increase initial-lr from 0.04 to 0.05, plus changes for diagnostics
|
2022-10-18 11:45:24 +08:00 |
|
Daniel Povey
|
3f495cd197
|
Reduce attention_dim to 192; cherry-pick scaled_adam_exp130 which is linear_pos interacting with query
|
2022-10-17 22:07:03 +08:00 |
|