Daniel Povey
12e8c3f0fa
One more layer on input
2022-11-29 16:47:24 +08:00
Daniel Povey
87ef4078d3
Add two more layers.
2022-11-28 13:56:40 +08:00
Daniel Povey
f483f1e0ef
Implement attention weights sharing for successive layers, for Zipformer
2022-11-28 13:41:11 +08:00
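A note on the attention-sharing commit above: the log doesn't show the implementation, but the idea can be sketched as a layer that computes its softmax attention weights once and returns them, so the next layer can reuse them and skip its own query/key projections. Names and shapes below are illustrative, not taken from zipformer.py.

```python
import torch
import torch.nn as nn

class SharedAttnLayer(nn.Module):
    """Hypothetical layer: reuses attention weights when given them."""
    def __init__(self, d_model: int, nhead: int):
        super().__init__()
        self.nhead = nhead
        self.head_dim = d_model // nhead
        self.qk_proj = nn.Linear(d_model, 2 * d_model)  # used only when weights are computed here
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x, attn_weights=None):
        # x: (seq_len, batch, d_model)
        seq_len, batch, _ = x.shape
        if attn_weights is None:
            q, k = self.qk_proj(x).chunk(2, dim=-1)
            q = q.reshape(seq_len, batch * self.nhead, self.head_dim).transpose(0, 1)
            k = k.reshape(seq_len, batch * self.nhead, self.head_dim).transpose(0, 1)
            attn_weights = (q @ k.transpose(1, 2) / self.head_dim ** 0.5).softmax(dim=-1)
        v = self.v_proj(x).reshape(seq_len, batch * self.nhead, self.head_dim).transpose(0, 1)
        out = (attn_weights @ v).transpose(0, 1).reshape(seq_len, batch, -1)
        # hand the weights back so the next layer can share them
        return x + self.out_proj(out), attn_weights

layers = nn.ModuleList([SharedAttnLayer(256, 4), SharedAttnLayer(256, 4)])
x = torch.randn(10, 2, 256)
x, w = layers[0](x)      # computes attention weights
x, _ = layers[1](x, w)   # reuses them; qk_proj is skipped
```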
Daniel Povey
a6fb9772a8
Remove 4 layers.
2022-11-27 13:29:29 +08:00
Daniel Povey
f71b1d2c3a
Add 4 more layers
2022-11-26 21:18:24 +08:00
Daniel Povey
320c58401f
Increase 2 feedforward dims from 1.5k to 2k.
2022-11-26 19:45:41 +08:00
Daniel Povey
1d0252d420
Merge branch 'scaled_adam_exp466' into scaled_adam_exp472.
...
Below is a more complete list of the changes I am making, although some of
these may already be counted in the last merge. The numbers XXX below
correspond to branches numbered scaled_adam_expXXX.
- from 412/413 (cherry-picked): dropout for attention in the attention_squeeze
and nonlin_attention modules, simplified a little to use the same dropout
schedule and drop them out all together (sketched after this entry); also have
all 3 submodules use separate heads.
- from 460->461, which is in the history of 464: revert the part about
balancing the output of the attention_squeeze module.
- merge from 462->467: use TanSwish instead of tanh.
- merge 462->465: remove whitening in the self-attention module.
- merge the part of 465->466 that was about diagnostics (printing the name in
the Whiten module).
2022-11-23 14:41:09 +08:00
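On the "drop them out all together" item in the merge notes above: the message alone doesn't pin down the mechanism, but one plausible reading is a single dropout mask applied once to the shared attention weights, so every consuming submodule (attention_squeeze, nonlin_attention) sees the same dropped-out weights. A speculative sketch; the function name and the row-wise masking are mine:

```python
import torch

def joint_attn_dropout(attn_weights: torch.Tensor, p: float, training: bool) -> torch.Tensor:
    """attn_weights: (batch * heads, seq, seq); p comes from a shared schedule.
    Zeroes whole rows of the weights so all consumers drop out together."""
    if not training or p == 0.0:
        return attn_weights
    keep = (
        torch.rand(attn_weights.shape[0], attn_weights.shape[1], 1,
                   device=attn_weights.device) > p
    ).to(attn_weights.dtype)
    return attn_weights * keep / (1.0 - p)
```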
Daniel Povey
6c5763fbb3
Implement subtracted momentum [0.33,0.66], and print name in Whiten module.
2022-11-22 21:57:48 +08:00
Daniel Povey
99cd9f5788
Add more layers.
2022-11-22 19:48:42 +08:00
Daniel Povey
211e3af680
Remove changes in previous merge commit that did not relate to length_factor.
2022-11-21 14:32:05 +08:00
Daniel Povey
a52ec3da28
Change feedforward dims: increase 1536->1792 for largest ff dim and move it one step later.
2022-11-20 14:24:41 +08:00
Daniel Povey
8a095c1cd1
Add SmallConvModule; decrease feedforward dims to keep about same num params.
2022-11-18 12:46:40 +08:00
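The log never defines SmallConvModule; a minimal guess consistent with the name, and with the note about keeping the parameter count steady, is a lightweight pointwise/depthwise/pointwise convolution over the time axis. Everything below is an assumption, not the icefall module:

```python
import torch
import torch.nn as nn

class SmallConvModule(nn.Module):
    """Hypothetical: pointwise -> depthwise(k=3) -> activation -> pointwise."""
    def __init__(self, d_model: int, kernel_size: int = 3):
        super().__init__()
        self.pointwise1 = nn.Conv1d(d_model, d_model, 1)
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size,
                                   padding=kernel_size // 2, groups=d_model)
        self.activation = nn.SiLU()
        self.pointwise2 = nn.Conv1d(d_model, d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, batch, d_model); convolve over the time axis
        x = x.permute(1, 2, 0)  # (batch, d_model, seq_len)
        x = self.pointwise2(self.activation(self.depthwise(self.pointwise1(x))))
        return x.permute(2, 0, 1)  # back to (seq_len, batch, d_model)
```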
Daniel Povey
e9806950f5
Reduce pos-dim from 96 to 48.
2022-11-17 23:42:39 +08:00
Daniel Povey
27f8497fea
Reduce pos_dim from 128 to 96.
2022-11-17 10:39:36 +08:00
Daniel Povey
526b5e59a6
Increase pos-head-dim from 2 to 4.
2022-11-16 11:53:55 +08:00
Daniel Povey
fc74ff63fb
Remove one feedforward module and give params to the other 2.
2022-11-16 11:46:05 +08:00
Daniel Povey
d542fa61ff
Double pos_dim from 64 to 128.
2022-11-16 11:35:25 +08:00
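The pos-dim / pos-head-dim commits around this date (and more of them below) tune two distinct knobs. In a relative-position attention of the kind Zipformer uses, pos_dim would be the width of the relative positional embedding and pos_head_dim the per-head slice of the query reserved for attending to it. The shapes below illustrate that reading; they are not lifted from zipformer.py:

```python
import torch
import torch.nn as nn

seq, batch, num_heads = 10, 2, 4
pos_dim, pos_head_dim = 96, 4   # the two knobs being tuned in these commits

pos_emb = torch.randn(2 * seq - 1, pos_dim)   # relative offsets -(seq-1)..(seq-1)
pos_proj = nn.Linear(pos_dim, num_heads * pos_head_dim)

q_pos = torch.randn(batch, num_heads, seq, pos_head_dim)  # positional part of the query
p = pos_proj(pos_emb).reshape(2 * seq - 1, num_heads, pos_head_dim)

# positional score, (batch, heads, seq, 2*seq-1); it would later be
# re-indexed to (seq, seq) and added to the content score
pos_score = torch.einsum('bhtd,rhd->bhtr', q_pos, p)
```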
Daniel Povey
b32dec1119
Add printing capability
2022-11-14 14:16:28 +08:00
Daniel Povey
aa0b1a37cd
Change to valid interval for libri-100
2022-11-13 23:29:17 +08:00
Daniel Povey
a245d39e4c
Reduce pos-dim from 128 to 64.
2022-11-13 15:29:17 +08:00
Daniel Povey
603be9933b
Reduce pos-head-dim from 8 to 2
...
# Conflicts:
# egs/librispeech/ASR/pruned_transducer_stateless7/train.py
2022-11-12 23:22:55 +08:00
Daniel Povey
4988c815c9
Use more attention heads in slowest layer.
2022-11-11 22:56:14 +08:00
Daniel Povey
f8210e1d80
Reduce feedforward dim of the 4th and 5th encoder stacks.
...
# Conflicts:
# egs/librispeech/ASR/pruned_transducer_stateless7/train.py
2022-11-09 14:52:44 +08:00
Daniel Povey
d1d4be8ecc
Remove debug statement
2022-11-09 14:18:23 +08:00
Daniel Povey
423f9e3026
Increase query-head-dim from 24 to 32.
2022-11-09 13:28:29 +08:00
Daniel Povey
364a4c3838
Reduce pos_dim from 384 to 128.
2022-11-09 13:27:27 +08:00
Daniel Povey
cc260711b8
Make pos_dim the same as it was in scaled_adam_exp229, although this was probably too high.
2022-11-09 13:26:18 +08:00
Daniel Povey
20e6d2a157
Rework zipformer code for clarity and extensibility
2022-11-09 12:56:07 +08:00
Daniel Povey
8b0722e626
Rework how warmup count is produced; should not affect results.
2022-10-30 14:17:41 +08:00
Daniel Povey
ff03ec88a5
Tuning change to num encoder layers, inspired by relative param importance.
2022-10-29 15:56:02 +08:00
Daniel Povey
96ea4cf1be
Have 6 different encoder stacks, U-shaped network.
2022-10-28 20:36:45 +08:00
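The "U-shaped network" commit is the structural one in this stretch: six encoder stacks, each running at its own downsampling factor, finest at the ends and coarsest in the middle. A toy sketch of the wrapper such a design needs; the factors, the naive subsample/upsample, and the residual are illustrative, not the icefall code:

```python
import torch
import torch.nn as nn

class DownsampledStack(nn.Module):
    """Run an encoder stack at a reduced frame rate, then restore the rate."""
    def __init__(self, stack: nn.Module, factor: int):
        super().__init__()
        self.stack, self.factor = stack, factor

    def forward(self, x):  # x: (seq_len, batch, d_model)
        y = self.stack(x[::self.factor])                           # naive subsample
        y = y.repeat_interleave(self.factor, dim=0)[: x.shape[0]]  # naive upsample
        return x + y                                               # residual around the stack

# U-shaped: factors rise toward the middle stacks, then fall again
stacks = nn.ModuleList(
    DownsampledStack(nn.Identity(), f) for f in (1, 2, 4, 8, 4, 2)
)
x = torch.randn(16, 2, 256)
for s in stacks:
    x = s(x)
```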
Daniel Povey
7b57a34227
Have 4 encoder stacks
2022-10-28 19:32:14 +08:00
Daniel Povey
d7d5188bd9
Refactor how the downsampling is done so that it happens later, but the 1st encoder stack still operates after a subsampling of 2.
2022-10-28 19:20:21 +08:00
Daniel Povey
0a89f51dc9
Have a 3rd encoder, at downsampling factor of 8.
2022-10-28 17:42:00 +08:00
Daniel Povey
ed1b4d5e5d
Refactor zipformer for more flexibility so we can change number of encoder layers.
2022-10-28 17:32:38 +08:00
Daniel Povey
5dfa141ca5
Rename Conformer to Zipformer
2022-10-27 22:43:46 +08:00
Daniel Povey
3f05e47447
Rename conformer.py to zipformer.py
2022-10-27 22:41:48 +08:00
Daniel Povey
78f3cba58c
Add logging about memory used.
2022-10-25 19:19:33 +08:00
Daniel Povey
9da5526659
Changes to more accurately estimate OOM conditions
2022-10-25 12:49:18 +08:00
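Context for the OOM-estimation commit above (and for scan_pessimistic_batches_for_oom, mentioned further down): the idea is to run forward+backward on the most memory-hungry batches before real training starts, so an out-of-memory error surfaces immediately rather than hours in. A simplified stand-in, not the icefall code:

```python
import torch

def scan_pessimistic_batches(model, criterion, batches):
    """`batches` would be the most memory-hungry ones, e.g. longest utterances."""
    for batch in batches:
        try:
            loss = criterion(model(batch["inputs"]), batch["targets"])
            loss.backward()
            model.zero_grad(set_to_none=True)  # discard grads; no optimizer step
        except RuntimeError as e:
            if "out of memory" in str(e):
                raise RuntimeError(
                    "OOM on a pessimistic batch of shape "
                    f"{tuple(batch['inputs'].shape)}; reduce the max batch size"
                ) from e
            raise
```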
Daniel Povey
1e8984174b
Change to warmup schedule.
2022-10-25 12:27:00 +08:00
Daniel Povey
5b9d166cb9
--base-lr 0.075->0.5; --lr-epochs 3->3.5
2022-10-23 13:56:25 +08:00
Daniel Povey
9e86d1f44f
reduce initial scale in GradScaler
2022-10-23 00:14:38 +08:00
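torch.cuda.amp.GradScaler starts at init_scale=2.0**16 by default; reducing the initial scale presumably means passing a smaller value so the first fp16 steps don't overflow and trigger repeated backoff. The value below is an example, not the commit's:

```python
from torch.cuda.amp import GradScaler

# example value only; the commit does not state what it was reduced to
scaler = GradScaler(init_scale=2.0 ** 10)
```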
Daniel Povey
2964628ae1
don't do penalize_values_gt on simple_lm_proj and simple_am_proj; reduce --base-lr from 0.1 to 0.075
2022-10-22 21:12:58 +08:00
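A hedged sketch of what a penalize-values-greater-than helper, as named in this commit, could look like: identity in the forward pass, with an extra gradient term pushing activations whose magnitude exceeds a limit back toward it. This is my reconstruction, not the icefall implementation:

```python
import torch

class PenalizeValuesGT(torch.autograd.Function):
    """Identity forward; backward adds a penalty gradient where |x| > limit."""
    @staticmethod
    def forward(ctx, x: torch.Tensor, limit: float, penalty: float):
        ctx.save_for_backward(x)
        ctx.limit, ctx.penalty = limit, penalty
        return x

    @staticmethod
    def backward(ctx, grad_out: torch.Tensor):
        (x,) = ctx.saved_tensors
        over = (x.abs() > ctx.limit).to(grad_out.dtype) * x.sign()
        return grad_out + ctx.penalty * over, None, None

x = (torch.randn(4, 8) * 20.0).requires_grad_()
PenalizeValuesGT.apply(x, 10.0, 1e-4).sum().backward()
```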
Daniel Povey
11886dc4f6
Change base lr to 0.1; also rename it from initial lr in train.py
2022-10-22 18:22:26 +08:00
Daniel Povey
146626bb85
Renaming in optim.py; remove step() from scan_pessimistic_batches_for_oom in train.py
2022-10-22 17:44:21 +08:00
Daniel Povey
525e87a82d
Add inf check hooks
2022-10-22 17:16:29 +08:00
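"Inf check hooks" almost certainly means forward hooks that verify each module's output is finite, so the first layer producing an inf/nan is named in the error instead of the bad value propagating silently. A simplified version of that idea (the real icefall helper may differ in detail):

```python
import torch
import torch.nn as nn

def register_inf_check_hooks(model: nn.Module) -> None:
    for name, module in model.named_modules():
        def hook(mod, inputs, output, _name=name):
            outs = output if isinstance(output, tuple) else (output,)
            for o in outs:
                if isinstance(o, torch.Tensor) and not torch.isfinite(o).all():
                    raise RuntimeError(f"non-finite output in module: {_name}")
        module.register_forward_hook(hook)
```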
Daniel Povey
e8066b5825
Merge branch 'scaled_adam_exp172' into scaled_adam_exp174
2022-10-22 15:44:04 +08:00
Daniel Povey
069125686e
Fixes to logging statements.
2022-10-22 15:08:07 +08:00
Daniel Povey
1d4382555c
Increase initial-lr from 0.06 to 0.075 and decrease lr-epochs from 3.5 to 3.
2022-10-22 15:04:08 +08:00
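The --initial-lr / --lr-epochs pairs traded off in these last few commits feed icefall's Eden scheduler, which (as I read it; treat the exact exponents as an assumption) multiplies a batch-count decay term by an epoch-count decay term:

```python
def eden_lr(base_lr: float, batch: int, epoch: float,
            lr_batches: float = 5000.0, lr_epochs: float = 3.5) -> float:
    return (base_lr
            * ((batch ** 2 + lr_batches ** 2) / lr_batches ** 2) ** -0.25
            * ((epoch ** 2 + lr_epochs ** 2) / lr_epochs ** 2) ** -0.25)

# Raising initial-lr lifts the whole curve; lowering lr-epochs makes the
# epoch-wise decay kick in sooner.
print(eden_lr(0.075, batch=10_000, epoch=2.0))
```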
Daniel Povey
74d775014d
Increase initial-lr from 0.05 to 0.06.
2022-10-22 15:02:07 +08:00