Daniel Povey | 3f495cd197 | Reduce attention_dim to 192; cherry-pick scaled_adam_exp130 which is linear_pos interacting with query | 2022-10-17 22:07:03 +08:00
Daniel Povey | 03fe1ed200 | Make attention dims configurable, not embed_dim//2, trying 256. | 2022-10-17 11:03:29 +08:00
Daniel Povey | 325f5539f9 | Simplify the dropout mask, no non-dropped-out sequences | 2022-10-16 19:14:24 +08:00
Daniel Povey | ae0067c384 | Change LR schedule to start off higher | 2022-10-16 11:45:33 +08:00
Daniel Povey | 29d4e8ec6d | Replace MaxEig with Whiten with limit=5.0, and move it to end of ConformerEncoderLayer | 2022-10-16 11:36:12 +08:00
Daniel Povey | ef4650bc8e | Revert whitening_limit from 1.1 to 2.2. | 2022-10-16 11:31:08 +08:00
Daniel Povey | 1135669e93 | Bug fix RE float16 | 2022-10-16 10:58:22 +08:00
Daniel Povey | fc728f2738 | Reorganize Whiten() code; configs are not the same as before. Also remove MaxEig for self_attn module | 2022-10-15 23:20:18 +08:00
Daniel Povey | 9919a05612 | Fix debug stats. | 2022-10-15 16:47:46 +08:00
Daniel Povey | 252798b6a1 | Decrease whitening limit from 2.0 to 1.1. | 2022-10-15 16:06:15 +08:00
Daniel Povey | 593a6e946d | Fix an issue with scaling of grad. | 2022-10-15 15:36:55 +08:00
Daniel Povey | fcbb960da1 | Also whiten the keys in conformer. | 2022-10-15 15:32:20 +08:00
Daniel Povey | 91840faa97 | Implement whitening of values in conformer. | 2022-10-15 15:27:05 +08:00
Daniel Povey | 125e1b167c | Merge branch 'scaled_adam_exp117' into scaled_adam_exp119 (Conflicts: egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py) | 2022-10-15 14:34:56 +08:00
Daniel Povey | a0ef291f95 | Merging 109: linear positional encoding | 2022-10-15 12:58:59 +08:00
Daniel Povey | 0d452b5edb | Merge exp106 (remove persistent attention scores) | 2022-10-15 12:54:21 +08:00
Daniel Povey | 80d51efd15 | Change cutoff for small_grad_norm | 2022-10-14 23:29:55 +08:00
Daniel Povey | 822465f73b | Bug fixes; change debug freq | 2022-10-14 23:25:29 +08:00
Daniel Povey | 0557dbb720 | Use larger delta but only penalize if small grad norm | 2022-10-14 23:23:20 +08:00
Daniel Povey | 394d4c95f9 | Remove debug statements | 2022-10-14 23:09:05 +08:00
Daniel Povey | a780984e6b | Penalize attention-weight entropies above a limit. | 2022-10-14 23:01:30 +08:00
Daniel Povey | 1812f6cb28 | Add different debug info. | 2022-10-14 21:16:23 +08:00
Daniel Povey | 90953537ad | Remove debug statement | 2022-10-14 20:59:26 +08:00
Daniel Povey | 18ff1de337 | Add debug code for attention weights and eigs | 2022-10-14 20:57:17 +08:00
Daniel Povey | 96023419da | Reworking of ActivationBalancer code to hopefully balance speed and effectiveness. | 2022-10-14 19:20:32 +08:00
Daniel Povey | 5f375be159 | Merge branch 'scaled_adam_exp103b2' into scaled_adam_exp103b4 | 2022-10-14 15:27:10 +08:00
Daniel Povey | 15b91c12d6 | Reduce stats period from 10 to 4. | 2022-10-14 15:14:06 +08:00
Daniel Povey | db8b9919da | Reduce beta from 0.75 to 0.0. | 2022-10-14 15:12:59 +08:00
Daniel Povey | ae6478c687 | This should just be a cosmetic change, regularizing how we get the warmup times from the layers. | 2022-10-13 19:41:28 +08:00
Daniel Povey | 7d8e460a53 | Revert dropout on attention scores to 0.0. | 2022-10-13 15:09:50 +08:00
Daniel Povey | 2a50def7c6 | Simplify how the positional-embedding scores work in attention (thanks to Zengwei for this concept) | 2022-10-13 15:08:00 +08:00
Daniel Povey | 23d6bf7765 | Fix bug when channel_dim < 0 | 2022-10-13 13:52:28 +08:00
Daniel Povey | b09a1b2ae6 | Fix bug when channel_dim < 0 | 2022-10-13 13:40:43 +08:00
Daniel Povey | 9270e32a51 | Remove unused config value | 2022-10-13 13:34:35 +08:00
Daniel Povey | 63334137ee | Merge branch 'scaled_adam_exp106' into scaled_adam_exp108 (Conflicts: egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py) | 2022-10-13 12:22:13 +08:00
Daniel Povey | 9e30f2bf12 | Make the ActivationBalancer regress to the data mean, not zero, when enforcing abs constraint. | 2022-10-13 12:05:45 +08:00
Daniel Povey | 49c6b6943d | Change scale_factor_scale from 0.5 to 0.8 | 2022-10-12 20:55:52 +08:00
Daniel Povey | b736bb4840 | Cosmetic improvements | 2022-10-12 19:34:48 +08:00
Daniel Povey | 12323025d7 | Make ActivationBalancer and MaxEig more efficient. | 2022-10-12 18:44:52 +08:00
Daniel Povey | eb58e6d74b | Remove persistent attention scores. | 2022-10-12 12:50:00 +08:00
Daniel Povey | 1825336841 | Fix issue with diagnostics if stats is None | 2022-10-11 11:05:52 +08:00
Daniel Povey | 569762397f | Reduce final layerdrop_prob from 0.075 to 0.05. | 2022-10-10 19:04:52 +08:00
Daniel Povey | 12323f2fbf | Refactor RelPosMultiheadAttention to have 2nd forward function and introduce more modules in conformer encoder layer | 2022-10-10 15:27:26 +08:00
Daniel Povey | f941991331 | Fix bug in choosing layers to drop | 2022-10-10 13:38:36 +08:00
Daniel Povey | 857b3735e7 | Fix bug where fewer layers were dropped than should be; remove unnecessary print statement. | 2022-10-10 13:18:40 +08:00
Daniel Povey | 09c9b02f6f | Increase final layerdrop prob from 0.05 to 0.075 | 2022-10-10 12:20:13 +08:00
Daniel Povey | 9f059f7115 | Fix s -> scaling for import. | 2022-10-10 11:50:15 +08:00
Daniel Povey | d7f6e8eb51 | Only apply ActivationBalancer with prob 0.25. | 2022-10-10 00:26:31 +08:00
Daniel Povey | dece8ad204 | Various fixes from debugging with nvtx, but removed the NVTX annotations. | 2022-10-09 21:14:52 +08:00
Daniel Povey | bd7dce460b | Reintroduce batching to the optimizer | 2022-10-09 20:29:23 +08:00