Daniel Povey | 45c38dec61 | Remove in_balancer. | 2022-10-19 12:35:17 +08:00
Daniel Povey | f4442de1c4 | Add reflect=0.1 to invocations of random_clamp() | 2022-10-19 12:34:26 +08:00
Daniel Povey | c3c655d0bd | Randomly clip attention scores to -5..5. | 2022-10-19 11:59:24 +08:00
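The two commits above clamp raw attention scores into [-5, 5] during training. random_clamp() itself lives in the recipe's scaling.py and its exact semantics are not visible in this log, so the following is only a sketch of the idea; in particular, reading reflect=0.1 as "fold a fraction of the clipped-off overshoot back inside the range" is an assumption, as are the function name and defaults.

```python
import torch

def random_clamp_sketch(x: torch.Tensor,
                        min_val: float = -5.0,
                        max_val: float = 5.0,
                        prob: float = 0.5,
                        reflect: float = 0.1) -> torch.Tensor:
    # With probability `prob`, clamp x to [min_val, max_val]; a fraction
    # `reflect` of the clipped-off overshoot is reflected back inside the
    # range, so clipped values do not all saturate exactly at the boundary.
    if torch.rand(()).item() > prob:
        return x
    clamped = x.clamp(min_val, max_val)
    return clamped - reflect * (x - clamped)

# e.g. applied to raw attention scores, before the softmax:
scores = 4.0 * torch.randn(2, 8, 50, 50)
scores = random_clamp_sketch(scores)
```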
Daniel Povey | 6b3f9e5036 | Changes to avoid bug in backward hooks, affecting diagnostics. | 2022-10-19 11:06:17 +08:00
Daniel Povey | b37564c9c9 | Cosmetic changes | 2022-10-18 12:49:14 +08:00
Daniel Povey | b988bc0e33 | Increase initial-lr from 0.04 to 0.05, plus changes for diagnostics | 2022-10-18 11:45:24 +08:00
Daniel Povey | 2675944f01 | Use half the dim for values, vs. keys and queries. | 2022-10-17 22:15:06 +08:00
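A minimal sketch of what "half the dim for values" could look like: queries and keys get the full attention_dim while values (and hence the input to the output projection) get attention_dim // 2. The class and dimensions below are illustrative; the real module is RelPosMultiheadAttention in pruned_transducer_stateless7/conformer.py.

```python
import torch
import torch.nn as nn

class HalfValueDimAttention(nn.Module):
    """Illustrative only (not the repo's RelPosMultiheadAttention):
    queries/keys use the full attention_dim, values use half of it."""

    def __init__(self, embed_dim: int = 384, attention_dim: int = 192,
                 num_heads: int = 8):
        super().__init__()
        assert attention_dim % num_heads == 0
        assert (attention_dim // 2) % num_heads == 0
        self.num_heads = num_heads
        self.qk_head_dim = attention_dim // num_heads
        self.v_head_dim = (attention_dim // 2) // num_heads  # half the dim for values
        self.q_proj = nn.Linear(embed_dim, attention_dim)
        self.k_proj = nn.Linear(embed_dim, attention_dim)
        self.v_proj = nn.Linear(embed_dim, attention_dim // 2)
        self.out_proj = nn.Linear(attention_dim // 2, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape  # (batch, time, embed_dim)
        q = self.q_proj(x).reshape(b, t, self.num_heads, self.qk_head_dim)
        k = self.k_proj(x).reshape(b, t, self.num_heads, self.qk_head_dim)
        v = self.v_proj(x).reshape(b, t, self.num_heads, self.v_head_dim)
        scores = torch.einsum("bthd,bshd->bhts", q, k) / self.qk_head_dim ** 0.5
        attn = scores.softmax(dim=-1)
        out = torch.einsum("bhts,bshd->bthd", attn, v).reshape(b, t, -1)
        return self.out_proj(out)

x = torch.randn(2, 100, 384)
print(HalfValueDimAttention()(x).shape)  # torch.Size([2, 100, 384])
```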
Daniel Povey | 3f495cd197 | Reduce attention_dim to 192; cherry-pick scaled_adam_exp130, which is linear_pos interacting with the query | 2022-10-17 22:07:03 +08:00
Daniel Povey | 03fe1ed200 | Make attention dims configurable rather than hard-coded to embed_dim//2; trying 256. | 2022-10-17 11:03:29 +08:00
Daniel Povey | 325f5539f9 | Simplify the dropout mask: no non-dropped-out sequences | 2022-10-16 19:14:24 +08:00
Daniel Povey | 29d4e8ec6d | Replace MaxEig with Whiten with limit=5.0, and move it to the end of ConformerEncoderLayer | 2022-10-16 11:36:12 +08:00
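Whiten() (like the MaxEig it replaces here) constrains how "non-white" the covariance of the activations is. A sketch of the kind of metric such a module can gate on, under the assumption that it compares the mean squared covariance eigenvalue to the squared mean eigenvalue; limit=5.0 would then mean the gradient penalty activates only above that ratio. This is a reading of the commit messages, not a copy of scaling.py.

```python
import torch

def whitening_metric_sketch(x: torch.Tensor) -> torch.Tensor:
    # mean(eig^2) / mean(eig)^2 over the eigenvalues of the feature
    # covariance: 1.0 when all eigenvalues are equal ("white"), growing
    # with the eigenvalue spread.
    x = x.reshape(-1, x.shape[-1])              # (frames, channels)
    x = x - x.mean(dim=0, keepdim=True)
    cov = (x.t() @ x) / x.shape[0]              # (channels, channels)
    eig_mean = torch.diagonal(cov).mean()           # trace/dim    == mean(eig)
    eig_sq_mean = torch.diagonal(cov @ cov).mean()  # trace(C^2)/dim == mean(eig^2)
    return eig_sq_mean / (eig_mean ** 2 + 1e-20)

print(whitening_metric_sketch(torch.randn(8, 100, 192)))  # close to 1 for iid noise
```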
Daniel Povey | ef4650bc8e | Revert whitening_limit from 1.1 to 2.2. | 2022-10-16 11:31:08 +08:00
Daniel Povey | fc728f2738 | Reorganize Whiten() code; configs are not the same as before. Also remove MaxEig for the self_attn module | 2022-10-15 23:20:18 +08:00
Daniel Povey | 9919a05612 | Fix debug stats. | 2022-10-15 16:47:46 +08:00
Daniel Povey | 252798b6a1 | Decrease whitening limit from 2.0 to 1.1. | 2022-10-15 16:06:15 +08:00
Daniel Povey | 593a6e946d | Fix an issue with scaling of grad. | 2022-10-15 15:36:55 +08:00
Daniel Povey | fcbb960da1 | Also whiten the keys in conformer. | 2022-10-15 15:32:20 +08:00
Daniel Povey | 91840faa97 | Implement whitening of values in conformer. | 2022-10-15 15:27:05 +08:00
Daniel Povey | 125e1b167c | Merge branch 'scaled_adam_exp117' into scaled_adam_exp119 | 2022-10-15 14:34:56 +08:00
    Conflicts: egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py
Daniel Povey | 80d51efd15 | Change cutoff for small_grad_norm | 2022-10-14 23:29:55 +08:00
Daniel Povey | 822465f73b | Bug fixes; change debug freq | 2022-10-14 23:25:29 +08:00
Daniel Povey | 0557dbb720 | Use larger delta, but only penalize if grad norm is small | 2022-10-14 23:23:20 +08:00
Daniel Povey | 394d4c95f9 | Remove debug statements | 2022-10-14 23:09:05 +08:00
Daniel Povey | a780984e6b | Penalize attention-weight entropies above a limit. | 2022-10-14 23:01:30 +08:00
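A short sketch of penalizing attention-weight entropies above a limit, as the commit above describes; the threshold and scale below are illustrative assumptions, not the repo's settings.

```python
import torch

def attention_entropy_penalty(attn: torch.Tensor,
                              limit: float = 3.0,
                              penalty_scale: float = 0.01) -> torch.Tensor:
    # attn: post-softmax weights, (batch, heads, tgt, src), rows sum to 1.
    # Returns a loss term that is zero until the mean per-row entropy
    # exceeds `limit`.
    entropy = -(attn * (attn + 1e-20).log()).sum(dim=-1)  # (batch, heads, tgt)
    excess = (entropy.mean() - limit).clamp(min=0.0)
    return penalty_scale * excess

attn = torch.softmax(torch.randn(2, 8, 50, 50), dim=-1)
loss_extra = attention_entropy_penalty(attn)  # added to the training loss
```

High entropy means the attention is close to uniform, so this pushes heads toward actually attending somewhere; the earlier "only penalize if small grad norm" commit suggests the penalty was further gated on a gradient-norm condition.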
Daniel Povey | 1812f6cb28 | Add different debug info. | 2022-10-14 21:16:23 +08:00
Daniel Povey | 90953537ad | Remove debug statement | 2022-10-14 20:59:26 +08:00
Daniel Povey | 18ff1de337 | Add debug code for attention weights and eigs | 2022-10-14 20:57:17 +08:00
Daniel Povey | ae6478c687 | This should just be a cosmetic change, regularizing how we get the warmup times from the layers. | 2022-10-13 19:41:28 +08:00
Daniel Povey | 7d8e460a53 | Revert dropout on attention scores to 0.0. | 2022-10-13 15:09:50 +08:00
Daniel Povey | 2a50def7c6 | Simplify how the positional-embedding scores work in attention (thanks to Zengwei for this concept) | 2022-10-13 15:08:00 +08:00
Daniel Povey | 63334137ee | Merge branch 'scaled_adam_exp106' into scaled_adam_exp108 | 2022-10-13 12:22:13 +08:00
    Conflicts: egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py
Daniel Povey | b736bb4840 | Cosmetic improvements | 2022-10-12 19:34:48 +08:00
Daniel Povey | 12323025d7 | Make ActivationBalancer and MaxEig more efficient. | 2022-10-12 18:44:52 +08:00
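The log does not show how ActivationBalancer and MaxEig were made more efficient. One standard way to cheapen constraint modules like these, sketched below on the assumption that something similar is in play, is to run the expensive statistics/penalty path on only a random fraction of training batches and be a free identity otherwise.

```python
import random
import torch
import torch.nn as nn

class OccasionalConstraint(nn.Module):
    """Runs a constraint module (e.g. an activation balancer) on only a
    fraction of training batches; otherwise acts as the identity.
    Illustrative pattern, not the repo's actual mechanism."""

    def __init__(self, constraint: nn.Module, prob: float = 0.25):
        super().__init__()
        self.constraint = constraint
        self.prob = prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training and random.random() < self.prob:
            return self.constraint(x)
        return x
```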
Daniel Povey | eb58e6d74b | Remove persistent attention scores. | 2022-10-12 12:50:00 +08:00
Daniel Povey | 569762397f | Reduce final layerdrop_prob from 0.075 to 0.05. | 2022-10-10 19:04:52 +08:00
Daniel Povey | 12323f2fbf | Refactor RelPosMultiheadAttention to have a 2nd forward function, and introduce more modules in the conformer encoder layer | 2022-10-10 15:27:26 +08:00
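The refactor above splits attention into two calls so the softmaxed weights can be computed once and reused by additional submodules in the encoder layer. A skeleton of that shape; the method names and the exact split are assumptions based on the commit message, not the repo's API.

```python
import torch
import torch.nn as nn

class TwoStageAttention(nn.Module):
    """Skeleton only: forward() computes softmaxed attention weights,
    forward2() applies given weights to a value projection."""

    def __init__(self, embed_dim: int, num_heads: int):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.qk_proj = nn.Linear(embed_dim, 2 * embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns attention weights of shape (batch, heads, tgt, src).
        b, t, _ = x.shape
        q, k = self.qk_proj(x).chunk(2, dim=-1)
        q = q.reshape(b, t, self.num_heads, self.head_dim)
        k = k.reshape(b, t, self.num_heads, self.head_dim)
        scores = torch.einsum("bthd,bshd->bhts", q, k) / self.head_dim ** 0.5
        return scores.softmax(dim=-1)

    def forward2(self, x: torch.Tensor, attn: torch.Tensor) -> torch.Tensor:
        # Applies precomputed weights to a value projection of x.
        b, t, _ = x.shape
        v = self.v_proj(x).reshape(b, t, self.num_heads, self.head_dim)
        out = torch.einsum("bhts,bshd->bthd", attn, v).reshape(b, t, -1)
        return self.out_proj(out)
```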
Daniel Povey | f941991331 | Fix bug in choosing layers to drop | 2022-10-10 13:38:36 +08:00
Daniel Povey | 857b3735e7 | Fix bug where fewer layers were dropped than should be; remove unnecessary print statement. | 2022-10-10 13:18:40 +08:00
Daniel Povey | 09c9b02f6f | Increase final layerdrop prob from 0.05 to 0.075 | 2022-10-10 12:20:13 +08:00
Daniel Povey | 9f059f7115 | Fix s -> scaling for import. | 2022-10-10 11:50:15 +08:00
Daniel Povey | 00841f0f49 | Remove unused code LearnedScale. | 2022-10-09 16:07:31 +08:00
Daniel Povey | cf450908c6 | Revert also the changes in scaled_adam_exp85 regarding the warmup schedule | 2022-10-09 14:26:32 +08:00
Daniel Povey | 40fa33d702 | Decrease initial_layerdrop_prob from 0.75 to 0.5 | 2022-10-09 13:59:56 +08:00
Daniel Povey | 44ad73c44f | For speed, drop the same number of layers per job. | 2022-10-09 13:40:24 +08:00
Daniel Povey | f8f200e2b2 | Make layerdrop different in different processes. | 2022-10-09 12:25:12 +08:00
Daniel Povey | e6540865f3 | Do warmup by dropping out whole layers. | 2022-10-09 11:50:24 +08:00
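The layerdrop commits around here warm the model up by randomly skipping whole encoder layers early in training, with the drop probability annealed downward (the surrounding commits tune initial_layerdrop_prob around 0.5-0.75 and the final prob around 0.05-0.075). A sketch under assumed names and an assumed linear schedule:

```python
import torch
import torch.nn as nn

def layerdrop_prob(batch_count: float,
                   warmup_batches: float = 4000.0,
                   initial_prob: float = 0.5,
                   final_prob: float = 0.05) -> float:
    # Linearly anneal the probability of dropping a whole layer from
    # initial_prob down to final_prob over the warmup period.
    t = min(batch_count / warmup_batches, 1.0)
    return initial_prob + t * (final_prob - initial_prob)

class DroppableLayer(nn.Module):
    """During training, skip the wrapped layer entirely (identity) with
    the given probability; a sketch, not the repo's implementation."""

    def __init__(self, layer: nn.Module):
        super().__init__()
        self.layer = layer

    def forward(self, x: torch.Tensor, drop_prob: float = 0.0) -> torch.Tensor:
        if self.training and torch.rand(()).item() < drop_prob:
            return x  # whole layer bypassed
        return self.layer(x)

layer = DroppableLayer(nn.Linear(16, 16))
layer.train()
y = layer(torch.randn(4, 16), drop_prob=layerdrop_prob(batch_count=1000))
```

On this reading, "Make layerdrop different in different processes" amounts to not synchronizing (or rank-seeding) the random choice across DDP workers, and "drop the same number of layers per job" to sampling a fixed count of layers per batch rather than independent per-layer coin flips.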
Daniel Povey | 5255969544 | Revert "Change warmup schedule and increase warmup_batches from 4k to 6k" | 2022-10-09 11:30:27 +08:00
    This reverts commit 86845bd5d859ceb6f83cd83f3719c3e6641de987.
Daniel Povey | d467338837 | Limit bypass scale to >= 0.1 | 2022-10-08 21:37:21 +08:00
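The bypass scale here is a learned weight blending a module's input with its output; flooring it at 0.1 guarantees that the layer's output always contributes at least a little, so no layer is ever fully bypassed. A minimal sketch with assumed names and orientation:

```python
import torch
import torch.nn as nn

class Bypass(nn.Module):
    """Learned per-channel blend of a layer's input x and output y,
    with the blend weight floored at min_scale (illustrative)."""

    def __init__(self, num_channels: int, min_scale: float = 0.1):
        super().__init__()
        self.scale = nn.Parameter(torch.full((num_channels,), 0.5))
        self.min_scale = min_scale

    def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        s = self.scale.clamp(min=self.min_scale)  # limit bypass scale to >= 0.1
        return x + s * (y - x)  # s == 1 keeps y; s == min_scale keeps mostly x
```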
Daniel Povey | bc9fbe2579 | Bug fix | 2022-10-08 21:06:09 +08:00
Daniel Povey | 9023fe7151 | Change the initial keep-prob back from 0.25 to 0.5 | 2022-10-08 20:55:15 +08:00