Daniel Povey | 0406d0b059 | Increase max_abs in ActivationBalancer of conv module from 20 to 50 | 2022-10-23 13:51:51 +08:00
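For context on this and the following ActivationBalancer commits: the module constrains activation statistics by adding a correction to the gradient in the backward pass while leaving the forward output untouched. Below is a minimal sketch of the max_abs mechanism; the class name, the form of the penalty, and the use of max_factor as a relative scale are illustrative assumptions, not icefall's actual implementation.

    import torch

    class MaxAbsPenalty(torch.autograd.Function):
        """Identity in the forward pass; in the backward pass, adds a
        sign-correct term to the gradient of any element whose magnitude
        exceeds max_abs, nudging it back toward zero."""

        @staticmethod
        def forward(ctx, x, max_abs, max_factor):
            ctx.save_for_backward(x)
            ctx.max_abs = max_abs
            ctx.max_factor = max_factor
            return x

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            too_large = (x.abs() > ctx.max_abs).to(x.dtype)
            # Scale the nudge by the mean gradient magnitude so it stays
            # proportionate to the gradients already flowing.
            extra = ctx.max_factor * grad_output.abs().mean() * x.sign() * too_large
            return grad_output + extra, None, None

    x = (torch.randn(4, 8) * 30.0).requires_grad_()
    y = MaxAbsPenalty.apply(x, 50.0, 0.04)   # max_abs=50 as of this commit
    y.sum().backward()                       # |x| > 50 gets an extra push toward zero

Raising max_abs from 20 to 50 loosens this constraint, so fewer activations receive the extra push.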
Daniel Povey | b7083e7aff | Increase default max_factor for ActivationBalancer from 0.02 to 0.04; decrease max_abs in ConvolutionModule.deriv_balancer2 from 100.0 to 20.0 | 2022-10-23 00:09:21 +08:00
Daniel Povey | e0c1dc66da | Increase probs of ActivationBalancer and make it decay more slowly. | 2022-10-22 22:18:38 +08:00
Daniel Povey | 13ffd8e823 | Reduce grad_scale of Whiten() from 0.02 to 0.01. | 2022-10-22 20:30:05 +08:00
Daniel Povey | 9919fb3e1b | Increase grad_scale of the Whiten module | 2022-10-22 15:32:50 +08:00
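The two Whiten() commits above tune grad_scale, which controls how strongly the whitening penalty's gradient acts relative to the main objective. A rough sketch of the quantity being limited follows; the explicit-loss formulation is an assumption for clarity (the real module injects the penalty gradient in the backward pass), and the helper name is hypothetical.

    import torch

    def whitening_penalty(x, limit=5.0):
        """x: (num_frames, num_channels). Grows when the eigenvalues of the
        feature covariance are uneven, i.e. the features are far from white."""
        x = x - x.mean(dim=0)
        cov = (x.t() @ x) / x.shape[0]        # (C, C) covariance estimate
        eigs = torch.linalg.eigvalsh(cov)     # real eigenvalues, ascending
        # ratio >= 1, with equality iff all eigenvalues are equal.
        ratio = (eigs ** 2).mean() / (eigs.mean() ** 2 + 1e-20)
        return (ratio - limit).relu()         # zero until the ratio exceeds limit

    feats = torch.randn(200, 16, requires_grad=True)
    grad_scale = 0.01                         # the knob these two commits adjust
    (grad_scale * whitening_penalty(feats)).backward()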
Daniel Povey | 84580ec022 | Configuration changes: scores limit 5->10, min_prob 0.05->0.1, more aggressive increase of cur_grad_scale | 2022-10-22 14:09:53 +08:00
Daniel Povey | 3298e18732 | Increase limit on logit for SimpleCombiner to 25.0 | 2022-10-21 22:06:35 +08:00
Daniel Povey | e5fe3de17e | Also apply limit on logit in SimpleCombiner | 2022-10-21 21:43:45 +08:00
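These two SimpleCombiner commits cap the raw logits that produce the combination weights, preventing the weights from saturating. A minimal sketch of the idea; LimitedCombiner is a hypothetical stand-in, not the actual SimpleCombiner code.

    import torch
    import torch.nn as nn

    class LimitedCombiner(nn.Module):
        """Combines N inputs with softmax weights whose logits are
        hard-limited to [-logit_limit, logit_limit]."""

        def __init__(self, num_inputs, logit_limit=25.0):
            super().__init__()
            self.logits = nn.Parameter(torch.zeros(num_inputs))
            self.logit_limit = logit_limit

        def forward(self, xs):
            limited = self.logits.clamp(-self.logit_limit, self.logit_limit)
            weights = torch.softmax(limited, dim=0)
            return sum(w * x for w, x in zip(weights, xs))

    combiner = LimitedCombiner(num_inputs=3)
    out = combiner([torch.randn(2, 5) for _ in range(3)])

A hard clamp kills the gradient beyond the limit, so a soft penalty on out-of-range logits would be an alternative with the same intent.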
Daniel Povey | bdbd2cfce6 | Penalize too-large weights in the softmax of AttentionDownsample() | 2022-10-21 20:12:36 +08:00
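"Too-large weights" in the softmax of a downsampling module means one frame's weight approaching 1.0, which collapses the weighted average onto a single frame. One hedged way to penalize that, with max_weight as an assumed threshold (illustrative only):

    import torch

    def softmax_weight_penalty(scores, max_weight=0.95):
        """scores: raw logits over the frames being averaged together.
        Zero until some softmax weight exceeds max_weight, then grows."""
        weights = torch.softmax(scores, dim=-1)
        return (weights - max_weight).relu().sum()

    scores = torch.tensor([0.1, 8.0, 0.2], requires_grad=True)
    softmax_weight_penalty(scores).backward()  # pushes the dominant logit down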
Daniel Povey | 9f68b5717c | Reduce the limit on attention weights from 50 to 25. | 2022-10-21 12:13:23 +08:00
Daniel Povey | c5cb52fed1 | Remove the use of random_clamp in conformer.py. | 2022-10-20 19:54:38 +08:00
Daniel Povey | dccff6b893 | Remove use of RandomGrad | 2022-10-20 19:35:11 +08:00
Daniel Povey | 1018a77410 | Use normal implementation of softmax. | 2022-10-20 19:34:10 +08:00
Daniel Povey | 6e6209419c | Merge branch 'scaled_adam_exp150' into scaled_adam_exp155 (conflicts: egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py) | 2022-10-20 15:04:27 +08:00
Daniel Povey | 4565d43d5c | Add hard limit of attention weights to ±50 | 2022-10-20 14:28:22 +08:00
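"Attention weights" in these limit commits refers to the pre-softmax attention scores; hard-limiting them keeps any one query/key pair from producing an extreme logit. A sketch of the operation, with illustrative shapes (the newer commit further up this log reduces the limit from 50 to 25):

    import torch

    def limit_attn_scores(scores, limit=50.0):
        # Hard-clip the pre-softmax attention scores to [-limit, limit].
        return scores.clamp(min=-limit, max=limit)

    q = torch.randn(2, 4, 8)   # (batch, time, head_dim); shapes illustrative
    k = torch.randn(2, 4, 8)
    scores = limit_attn_scores(q @ k.transpose(-2, -1) / 8 ** 0.5)
    attn = torch.softmax(scores, dim=-1)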
Daniel Povey | a4443efa95 | Add RandomGrad with min_abs=1.0e-04 | 2022-10-19 19:46:17 +08:00
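RandomGrad's exact semantics are not recoverable from the message alone; one plausible reading, sketched below purely as an illustration, is stochastic rounding of small gradients: values with magnitude below min_abs are randomly rounded to 0 or ±min_abs so they survive in expectation instead of underflowing.

    import torch

    class RandomGradSketch(torch.autograd.Function):
        """Identity forward; backward stochastically rounds small gradients.
        E[returned grad] == grad, but nonzero grads are never below min_abs."""

        @staticmethod
        def forward(ctx, x, min_abs):
            ctx.min_abs = min_abs
            return x

        @staticmethod
        def backward(ctx, g):
            m = ctx.min_abs
            small = g.abs() < m
            # Keep magnitude m with probability |g|/m, else drop to zero:
            # unbiased in expectation for the small-gradient elements.
            keep = torch.rand_like(g) < (g.abs() / m)
            rounded = torch.where(keep, m * g.sign(), torch.zeros_like(g))
            return torch.where(small, rounded, g), None

    x = torch.randn(3, requires_grad=True)
    RandomGradSketch.apply(x, 1.0e-04).sum().backward()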
Daniel Povey | ef5a27388f | Merge branch 'scaled_adam_exp146' into scaled_adam_exp149 | 2022-10-19 19:16:27 +08:00
Daniel Povey | 9c54906e63 | Implement randomized backprop for softmax. | 2022-10-19 19:16:03 +08:00
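This experiment, reverted further up by "Use normal implementation of softmax", swaps autograd's softmax backward for a custom one. For reference, the exact softmax backward is shown below; a randomized variant would return an unbiased stochastic estimate of grad_x at the marked line (the precise randomization used in the commit is not recoverable from the message).

    import torch

    class SoftmaxExactBackward(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, dim):
            y = torch.softmax(x, dim=dim)
            ctx.save_for_backward(y)
            ctx.dim = dim
            return y

        @staticmethod
        def backward(ctx, grad_y):
            (y,) = ctx.saved_tensors
            # Exact softmax Jacobian-vector product:
            #   grad_x = y * (grad_y - sum(grad_y * y, dim))
            grad_x = y * (grad_y - (grad_y * y).sum(dim=ctx.dim, keepdim=True))
            # A randomized variant would return an unbiased stochastic
            # estimate of grad_x here instead of the exact value.
            return grad_x, None

    x = torch.randn(2, 5, requires_grad=True)
    (SoftmaxExactBackward.apply(x, -1) * torch.randn(2, 5)).sum().backward()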
Daniel Povey | 45c38dec61 | Remove in_balancer. | 2022-10-19 12:35:17 +08:00
Daniel Povey | f4442de1c4 | Add reflect=0.1 to invocations of random_clamp() | 2022-10-19 12:34:26 +08:00
Daniel Povey | c3c655d0bd | Randomly clip attention scores to -5..5. | 2022-10-19 11:59:24 +08:00
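random_clamp appears to be a stochastic version of clamping, later removed from conformer.py (see above). The sketch below is one interpretation, with prob and the reflect behavior as labeled assumptions: each element is clamped with some probability, and reflect=0.1 sends a fraction of the overshoot back inside the range rather than discarding it.

    import torch

    def random_clamp_sketch(x, min_val=-5.0, max_val=5.0, prob=0.5, reflect=0.1):
        """Clamp each element to [min_val, max_val] with probability prob,
        leaving the rest untouched; reflect sends a fraction of the
        overshoot back inside the range instead of discarding it."""
        clamped = x.clamp(min_val, max_val)
        overshoot = x - clamped                  # zero for in-range elements
        clamped = clamped - reflect * overshoot  # reflect part of the excess
        mask = torch.rand_like(x) < prob
        return torch.where(mask, clamped, x)

    scores = torch.randn(4, 4) * 10.0
    limited = random_clamp_sketch(scores)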
Daniel Povey | 6b3f9e5036 | Changes to avoid a bug in backward hooks that was affecting diagnostics. | 2022-10-19 11:06:17 +08:00
Daniel Povey | b37564c9c9 | Cosmetic changes | 2022-10-18 12:49:14 +08:00
Daniel Povey | b988bc0e33 | Increase initial-lr from 0.04 to 0.05, plus changes for diagnostics | 2022-10-18 11:45:24 +08:00
Daniel Povey | 2675944f01 | Use half the dim for values, vs. keys and queries. | 2022-10-17 22:15:06 +08:00
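Using half the dimension for values means the value projection, and hence the vector each attention head outputs, is half as wide as the query/key projections that compute the scores; this saves parameters where extra width helps least. A shape-level sketch with a hypothetical module name:

    import torch
    import torch.nn as nn

    class HalfValueAttention(nn.Module):
        def __init__(self, embed_dim=384, attention_dim=192, num_heads=4):
            super().__init__()
            self.num_heads = num_heads
            self.qk_dim = attention_dim // num_heads   # per-head query/key dim
            self.v_dim = self.qk_dim // 2              # values use half the dim
            self.q = nn.Linear(embed_dim, attention_dim)
            self.k = nn.Linear(embed_dim, attention_dim)
            self.v = nn.Linear(embed_dim, attention_dim // 2)
            self.out = nn.Linear(attention_dim // 2, embed_dim)

        def forward(self, x):                          # x: (batch, time, embed_dim)
            b, t, _ = x.shape
            q = self.q(x).view(b, t, self.num_heads, self.qk_dim).transpose(1, 2)
            k = self.k(x).view(b, t, self.num_heads, self.qk_dim).transpose(1, 2)
            v = self.v(x).view(b, t, self.num_heads, self.v_dim).transpose(1, 2)
            attn = torch.softmax(q @ k.transpose(-2, -1) / self.qk_dim ** 0.5, dim=-1)
            out = (attn @ v).transpose(1, 2).reshape(b, t, -1)
            return self.out(out)

    y = HalfValueAttention()(torch.randn(2, 10, 384))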
Daniel Povey | 3f495cd197 | Reduce attention_dim to 192; cherry-pick scaled_adam_exp130, which has linear_pos interacting with the query | 2022-10-17 22:07:03 +08:00
Daniel Povey | 03fe1ed200 | Make attention dims configurable rather than embed_dim//2; trying 256. | 2022-10-17 11:03:29 +08:00
Daniel Povey | 325f5539f9 | Simplify the dropout mask: no non-dropped-out sequences | 2022-10-16 19:14:24 +08:00
Daniel Povey | 29d4e8ec6d | Replace MaxEig with Whiten with limit=5.0, and move it to the end of ConformerEncoderLayer | 2022-10-16 11:36:12 +08:00
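MaxEig and Whiten target related failure modes: MaxEig penalizes a single direction capturing too much of the feature variance (the dominant covariance eigenvalue), while Whiten penalizes the whole eigenvalue spectrum being uneven, with limit bounding the allowed spread (see the whitening_penalty sketch earlier). Below is a sketch of the MaxEig-style measurement via power iteration; the helper is illustrative, not the removed module.

    import torch

    def dominant_eig_share(x, num_iters=10):
        """Fraction of the total variance of x (num_frames, num_channels)
        captured by the dominant covariance eigenvector, via power iteration."""
        x = x - x.mean(dim=0)
        cov = (x.t() @ x) / x.shape[0]
        v = torch.randn(cov.shape[0])
        for _ in range(num_iters):
            v = cov @ v
            v = v / v.norm()
        max_eig = v @ cov @ v                  # Rayleigh quotient
        return max_eig / cov.diagonal().sum()  # share of the trace

    share = dominant_eig_share(torch.randn(200, 16))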
Daniel Povey | ef4650bc8e | Revert whitening_limit from 1.1 to 2.2. | 2022-10-16 11:31:08 +08:00
Daniel Povey | fc728f2738 | Reorganize Whiten() code; configs are not the same as before. Also remove MaxEig for the self_attn module | 2022-10-15 23:20:18 +08:00
Daniel Povey | 9919a05612 | Fix debug stats. | 2022-10-15 16:47:46 +08:00
Daniel Povey | 252798b6a1 | Decrease whitening limit from 2.0 to 1.1. | 2022-10-15 16:06:15 +08:00
Daniel Povey | 593a6e946d | Fix an issue with scaling of the grad. | 2022-10-15 15:36:55 +08:00
Daniel Povey | fcbb960da1 | Also whiten the keys in the conformer. | 2022-10-15 15:32:20 +08:00
Daniel Povey | 91840faa97 | Implement whitening of values in the conformer. | 2022-10-15 15:27:05 +08:00
Daniel Povey | 125e1b167c | Merge branch 'scaled_adam_exp117' into scaled_adam_exp119 (conflicts: egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py) | 2022-10-15 14:34:56 +08:00
Daniel Povey | 80d51efd15 | Change cutoff for small_grad_norm | 2022-10-14 23:29:55 +08:00
Daniel Povey | 822465f73b | Bug fixes; change debug frequency | 2022-10-14 23:25:29 +08:00
Daniel Povey | 0557dbb720 | Use a larger delta, but only penalize when the grad norm is small | 2022-10-14 23:23:20 +08:00
Daniel Povey | 394d4c95f9 | Remove debug statements | 2022-10-14 23:09:05 +08:00
Daniel Povey | a780984e6b | Penalize attention-weight entropies above a limit. | 2022-10-14 23:01:30 +08:00
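Attention-weight entropy is maximal (the log of the number of frames) when attention is uniform, so penalizing entropies above a limit discourages heads that never commit to any frame. A minimal sketch of such a penalty, with the limit value illustrative:

    import torch

    def attention_entropy_penalty(attn_weights, limit=2.0):
        """attn_weights: (..., num_frames), rows summing to 1.
        Zero while the entropy stays at or below limit (in nats)."""
        entropy = -(attn_weights * (attn_weights + 1e-20).log()).sum(dim=-1)
        return (entropy - limit).relu().mean()

    w = torch.softmax(torch.randn(3, 10, requires_grad=True), dim=-1)
    attention_entropy_penalty(w).backward()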
Daniel Povey | 1812f6cb28 | Add different debug info. | 2022-10-14 21:16:23 +08:00
Daniel Povey | 90953537ad | Remove debug statement | 2022-10-14 20:59:26 +08:00
Daniel Povey | 18ff1de337 | Add debug code for attention weights and eigs | 2022-10-14 20:57:17 +08:00
Daniel Povey | ae6478c687 | This should just be a cosmetic change, regularizing how we get the warmup times from the layers. | 2022-10-13 19:41:28 +08:00
Daniel Povey | 7d8e460a53 | Revert dropout on attention scores to 0.0. | 2022-10-13 15:09:50 +08:00
Daniel Povey | 2a50def7c6 | Simplify how the positional-embedding scores work in attention (thanks to Zengwei for this concept) | 2022-10-13 15:08:00 +08:00
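A common form of this simplification, assumed here since the commit message does not spell it out: drop the separate content/position bias vectors of the Transformer-XL formulation and compute the score as the content term q·k plus a positional term from the query and a linear projection (linear_pos) of the relative-position embedding, matching how the cherry-picked scaled_adam_exp130 above is described.

    import torch
    import torch.nn as nn

    def rel_pos_scores(q, k, pos_emb, linear_pos):
        """q, k: (heads, time, d); pos_emb: (2*time-1, pos_dim).
        Returns the content and positional score terms separately."""
        d = q.shape[-1]
        content = q @ k.transpose(-2, -1)                     # (heads, time, time)
        p = linear_pos(pos_emb)                               # (2*time-1, heads*d)
        p = p.view(pos_emb.shape[0], -1, d).permute(1, 2, 0)  # (heads, d, 2*time-1)
        position = q @ p                                      # (heads, time, 2*time-1)
        # A relative-shift step would realign position to (time, time) here.
        return content / d ** 0.5, position / d ** 0.5

    heads, time, d, pos_dim = 4, 6, 16, 32
    linear_pos = nn.Linear(pos_dim, heads * d, bias=False)
    c, p = rel_pos_scores(torch.randn(heads, time, d),
                          torch.randn(heads, time, d),
                          torch.randn(2 * time - 1, pos_dim), linear_pos)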
Daniel Povey | 63334137ee | Merge branch 'scaled_adam_exp106' into scaled_adam_exp108 (conflicts: egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py) | 2022-10-13 12:22:13 +08:00
Daniel Povey | b736bb4840 | Cosmetic improvements | 2022-10-12 19:34:48 +08:00