Daniel Povey | 6601035db1 | Reduce min_abs from 1.0e-04 to 5.0e-06 | 2022-10-20 13:53:10 +08:00
Daniel Povey | 5a0914fdcf | Merge branch 'scaled_adam_exp149' into scaled_adam_exp150 | 2022-10-20 13:31:22 +08:00
Daniel Povey | 679ba2ee5e | Remove debug print | 2022-10-20 13:30:55 +08:00
Daniel Povey | 610281eaa2 | Keep just the RandomGrad changes, vs. 149. Git history may not reflect real changes. | 2022-10-20 13:28:50 +08:00
Daniel Povey | d137118484 | Get the randomized backprop for softmax in autocast mode working. | 2022-10-20 13:23:48 +08:00
Daniel Povey | d75d646dc4 | Merge branch 'scaled_adam_exp147' into scaled_adam_exp149 | 2022-10-20 12:59:50 +08:00
Daniel Povey | f6b8f0f631 | Fix bug in backprop of random_clamp() | 2022-10-20 12:49:29 +08:00
Daniel Povey | f08a869769 | Merge branch 'scaled_adam_exp151' into scaled_adam_exp150 | 2022-10-19 19:59:07 +08:00
Daniel Povey | cc15552510 | Use full precision to do softmax and store ans. | 2022-10-19 19:53:53 +08:00
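A minimal sketch of the idea behind "Use full precision to do softmax and store ans": under autocast, compute the softmax in float32, keep the float32 result ("ans") for backward, and apply the standard softmax gradient. The class below is illustrative only, not the code in this repository.

    import torch

    class SoftmaxInFp32(torch.autograd.Function):
        # Illustrative sketch: softmax computed and stored in float32 even when
        # the surrounding code runs under autocast (float16).
        @staticmethod
        def forward(ctx, x: torch.Tensor, dim: int) -> torch.Tensor:
            ans = x.to(torch.float32).softmax(dim=dim)
            ctx.save_for_backward(ans)
            ctx.dim = dim
            ctx.x_dtype = x.dtype
            return ans.to(x.dtype)

        @staticmethod
        def backward(ctx, ans_grad: torch.Tensor):
            (ans,) = ctx.saved_tensors
            ans_grad = ans_grad.to(torch.float32)
            # Softmax gradient: x_grad = ans * (ans_grad - sum(ans_grad * ans)).
            x_grad = ans * (ans_grad - (ans_grad * ans).sum(dim=ctx.dim, keepdim=True))
            return x_grad.to(ctx.x_dtype), None

    # Usage (hypothetical): attn_weights = SoftmaxInFp32.apply(scores, -1)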
Daniel Povey | a4443efa95 | Add RandomGrad with min_abs=1.0e-04 | 2022-10-19 19:46:17 +08:00
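The log does not show what RandomGrad actually does; one plausible reading, given the min_abs parameter and the later reductions of it, is stochastic rounding of small gradient elements so they survive float16 underflow while staying unbiased in expectation. The sketch below is an assumption for illustration, not the repository's implementation.

    import torch

    class RandomGrad(torch.autograd.Function):
        # Assumed behaviour: in backward, elements with |g| < min_abs are rounded
        # to sign(g) * min_abs with probability |g| / min_abs and to 0 otherwise,
        # so the expected gradient is unchanged.
        @staticmethod
        def forward(ctx, x: torch.Tensor, min_abs: float = 1.0e-04) -> torch.Tensor:
            ctx.min_abs = min_abs
            return x.view_as(x)  # identity in the forward pass

        @staticmethod
        def backward(ctx, g: torch.Tensor):
            min_abs = ctx.min_abs
            too_small = g.abs() < min_abs
            keep = torch.rand_like(g) < (g.abs() / min_abs)
            rounded = torch.where(keep, g.sign() * min_abs, torch.zeros_like(g))
            return torch.where(too_small, rounded, g), None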
Daniel Povey | 0ad4462632 | Reduce min_abs from 1e-03 to 1e-04 | 2022-10-19 19:27:28 +08:00
Daniel Povey | ef5a27388f | Merge branch 'scaled_adam_exp146' into scaled_adam_exp149 | 2022-10-19 19:16:27 +08:00
Daniel Povey | 9c54906e63 | Implement randomized backprop for softmax. | 2022-10-19 19:16:03 +08:00
marcoyang1998 | c30b8d3a1c | fix number of parameters in RESULTS.md (#627) | 2022-10-19 16:53:29 +08:00
Daniel Povey | d37c159174 | Revert model.py so there are no constraints on the output. | 2022-10-19 13:41:58 +08:00
Daniel Povey | 45c38dec61 | Remove in_balancer. | 2022-10-19 12:35:17 +08:00
Daniel Povey | f4442de1c4 | Add reflect=0.1 to invocations of random_clamp() | 2022-10-19 12:34:26 +08:00
Daniel Povey | 8e15d4312a | Add some random clamping in model.py | 2022-10-19 12:19:13 +08:00
Daniel Povey | c3c655d0bd | Random clip attention scores to -5..5. | 2022-10-19 11:59:24 +08:00
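For context, clipping attention scores keeps the pre-softmax logits in a bounded range such as -5..5. The sketch below uses plain deterministic clamping to illustrate the idea; the repository's random_clamp() (with its reflect argument) presumably applies this stochastically and/or with a modified gradient, which is not recoverable from this log.

    import torch

    def clip_attn_scores(scores: torch.Tensor, limit: float = 5.0) -> torch.Tensor:
        # Plain clamping of attention logits to [-limit, limit]; illustrative only.
        return scores.clamp(min=-limit, max=limit)

    scores = torch.randn(2, 4, 10, 10)                  # (batch, heads, query, key)
    attn_weights = clip_attn_scores(scores).softmax(dim=-1)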
Daniel Povey | 6b3f9e5036 | Changes to avoid bug in backward hooks, affecting diagnostics. | 2022-10-19 11:06:17 +08:00
Teo Wen Shen | 15c1a4a441 | CSJ Data Preparation (#617) | 2022-10-18 15:56:43 +08:00
    * workspace setup
    * csj prepare done
    * Change compute_fbank_musan.py to soft link
    * add description
    * change lhotse prepare csj command
    * split train-dev here
    * Add header
    * remove debug
    * save manifest_statistics
    * generate transcript in Lhotse
    * update comments in config file
Daniel Povey | b37564c9c9 | Cosmetic changes | 2022-10-18 12:49:14 +08:00
Daniel Povey | b988bc0e33 | Increase initial-lr from 0.04 to 0.05, plus changes for diagnostics | 2022-10-18 11:45:24 +08:00
Fangjun Kuang | d69bb826ed | Support exporting LSTM with projection to ONNX (#621) | 2022-10-18 11:25:31 +08:00
    * Support exporting LSTM with projection to ONNX
    * Add missing files
    * small fixes
Fangjun Kuang | d1f16a04bd | fix type hints for decode.py (#623) | 2022-10-18 06:56:12 +08:00
Daniel Povey | 2675944f01 | Use half the dim for values, vs. keys and queries. | 2022-10-17 22:15:06 +08:00
Daniel Povey | 3f495cd197 | Reduce attention_dim to 192; cherry-pick scaled_adam_exp130 which is linear_pos interacting with query | 2022-10-17 22:07:03 +08:00
Daniel Povey | 03fe1ed200 | Make attention dims configurable, not embed_dim//2, trying 256. | 2022-10-17 11:03:29 +08:00
Daniel Povey | 325f5539f9 | Simplify the dropout mask, no non-dropped-out sequences | 2022-10-16 19:14:24 +08:00
Daniel Povey | ae0067c384 | Change LR schedule to start off higher | 2022-10-16 11:45:33 +08:00
Daniel Povey | 29d4e8ec6d | Replace MaxEig with Whiten with limit=5.0, and move it to end of ConformerEncoderLayer | 2022-10-16 11:36:12 +08:00
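For reference, a whitening constraint of this kind can be phrased as a penalty on how far the channel covariance of the activations is from a multiple of the identity, applied only when a whiteness metric exceeds the limit (5.0 above). The metric and function names below are assumptions for illustration, not the repository's Whiten module.

    import torch

    def whitening_metric(x: torch.Tensor, eps: float = 1.0e-08) -> torch.Tensor:
        # Assumed metric: x has shape (num_frames, num_channels); the result is
        # >= 1 and equals 1 exactly when the channel covariance is proportional
        # to the identity (fully "white").
        x = x.to(torch.float32)
        x = x - x.mean(dim=0)
        cov = (x.t() @ x) / x.shape[0]            # (C, C) covariance
        eigs = torch.linalg.eigvalsh(cov)         # real eigenvalues, ascending
        return (eigs ** 2).mean() / (eigs.mean() ** 2 + eps)

    def whitening_penalty(x: torch.Tensor, limit: float = 5.0) -> torch.Tensor:
        # Penalize only the amount by which the metric exceeds the limit.
        return (whitening_metric(x) - limit).clamp(min=0.0)

Under this assumed metric, a perfectly white covariance gives 1.0, so the limits mentioned in nearby commits (1.1, 2.0/2.2, 5.0) would correspond to progressively looser constraints.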
Daniel Povey | ef4650bc8e | Revert whitening_limit from 1.1 to 2.2. | 2022-10-16 11:31:08 +08:00
Daniel Povey | 1135669e93 | Bug fix RE float16 | 2022-10-16 10:58:22 +08:00
Daniel Povey | fc728f2738 | Reorganize Whiten() code; configs are not the same as before. Also remove MaxEig for self_attn module | 2022-10-15 23:20:18 +08:00
Daniel Povey | 9919a05612 | Fix debug stats. | 2022-10-15 16:47:46 +08:00
Daniel Povey | 252798b6a1 | Decrease whitening limit from 2.0 to 1.1. | 2022-10-15 16:06:15 +08:00
Daniel Povey | 593a6e946d | Fix an issue with scaling of grad. | 2022-10-15 15:36:55 +08:00
Daniel Povey | fcbb960da1 | Also whiten the keys in conformer. | 2022-10-15 15:32:20 +08:00
Daniel Povey | 91840faa97 | Implement whitening of values in conformer. | 2022-10-15 15:27:05 +08:00
Daniel Povey | 125e1b167c | Merge branch 'scaled_adam_exp117' into scaled_adam_exp119 | 2022-10-15 14:34:56 +08:00
    # Conflicts:
    #   egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py
Daniel Povey | a0ef291f95 | Merging 109: linear positional encoding | 2022-10-15 12:58:59 +08:00
Daniel Povey | 0d452b5edb | Merge exp106 (remove persistent attention scores) | 2022-10-15 12:54:21 +08:00
Daniel Povey | 80d51efd15 | Change cutoff for small_grad_norm | 2022-10-14 23:29:55 +08:00
Daniel Povey | 822465f73b | Bug fixes; change debug freq | 2022-10-14 23:25:29 +08:00
Daniel Povey | 0557dbb720 | use larger delta but only penalize if small grad norm | 2022-10-14 23:23:20 +08:00
Daniel Povey | 394d4c95f9 | Remove debug statements | 2022-10-14 23:09:05 +08:00
Daniel Povey | a780984e6b | Penalize attention-weight entropies above a limit. | 2022-10-14 23:01:30 +08:00
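A minimal sketch of penalizing attention-weight entropies above a limit: compute the entropy of each attention distribution and penalize only the excess over a chosen threshold. The threshold value and names below are placeholders, not taken from the repository.

    import torch

    def attention_entropy_penalty(attn_weights: torch.Tensor,
                                  entropy_limit: float = 2.0) -> torch.Tensor:
        # attn_weights: softmax output, probabilities over the last dim,
        # e.g. shape (batch, heads, query_len, key_len). Returns a scalar
        # penalty that is zero when every distribution's entropy <= entropy_limit.
        eps = 1.0e-20
        entropy = -(attn_weights * (attn_weights + eps).log()).sum(dim=-1)
        return (entropy - entropy_limit).clamp(min=0.0).mean()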
Daniel Povey | 1812f6cb28 | Add different debug info. | 2022-10-14 21:16:23 +08:00
Daniel Povey | 90953537ad | Remove debug statement | 2022-10-14 20:59:26 +08:00
Daniel Povey | 18ff1de337 | Add debug code for attention weights and eigs | 2022-10-14 20:57:17 +08:00