1855 Commits

Author SHA1 Message Date
Daniel Povey
fcbb960da1 Also whiten the keys in conformer. 2022-10-15 15:32:20 +08:00
Daniel Povey
91840faa97 Implement whitening of values in conformer. 2022-10-15 15:27:05 +08:00
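The two whitening commits above penalize the keys/values when their feature covariance drifts far from a scaled identity. As a rough, hypothetical sketch (not the actual icefall code; the function name and metric are illustrative), one way to measure "whiteness" is the ratio of the summed squared covariance entries to their value under a scaled-identity covariance:

```python
import numpy as np

def whitening_metric(x):
    # Hypothetical illustration: x is (num_frames, num_channels).
    # Returns 1.0 when the channel covariance is a multiple of the
    # identity (features already "white"), and grows as channels
    # become correlated or unequally scaled.
    x = np.asarray(x, dtype=float)
    x = x - x.mean(axis=0)                  # zero-mean per channel
    cov = (x.T @ x) / x.shape[0]            # channel covariance
    num_channels = cov.shape[0]
    return float((cov ** 2).sum() / (np.trace(cov) ** 2 / num_channels))
```

A training loop could add a small loss term proportional to this metric (or its excess above 1.0) to keep the key/value features decorrelated.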
Daniel Povey
125e1b167c Merge branch 'scaled_adam_exp117' into scaled_adam_exp119
# Conflicts:
#	egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py
2022-10-15 14:34:56 +08:00
Daniel Povey
a0ef291f95 Merging 109: linear positional encoding 2022-10-15 12:58:59 +08:00
Daniel Povey
0d452b5edb Merge exp106 (remove persistent attention scores) 2022-10-15 12:54:21 +08:00
Daniel Povey
80d51efd15 Change cutoff for small_grad_norm 2022-10-14 23:29:55 +08:00
Daniel Povey
822465f73b Bug fixes; change debug freq 2022-10-14 23:25:29 +08:00
Daniel Povey
0557dbb720 use larger delta but only penalize if small grad norm 2022-10-14 23:23:20 +08:00
Daniel Povey
394d4c95f9 Remove debug statements 2022-10-14 23:09:05 +08:00
Daniel Povey
a780984e6b Penalize attention-weight entropies above a limit. 2022-10-14 23:01:30 +08:00
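The commit above penalizes attention-weight entropies only when they exceed a limit, i.e. it discourages overly diffuse attention without forcing it to be peaked. A minimal sketch under assumed semantics (entropy in nats, penalizing only the excess; the function name and the limit value are illustrative, not taken from the repository):

```python
import math

def entropy_penalty(attn_row, limit=2.0):
    # attn_row: one row of softmax attention weights (sums to 1).
    # Entropy is computed in nats; only the amount above `limit`
    # contributes to the penalty, so sharp distributions cost nothing.
    eps = 1e-20  # avoid log(0)
    entropy = -sum(p * math.log(p + eps) for p in attn_row)
    return max(entropy - limit, 0.0)
```

A one-hot row has entropy near zero and incurs no penalty; a uniform row over 8 positions has entropy ln 8 ≈ 2.079 and is penalized only for the ≈0.079 excess over the limit.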
Daniel Povey
1812f6cb28 Add different debug info. 2022-10-14 21:16:23 +08:00
Daniel Povey
90953537ad Remove debug statement 2022-10-14 20:59:26 +08:00
Daniel Povey
18ff1de337 Add debug code for attention weights and eigs 2022-10-14 20:57:17 +08:00
Daniel Povey
96023419da Reworking of ActivationBalancer code to hopefully balance speed and effectiveness. 2022-10-14 19:20:32 +08:00
Daniel Povey
5f375be159 Merge branch 'scaled_adam_exp103b2' into scaled_adam_exp103b4 2022-10-14 15:27:10 +08:00
Daniel Povey
15b91c12d6 Reduce stats period from 10 to 4. 2022-10-14 15:14:06 +08:00
Daniel Povey
db8b9919da Reduce beta from 0.75 to 0.0. 2022-10-14 15:12:59 +08:00
Fangjun Kuang
a66e74b92f Fix links in the doc (#619) 2022-10-14 12:23:47 +08:00
Fangjun Kuang
11bff57586 Add doc about model export (#618)
* Add doc about model export

* fix typos
2022-10-14 10:16:34 +08:00
Daniel Povey
ae6478c687 This should just be a cosmetic change, regularizing how we get the warmup times from the layers. 2022-10-13 19:41:28 +08:00
Fangjun Kuang
c39cba5191 Support exporting to ONNX for the wenetspeech recipe (#615)
* Support exporting to ONNX for the wenetspeech recipe
2022-10-13 15:17:20 +08:00
Zengwei Yao
aa58c2ee02 Modify ActivationBalancer for speed (#612)
* add a probability to apply ActivationBalancer

* minor fix

* minor fix
2022-10-13 15:14:28 +08:00
Daniel Povey
7d8e460a53 Revert dropout on attention scores to 0.0. 2022-10-13 15:09:50 +08:00
Daniel Povey
2a50def7c6 Simplify how the positional-embedding scores work in attention (thanks to Zengwei for this concept) 2022-10-13 15:08:00 +08:00
Daniel Povey
23d6bf7765 Fix bug when channel_dim < 0 2022-10-13 13:52:28 +08:00
Daniel Povey
b09a1b2ae6 Fix bug when channel_dim < 0 2022-10-13 13:40:43 +08:00
Daniel Povey
9270e32a51 Remove unused config value 2022-10-13 13:34:35 +08:00
Daniel Povey
63334137ee Merge branch 'scaled_adam_exp106' into scaled_adam_exp108
# Conflicts:
#	egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py
2022-10-13 12:22:13 +08:00
Daniel Povey
9e30f2bf12 Make the ActivationBalancer regress to the data mean, not zero, when enforcing abs constraint. 2022-10-13 12:05:45 +08:00
Daniel Povey
49c6b6943d Change scale_factor_scale from 0.5 to 0.8 2022-10-12 20:55:52 +08:00
Daniel Povey
b736bb4840 Cosmetic improvements 2022-10-12 19:34:48 +08:00
Daniel Povey
12323025d7 Make ActivationBalancer and MaxEig more efficient. 2022-10-12 18:44:52 +08:00
Daniel Povey
eb58e6d74b Remove persistent attention scores. 2022-10-12 12:50:00 +08:00
Fangjun Kuang
1c07d2fb37 Remove all-in-one for onnx export (#614)
* Remove all-in-one for onnx export

* Exit on error for CI
2022-10-12 10:34:06 +08:00
Yunusemre
f3db4ea871 exporting projection layers of joiner separately for onnx (#584)
* exporting projection layers of joiner separately for onnx
2022-10-11 18:22:28 +08:00
KajiMaCN
0019463c83 update docs (#611)
* update docs

Co-authored-by: unknown <mazhihao@jshcbd.cn>
Co-authored-by: KajiMaCN <moonlightshadowmzh@gmail.com>
2022-10-11 13:24:59 +08:00
Daniel Povey
1825336841 Fix issue with diagnostics if stats is None 2022-10-11 11:05:52 +08:00
Fangjun Kuang
3614d7ff6d Add dill to requirements.txt (#613)
* Add dill to requirements.txt

* Disable style check for python 3.7
2022-10-10 22:50:25 +08:00
Daniel Povey
569762397f Reduce final layerdrop_prob from 0.075 to 0.05. 2022-10-10 19:04:52 +08:00
Daniel Povey
12323f2fbf Refactor RelPosMultiheadAttention to have 2nd forward function and introduce more modules in conformer encoder layer 2022-10-10 15:27:26 +08:00
Daniel Povey
f941991331 Fix bug in choosing layers to drop 2022-10-10 13:38:36 +08:00
Daniel Povey
857b3735e7 Fix bug where fewer layers were dropped than should be; remove unnecessary print statement. 2022-10-10 13:18:40 +08:00
Daniel Povey
09c9b02f6f Increase final layerdrop prob from 0.05 to 0.075 2022-10-10 12:20:13 +08:00
Daniel Povey
9f059f7115 Fix s -> scaling for import. 2022-10-10 11:50:15 +08:00
Daniel Povey
d7f6e8eb51 Only apply ActivationBalancer with prob 0.25. 2022-10-10 00:26:31 +08:00
Daniel Povey
dece8ad204 Various fixes from debugging with nvtx, but removed the NVTX annotations. 2022-10-09 21:14:52 +08:00
Daniel Povey
bd7dce460b Reintroduce batching to the optimizer 2022-10-09 20:29:23 +08:00
Daniel Povey
00841f0f49 Remove unused code LearnedScale. 2022-10-09 16:07:31 +08:00
Daniel Povey
cf450908c6 Revert also the changes in scaled_adam_exp85 regarding warmup schedule 2022-10-09 14:26:32 +08:00
Daniel Povey
40fa33d702 Decrease initial_layerdrop_prob from 0.75 to 0.5 2022-10-09 13:59:56 +08:00
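Several commits in this log tune layer-dropout probabilities (the messages mention an initial probability of 0.5 and a final probability of 0.05). A hypothetical schedule consistent with those values, assuming a simple linear decay over a warmup period (the schedule shape, step count, and function name are assumptions, not taken from the repository):

```python
def layerdrop_prob(step, initial_prob=0.5, final_prob=0.05, warmup_steps=2000):
    # Linearly decay the per-layer drop probability from initial_prob
    # at step 0 down to final_prob at warmup_steps, then hold it there.
    # High early layerdrop regularizes training; a small final value
    # keeps a mild regularization effect for the rest of training.
    if step >= warmup_steps:
        return final_prob
    frac = step / warmup_steps
    return initial_prob + frac * (final_prob - initial_prob)
```

At each training step, each encoder layer would then be skipped independently with this probability.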