Daniel Povey
9672dffac2
Merge branch 'scaled_adam_exp168' into scaled_adam_exp169
2022-10-22 14:05:07 +08:00
Daniel Povey
8d1021d131
Remove comparison diagnostics, which were not that useful.
2022-10-22 13:57:00 +08:00
Daniel Povey
2e93e5d3b7
Add logging
2022-10-22 13:52:51 +08:00
Daniel Povey
fd3f21f84d
Changes to grad scale logging; increase grad scale more frequently if less than one.
2022-10-22 13:36:26 +08:00
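The commit above grows the AMP gradient scale more often while it is below 1.0, since a very small scale makes float16 gradients prone to underflow. A minimal illustrative sketch of the idea, assuming a standard torch.cuda.amp.GradScaler (the helper name and the interval constants are assumptions, not taken from the commit):

    import torch

    def maybe_grow_grad_scale(scaler: torch.cuda.amp.GradScaler,
                              batch_idx: int,
                              normal_interval: int = 1000,
                              fast_interval: int = 100) -> None:
        # Use a shorter growth interval while the scale is below 1.0.
        cur_scale = scaler.get_scale()
        interval = fast_interval if cur_scale < 1.0 else normal_interval
        if batch_idx % interval == 0:
            # GradScaler.update() accepts an explicit new scale;
            # normally it is called after scaler.step(optimizer).
            scaler.update(new_scale=cur_scale * 2.0)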
Fangjun Kuang
348494888d
Add kaldifst to requirements.txt (#631)
2022-10-22 13:14:44 +08:00
Daniel Povey
1d2fe8e3c2
Add more diagnostics to debug gradient scale problems
2022-10-22 12:49:29 +08:00
Daniel Povey
3298e18732
Increase limit on logit for SimpleCombiner to 25.0
2022-10-21 22:06:35 +08:00
Daniel Povey
e5fe3de17e
Also apply limit on logit in SimpleCombiner
2022-10-21 21:43:45 +08:00
Daniel Povey
bdbd2cfce6
Penalize too large weights in softmax of AttentionDownsample()
2022-10-21 20:12:36 +08:00
ezerhouni
9b671e1c21
Add Shallow fusion in modified_beam_search (#630)
...
* Add utility for shallow fusion
* test batch size == 1 without shallow fusion
* Use shallow fusion for modified-beam-search
* Modified beam search with ngram rescoring
* Fix code according to review
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-10-21 16:44:56 +08:00
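Shallow fusion, added in the PR above, combines the transducer's log-probabilities with a scaled external language-model score at each step of modified beam search. A schematic sketch of the scoring rule (the names am_log_probs, lm_log_probs and the default lm_scale are illustrative, not taken from the PR):

    import torch

    def shallow_fusion_scores(am_log_probs: torch.Tensor,
                              lm_log_probs: torch.Tensor,
                              lm_scale: float = 0.3) -> torch.Tensor:
        # Both tensors have shape (num_hyps, vocab_size); the fused score
        # for each candidate token is am + lm_scale * lm, and the top-k
        # fused scores decide which hypotheses survive to the next step.
        return am_log_probs + lm_scale * lm_log_probs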
Daniel Povey
476fb9e9f3
Reduce min_prob of ActivationBalancer from 0.1 to 0.05.
2022-10-21 15:42:04 +08:00
Daniel Povey
9f68b5717c
Reduce the limit on attention weights from 50 to 25.
2022-10-21 12:13:23 +08:00
Daniel Povey
c5cb52fed1
Remove the use of random_clamp in conformer.py.
2022-10-20 19:54:38 +08:00
Daniel Povey
dccff6b893
Remove use of RandomGrad
2022-10-20 19:35:11 +08:00
Daniel Povey
1018a77410
Use normal implementation of softmax.
2022-10-20 19:34:10 +08:00
Daniel Povey
6e6209419c
Merge branch 'scaled_adam_exp150' into scaled_adam_exp155
...
# Conflicts:
# egs/librispeech/ASR/pruned_transducer_stateless7/conformer.py
2022-10-20 15:04:27 +08:00
Daniel Povey
4565d43d5c
Add hard limit of attention weights to +- 50
2022-10-20 14:28:22 +08:00
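The hard limit above clamps the raw attention scores (to ±50 here; a subsequent commit listed above reduces it to 25) so that the softmax cannot overflow in float16. A minimal illustrative sketch, assuming scores of shape (batch, num_heads, tgt_len, src_len); the limit value and function name are assumptions, not taken from the commit:

    import torch

    def limited_attention_softmax(scores: torch.Tensor,
                                  limit: float = 50.0) -> torch.Tensor:
        # Clamp the pre-softmax logits to [-limit, limit] so exp() stays
        # finite in float16, then normalize over the source dimension.
        scores = scores.clamp(min=-limit, max=limit)
        return scores.softmax(dim=-1)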
Daniel Povey
6601035db1
Reduce min_abs from 1.0e-04 to 5.0e-06
2022-10-20 13:53:10 +08:00
Daniel Povey
5a0914fdcf
Merge branch 'scaled_adam_exp149' into scaled_adam_exp150
2022-10-20 13:31:22 +08:00
Daniel Povey
679ba2ee5e
Remove debug print
2022-10-20 13:30:55 +08:00
Daniel Povey
610281eaa2
Keep just the RandomGrad changes, vs. 149. Git history may not reflect real changes.
2022-10-20 13:28:50 +08:00
Daniel Povey
d137118484
Get the randomized backprop for softmax in autocast mode working.
2022-10-20 13:23:48 +08:00
Daniel Povey
d75d646dc4
Merge branch 'scaled_adam_exp147' into scaled_adam_exp149
2022-10-20 12:59:50 +08:00
Daniel Povey
f6b8f0f631
Fix bug in backprop of random_clamp()
2022-10-20 12:49:29 +08:00
Daniel Povey
f08a869769
Merge branch 'scaled_adam_exp151' into scaled_adam_exp150
2022-10-19 19:59:07 +08:00
Daniel Povey
cc15552510
Use full precision to do softmax and store ans.
2022-10-19 19:53:53 +08:00
Daniel Povey
a4443efa95
Add RandomGrad with min_abs=1.0e-04
2022-10-19 19:46:17 +08:00
Daniel Povey
0ad4462632
Reduce min_abs from 1e-03 to 1e-04
2022-10-19 19:27:28 +08:00
Daniel Povey
ef5a27388f
Merge branch 'scaled_adam_exp146' into scaled_adam_exp149
2022-10-19 19:16:27 +08:00
Daniel Povey
9c54906e63
Implement randomized backprop for softmax.
2022-10-19 19:16:03 +08:00
marcoyang1998
c30b8d3a1c
fix number of parameters in RESULTS.md (#627)
2022-10-19 16:53:29 +08:00
Daniel Povey
d37c159174
Revert model.py so there are no constraints on the output.
2022-10-19 13:41:58 +08:00
Daniel Povey
45c38dec61
Remove in_balancer.
2022-10-19 12:35:17 +08:00
Daniel Povey
f4442de1c4
Add reflect=0.1 to invocations of random_clamp()
2022-10-19 12:34:26 +08:00
Daniel Povey
8e15d4312a
Add some random clamping in model.py
2022-10-19 12:19:13 +08:00
Daniel Povey
c3c655d0bd
Randomly clip attention scores to -5..5.
2022-10-19 11:59:24 +08:00
Daniel Povey
6b3f9e5036
Changes to avoid bug in backward hooks, affecting diagnostics.
2022-10-19 11:06:17 +08:00
Teo Wen Shen
15c1a4a441
CSJ Data Preparation (#617)
...
* workspace setup
* csj prepare done
* Change compute_fbank_musan.py to soft link
* add description
* change lhotse prepare csj command
* split train-dev here
* Add header
* remove debug
* save manifest_statistics
* generate transcript in Lhotse
* update comments in config file
2022-10-18 15:56:43 +08:00
Daniel Povey
b37564c9c9
Cosmetic changes
2022-10-18 12:49:14 +08:00
Daniel Povey
b988bc0e33
Increase initial-lr from 0.04 to 0.05, plus changes for diagnostics
2022-10-18 11:45:24 +08:00
Fangjun Kuang
d69bb826ed
Support exporting LSTM with projection to ONNX (#621)
...
* Support exporting LSTM with projection to ONNX
* Add missing files
* small fixes
2022-10-18 11:25:31 +08:00
Fangjun Kuang
d1f16a04bd
fix type hints for decode.py (#623)
2022-10-18 06:56:12 +08:00
Daniel Povey
2675944f01
Use half the dim for values, vs. keys and queries.
2022-10-17 22:15:06 +08:00
Daniel Povey
3f495cd197
Reduce attention_dim to 192; cherry-pick scaled_adam_exp130 which is linear_pos interacting with query
2022-10-17 22:07:03 +08:00
Daniel Povey
03fe1ed200
Make attention dims configurable, not embed_dim//2, trying 256.
2022-10-17 11:03:29 +08:00
Daniel Povey
325f5539f9
Simplify the dropout mask, no non-dropped-out sequences
2022-10-16 19:14:24 +08:00
Daniel Povey
ae0067c384
Change LR schedule to start off higher
2022-10-16 11:45:33 +08:00
Daniel Povey
29d4e8ec6d
Replace MaxEig with Whiten with limit=5.0, and move it to end of ConformerEncoderLayer
2022-10-16 11:36:12 +08:00
Daniel Povey
ef4650bc8e
Revert whitening_limit from 1.1 to 2.2.
2022-10-16 11:31:08 +08:00
Daniel Povey
1135669e93
Bug fix RE float16
2022-10-16 10:58:22 +08:00