Daniel Povey
eeb95ed502
Fix issue with cov scale
2022-06-10 16:25:45 +08:00
Daniel Povey
c671e213fc
Increase beta from 0.95 to 0.98
2022-06-10 14:39:58 +08:00
Daniel Povey
c6cfb1e5fa
Remove logging that was excessive
2022-06-10 14:25:23 +08:00
Daniel Povey
ff0309947a
Do scaling a different way so the loss function is more consistent; accumulate stats in the backward pass
2022-06-10 14:16:44 +08:00
Daniel Povey
58cbc3d961
Move PseudoNormalizeFunction to a different place.
2022-06-10 14:01:13 +08:00
Daniel Povey
c92d9d72aa
Fix inf issue
2022-06-10 11:20:47 +08:00
Daniel Povey
950cd4a3e8
Introduce normalization.
2022-06-10 10:47:18 +08:00
Daniel Povey
4a5143e548
Increase decay to 1k
2022-06-10 10:09:46 +08:00
Daniel Povey
e2ef8732d1
Increase beta to 0.95
2022-06-10 10:05:28 +08:00
Daniel Povey
a61e21ac85
Change beta to 0.9
2022-06-09 23:33:05 +08:00
Daniel Povey
2c5ebc065e
Change eps to 1e-20
2022-06-09 23:24:33 +08:00
Daniel Povey
c533f91fa2
Remove one line.
2022-06-09 23:13:16 +08:00
Daniel Povey
0fd2cb141f
Code cleanup and refactoring
2022-06-09 22:54:56 +08:00
Daniel Povey
2621cb7f54
Change beta to 0.8
2022-06-09 20:17:12 +08:00
Daniel Povey
082a890635
Fix apply_prob_decay to 500
2022-06-09 19:20:03 +08:00
Daniel Povey
fca844d80c
Make it really have 2k decay and revert to 0.02 scale
2022-06-09 17:45:11 +08:00
Daniel Povey
56d6dd55ae
Bug fixes
2022-06-09 12:06:35 +08:00
Daniel Povey
1669e21c0c
Use decorrelation in conformer layers also
2022-06-09 11:31:52 +08:00
Daniel Povey
b9a476c7bb
Remove loss factor from decorr_loss_scale
2022-06-08 20:19:17 +08:00
Daniel Povey
8e56445c70
Try to resolve graph-freed problem
2022-06-08 20:07:35 +08:00
Daniel Povey
46ca1cd4c4
Add Decorrelate module that adds something to gradients in backward pass
2022-06-08 19:44:58 +08:00
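The Decorrelate commits above describe a module whose forward pass is the identity and whose backward pass injects an extra gradient term that penalizes correlated channels. The exact formulation is not shown in this log; below is a minimal sketch of that general pattern, assuming an off-diagonal-covariance penalty and a user-chosen scale (both of which are assumptions, not the recipe's actual code).

```python
import torch


class _DecorrelateGrad(torch.autograd.Function):
    """Identity in the forward pass; in the backward pass, adds the gradient
    of an off-diagonal channel-covariance penalty to the incoming gradient."""

    @staticmethod
    def forward(ctx, x: torch.Tensor, scale: float) -> torch.Tensor:
        ctx.save_for_backward(x)
        ctx.scale = scale
        return x

    @staticmethod
    def backward(ctx, grad_out: torch.Tensor):
        (x,) = ctx.saved_tensors
        orig_shape = x.shape
        x = x.reshape(-1, orig_shape[-1]).float()   # (frames, channels)
        x = x - x.mean(dim=0, keepdim=True)         # center; gradient of the mean
                                                    # term is ignored for simplicity
        n = x.shape[0]
        cov = (x.t() @ x) / n                       # channel covariance
        off_diag = cov - torch.diag(torch.diag(cov))
        # Gradient of 0.5 * ||off_diag||_F^2 w.r.t. x is (2 / n) * x @ off_diag.
        extra = (2.0 / n) * (x @ off_diag)
        extra = extra.reshape(orig_shape).to(grad_out.dtype)
        return grad_out + ctx.scale * extra, None


class Decorrelate(torch.nn.Module):
    """Pass-through module that discourages correlated channels via gradients."""

    def __init__(self, scale: float = 0.01):
        super().__init__()
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            return x
        return _DecorrelateGrad.apply(x, self.scale)
```

Because the penalty enters only through the backward pass, the logged loss value is unchanged, which matches the idea of "adding something to gradients in the backward pass".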
Daniel Povey
9fb8645168
Implement JoinDropout
2022-06-08 16:17:42 +08:00
Daniel Povey
e7886d49a9
Bug fix
2022-06-08 11:05:29 +08:00
Daniel Povey
a83bde1372
Simplify implementation, as the current idea was not decorrelating as intended
2022-06-08 10:24:41 +08:00
Daniel Povey
135be1e19c
Change dropout_rate from 0.2 to 0.1; fix logging statement; fix assignment to rand_scales, nonrand_scales to use [:]
2022-06-08 00:42:04 +08:00
Daniel Povey
a6050cb2de
Implement new, more principled but maybe slower version.
2022-06-07 23:38:38 +08:00
Daniel Povey
75c822c7e9
Pre- and post-multiply by inv_sqrt_stddev, stddev
2022-06-07 20:32:18 +08:00
Daniel Povey
a270973b69
Add Gaussian version of decorrelation
2022-06-07 18:55:48 +08:00
Daniel Povey
5d24489752
Have 2 scales on dropout
2022-06-07 18:31:42 +08:00
Daniel Povey
53ca61db7a
Reduce scale on decorrelation by 5, to 0.01
2022-06-07 17:10:54 +08:00
Daniel Povey
7c6d923d3f
Add decorrelation to joiner
2022-06-07 16:47:54 +08:00
Daniel Povey
cd6b707e2b
Various bug fixes
2022-06-07 16:45:32 +08:00
Daniel Povey
40a0934b4e
Implement GaussProjDrop
2022-06-07 11:51:24 +08:00
Daniel Povey
4352a16f57
Fix bug that relates to modifying U in place
2022-06-06 17:43:15 +08:00
Daniel Povey
31848dcd11
Randomize the projections
2022-06-06 16:07:28 +08:00
Daniel Povey
6fdb356315
Bug fix regarding GPU device
2022-06-06 15:40:20 +08:00
Daniel Povey
71e927411a
Implement FixedProjDrop
2022-06-06 15:38:59 +08:00
Daniel Povey
28df3ba43f
Fix bug re half precision
2022-06-05 23:26:59 +08:00
Daniel Povey
d76aedb790
Make it work for half
2022-06-05 23:25:51 +08:00
Daniel Povey
e535887abb
Bug fixes.
2022-06-05 23:24:02 +08:00
Daniel Povey
136ffb0597
Add ProjDrop for axis-independent dropout
2022-06-05 23:00:48 +08:00
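The ProjDrop / FixedProjDrop / GaussProjDrop commits above implement dropout that is independent of the feature axes. A minimal sketch of the idea, assuming a fixed random orthonormal basis; the actual modules may randomize the basis per step or use Gaussian multiplicative noise, as the later commits suggest.

```python
import torch


class ProjDrop(torch.nn.Module):
    """Dropout applied in a random orthonormal basis instead of on the raw
    feature axes, so the regularization does not single out any fixed axis."""

    def __init__(self, num_channels: int, dropout_rate: float = 0.1):
        super().__init__()
        self.dropout_rate = dropout_rate
        # A fixed random orthonormal basis obtained from a QR decomposition.
        q, _ = torch.linalg.qr(torch.randn(num_channels, num_channels))
        self.register_buffer("basis", q)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.dropout_rate == 0.0:
            return x
        # Rotate into the random basis, apply ordinary dropout, rotate back.
        y = x @ self.basis
        y = torch.nn.functional.dropout(y, p=self.dropout_rate, training=True)
        return y @ self.basis.t()
```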
Fangjun Kuang
2f1e23cde1
Narrower and deeper conformer (#330)
* Copy files for editing.
* Add random combine from #229.
* Minor fixes.
* Pass model parameters from the command line.
* Fix warnings.
* Fix warnings.
* Update readme.
* Rename to avoid conflicts.
* Update results.
* Add CI for pruned_transducer_stateless5
* Typo fixes.
* Remove random combiner.
* Update decode.py and train.py to use periodically averaged models.
* Minor fixes.
* Revert to use random combiner.
* Update results.
* Minor fixes.
2022-05-23 14:39:11 +08:00
Daniel Povey
4e23fb2252
Improve diagnostics code memory-wise and accumulate more stats. (#373)
* Update diagnostics, hopefully print more stats.
# Conflicts:
# egs/librispeech/ASR/pruned_transducer_stateless4b/train.py
* Remove memory-limit options arg
* Remove unnecessary option for diagnostics code, collect on more batches
2022-05-19 11:45:59 +08:00
Fangjun Kuang
f6ce135608
Various fixes to support torch script. (#371)
* Various fixes to support torch script.
* Add tests to ensure that the model is torch scriptable.
* Update tests.
2022-05-16 21:46:59 +08:00
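The entry above adds tests that the model is torch scriptable. A hedged sketch of what such a test can look like; build_transducer_model is a hypothetical stand-in for however the recipe actually constructs its encoder, decoder and joiner.

```python
import torch


def test_model_is_torch_scriptable(tmp_dir: str = "/tmp"):
    # build_transducer_model is a hypothetical stand-in for the recipe's
    # real model construction from command-line / config parameters.
    model = build_transducer_model()
    model.eval()

    # torch.jit.script fails if any submodule is not scriptable.
    scripted = torch.jit.script(model)

    # Round-trip through disk to also catch serialization issues.
    filename = f"{tmp_dir}/scripted_model.pt"
    torch.jit.save(scripted, filename)
    torch.jit.load(filename)
```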
Fangjun Kuang
f23dd43719
Update results for libri+giga multi dataset setup. (#363)
* Update results for libri+giga multi dataset setup.
2022-05-14 21:45:39 +08:00
Fangjun Kuang
7b7acdf369
Support --iter in export.py (#360)
2022-05-13 10:51:44 +08:00
Fangjun Kuang
aeb8986e35
Ignore padding frames during RNN-T decoding. (#358)
* Ignore padding frames during RNN-T decoding.
* Fix outdated decoding code.
* Minor fixes.
2022-05-13 07:39:14 +08:00
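A minimal sketch of the idea behind ignoring padding frames during decoding: only the first encoder_out_lens[i] frames of each utterance are decoded, so padding can never emit spurious symbols. decode_one_utterance is a hypothetical stand-in for the actual greedy/beam search, and the real batched implementation differs.

```python
import torch


def greedy_search_ignoring_padding(
    encoder_out: torch.Tensor,       # (N, T, C), padded along T
    encoder_out_lens: torch.Tensor,  # (N,), number of valid frames per utterance
):
    """Decode only the valid frames of each utterance."""
    hyps = []
    for i in range(encoder_out.size(0)):
        num_frames = int(encoder_out_lens[i])
        valid_frames = encoder_out[i, :num_frames]  # drop padding frames
        # decode_one_utterance is a hypothetical per-utterance decoder.
        hyps.append(decode_one_utterance(valid_frames))
    return hyps
```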
Zengwei Yao
c059ef3169
Keep model_avg on CPU (#348)
* keep model_avg on cpu
* explicitly convert model_avg to cpu
* minor fix
* remove device conversion for model_avg
* modify usage of the model device in train.py
* change model.device to next(model.parameters()).device for decoding
* assert params.start_epoch>0
* assert params.start_epoch>0, params.start_epoch
2022-05-07 10:42:34 +08:00
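A minimal sketch of keeping the averaged model on the CPU so that it takes no GPU memory. The exponential-style update rule and the avg_decay value here are illustrative assumptions; the recipe's actual averaging scheme differs, but the point is the same: the averaged copy lives on the CPU and GPU parameters are moved over only when folding them in.

```python
import copy

import torch


def make_averaged_model(model: torch.nn.Module) -> torch.nn.Module:
    # Keep the averaged copy on the CPU so it consumes no GPU memory.
    return copy.deepcopy(model).cpu()


def update_averaged_model(
    model: torch.nn.Module,
    model_avg: torch.nn.Module,
    avg_decay: float = 0.999,  # illustrative; not the recipe's actual rule
) -> None:
    """Fold the current (possibly GPU-resident) parameters into the
    CPU-resident averaged model, in place."""
    with torch.no_grad():
        for p_avg, p in zip(model_avg.parameters(), model.parameters()):
            # Move each parameter to CPU before mixing it in, so nothing
            # belonging to the averaged model ever lives on the GPU.
            p_avg.mul_(avg_decay).add_(p.detach().cpu(), alpha=1.0 - avg_decay)
```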
Fangjun Kuang
32f05c00e3
Save batch to disk on exception. (#350)
2022-05-06 17:49:40 +08:00
Fangjun Kuang
e1c3e98980
Save batch to disk on OOM. (#343)
* Save batch to disk on OOM.
* minor fixes
* Fixes after review.
* Fix style issues.
2022-05-05 15:09:23 +08:00
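A minimal sketch of the "save batch to disk on OOM / on exception" idea from the two entries above: wrap the training step, and if it fails, dump the offending batch with torch.save so it can be replayed later for debugging. compute_loss and the filename scheme are hypothetical, not the recipe's actual code.

```python
import torch


def train_one_batch(model, optimizer, batch, rank: int = 0):
    """Run one training step; if it raises (e.g. CUDA OOM), save the batch."""
    try:
        loss = compute_loss(model, batch)  # hypothetical loss helper
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    except RuntimeError as e:
        if "out of memory" in str(e):
            filename = f"batch-oom-rank-{rank}.pt"  # illustrative filename
            # Persist the failing batch so it can be inspected or replayed.
            torch.save(batch, filename)
            print(f"Saved the failing batch to {filename}")
        raise
```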