Daniel Povey
c736b39c7d
Remove unnecessary option for diagnostics code, collect on more batches
2022-05-19 11:35:54 +08:00
Fangjun Kuang
f6ce135608
Various fixes to support torch script. ( #371 )
* Various fixes to support torch script.
* Add tests to ensure that the model is torch scriptable.
* Update tests.
2022-05-16 21:46:59 +08:00
Fangjun Kuang
f23dd43719
Update results for libri+giga multi dataset setup. ( #363 )
* Update results for libri+giga multi dataset setup.
2022-05-14 21:45:39 +08:00
Fangjun Kuang
7b7acdf369
Support --iter in export.py ( #360 )
2022-05-13 10:51:44 +08:00
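The `--iter` option presumably selects a checkpoint saved by iteration count rather than by epoch. A minimal sketch of such a command-line flag, assuming the flag name from the commit title; the default value, checkpoint naming, and help text are illustrative, not the repo's actual export.py:

```python
import argparse

# Hypothetical sketch: when --iter is given a positive value, a
# checkpoint named by iteration number would be loaded instead of an
# epoch-based checkpoint.
parser = argparse.ArgumentParser(description="export a trained model")
parser.add_argument(
    "--iter",
    type=int,
    default=0,
    help="If > 0, load an iteration-numbered checkpoint instead of an "
    "epoch checkpoint.",
)

args = parser.parse_args(["--iter", "10000"])
```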
Fangjun Kuang
aeb8986e35
Ignore padding frames during RNN-T decoding. ( #358 )
* Ignore padding frames during RNN-T decoding.
* Fix outdated decoding code.
* Minor fixes.
2022-05-13 07:39:14 +08:00
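Ignoring padding frames during decoding typically means truncating each encoder output to its true length before the decoding loop runs, so padded positions can never emit symbols. A plain-Python sketch of the idea; the function and variable names are illustrative, not the repo's API:

```python
def strip_padding(encoder_frames, valid_lens):
    """Drop trailing padding from each utterance in a batch.

    encoder_frames: per-utterance frame sequences, padded to a common
    length; valid_lens: the true number of frames per utterance.
    """
    return [frames[:n] for frames, n in zip(encoder_frames, valid_lens)]

# Two utterances padded to length 5; only 3 and 4 frames are real.
batch = [[1, 2, 3, 0, 0], [4, 5, 6, 7, 0]]
trimmed = strip_padding(batch, [3, 4])
```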
Zengwei Yao
c059ef3169
Keep model_avg on cpu ( #348 )
* keep model_avg on cpu
* explicitly convert model_avg to cpu
* minor fix
* remove device conversion for model_avg
* modify usage of the model device in train.py
* change model.device to next(model.parameters()).device for decoding
* assert params.start_epoch>0
* assert params.start_epoch>0, params.start_epoch
2022-05-07 10:42:34 +08:00
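Keeping the averaged model on the CPU avoids holding a second full copy of the parameters on the GPU. The running average itself can be sketched in plain Python, with a dict of floats standing in for a state dict; the exact update rule used in train.py is an assumption here:

```python
def update_average(avg_state, cur_state, step):
    """In-place running average: avg += (cur - avg) / step.

    After step k, avg_state equals the mean of the first k states. In
    the real setup avg_state would stay on the CPU while cur_state is
    copied over from the training device at each update.
    """
    for name in avg_state:
        avg_state[name] += (cur_state[name] - avg_state[name]) / step

avg = {"w": 0.0}
for k, value in enumerate([2.0, 4.0, 6.0], start=1):
    update_average(avg, {"w": value}, k)
```

The related bullet replacing `model.device` with `next(model.parameters()).device` uses the standard PyTorch idiom for querying which device a module's parameters live on, since `nn.Module` has no `.device` attribute.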
Fangjun Kuang
32f05c00e3
Save batch to disk on exception. ( #350 )
2022-05-06 17:49:40 +08:00
Fangjun Kuang
e1c3e98980
Save batch to disk on OOM. ( #343 )
* Save batch to disk on OOM.
* minor fixes
* Fixes after review.
* Fix style issues.
2022-05-05 15:09:23 +08:00
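Saving the offending batch to disk on failure makes OOM crashes reproducible offline. The control flow can be sketched as follows; `save_fn` stands in for something like `torch.save`, and all names are illustrative rather than the repo's actual code:

```python
def compute_with_batch_dump(compute_loss, batch, save_fn):
    """Run compute_loss(batch); if it raises, persist the batch for
    later inspection, then re-raise so training still stops."""
    try:
        return compute_loss(batch)
    except Exception:
        save_fn(batch)  # e.g. torch.save(batch, "bad-batch.pt")
        raise

saved = []

def boom(batch):
    raise RuntimeError("CUDA out of memory (simulated)")

try:
    compute_with_batch_dump(boom, {"inputs": [1, 2, 3]}, saved.append)
except RuntimeError:
    pass
```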
Fangjun Kuang
ac84220de9
Modified conformer with multi datasets ( #312 )
* Copy files for editing.
* Use librispeech + gigaspeech with modified conformer.
* Support specifying number of workers for on-the-fly feature extraction.
* Feature extraction code for GigaSpeech.
* Combine XL splits lazily during training.
* Fix warnings in decoding.
* Add decoding code for GigaSpeech.
* Fix decoding the gigaspeech dataset.
We have to use the decoder/joiner networks for the GigaSpeech dataset.
* Disable speed perturb for XL subset.
* Compute the Nbest oracle WER for RNN-T decoding.
* Minor fixes.
* Minor fixes.
* Add results.
* Update results.
* Update CI.
* Update results.
* Fix style issues.
* Update results.
* Fix style issues.
2022-04-29 15:40:30 +08:00
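"Combine XL splits lazily" suggests interleaving examples from many dataset shards without materializing any shard in memory. A pure-Python round-robin sketch of that idea, as a stand-in for the lazy dataset combination actually used; the function name and interleaving policy are assumptions:

```python
def combine_lazily(*shards):
    """Yield examples round-robin from several shard iterables,
    consuming each one lazily; exhausted shards drop out."""
    iters = [iter(s) for s in shards]
    while iters:
        alive = []
        for it in iters:
            try:
                yield next(it)
                alive.append(it)
            except StopIteration:
                pass
        iters = alive

merged = list(combine_lazily([1, 2], ["a", "b", "c"]))
```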
Fangjun Kuang
caab6cfd92
Support specifying iteration number of checkpoints for decoding. ( #336 )
See also #289
2022-04-28 14:09:22 +08:00
pehonnet
9a98e6ced6
fix fp16 option in example usage ( #332 )
2022-04-25 18:51:53 +08:00
Guo Liyong
78418ac37c
fix comments
2022-04-13 13:09:24 +08:00
Daniel Povey
2a854f5607
Merge pull request #309 from danpovey/update_results
Update results; will further update this before merge
2022-04-12 12:22:48 +08:00
Mingshuang Luo
93c60a9d30
Code style check for librispeech pruned transducer stateless2 ( #308 )
2022-04-11 22:15:18 +08:00
Daniel Povey
ead822477c
Fix rebase
2022-04-11 21:01:13 +08:00
Daniel Povey
e8eb0b94d9
Updating RESULTS.md; fix in beam_search.py
2022-04-11 21:00:11 +08:00
pkufool
a92133ef96
Minor fixes
2022-04-11 20:58:47 +08:00
pkufool
ddd8f9e15e
Minor fixes
2022-04-11 20:58:43 +08:00
pkufool
cc0d4ffa4f
Add mixed precision support
2022-04-11 20:58:02 +08:00
Wei Kang
7012fd65b5
Support mixed precision training on the reworked model ( #305 )
* Add mixed precision support
* Minor fixes
* Minor fixes
* Minor fixes
2022-04-11 16:49:54 +08:00
Daniel Povey
03c7c2613d
Fix docs in optim.py
2022-04-11 15:13:42 +08:00
Daniel Povey
5078332088
Fix adding learning rate to tensorboard
2022-04-11 14:58:15 +08:00
Daniel Povey
d5f9d49e53
Modify beam search to be efficient with current joiner
2022-04-11 12:35:29 +08:00
Daniel Povey
46d52dda10
Fix dir names
2022-04-11 12:03:41 +08:00
Daniel Povey
962cf868c9
Fix import
2022-04-10 15:31:46 +08:00
Daniel Povey
d1e4ae788d
Refactor how learning rate is set.
2022-04-10 15:25:27 +08:00
Daniel Povey
82d58629ea
Implement 2p version of learning rate schedule.
2022-04-10 13:50:31 +08:00
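The log does not spell out what the "2p" rule is; schedules in this line of commits combine a smooth power-law decay in the global step with another in the epoch index (see "Change exponential part of lrate to be epoch based" below). A hedged sketch of such a two-factor schedule; the exponents and constants are placeholders, not the committed values:

```python
def two_factor_lr(base_lr, step, epoch, lr_steps=5000.0, lr_epochs=6.0):
    """Hypothetical two-factor schedule: one power-law decay driven by
    the global step, another driven by the epoch. Both factors are 1.0
    at the start and decay smoothly, so there is no warm-up kink."""
    step_factor = ((step**2 + lr_steps**2) / lr_steps**2) ** -0.25
    epoch_factor = ((epoch**2 + lr_epochs**2) / lr_epochs**2) ** -0.25
    return base_lr * step_factor * epoch_factor

lr_start = two_factor_lr(0.003, step=0, epoch=0)
lr_late = two_factor_lr(0.003, step=50000, epoch=30)
```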
Daniel Povey
da50525ca5
Make lrate rule more symmetric
2022-04-10 13:25:40 +08:00
Daniel Povey
4d41ee0caa
Implement 2o schedule
2022-04-09 18:37:03 +08:00
Daniel Povey
db72aee1f0
Set 2n rule.
2022-04-09 18:15:56 +08:00
Daniel Povey
0f8ee68af2
Fix bug
2022-04-08 16:53:42 +08:00
Daniel Povey
f587cd527d
Change exponential part of lrate to be epoch based
2022-04-08 16:24:21 +08:00
Daniel Povey
6ee32cf7af
Set new scheduler
2022-04-08 16:10:06 +08:00
Daniel Povey
61486a0f76
Remove initial_speed
2022-04-06 13:17:26 +08:00
Daniel Povey
a41e93437c
Change some defaults in LR-setting rule.
2022-04-06 12:36:58 +08:00
Daniel Povey
2545237eb3
Changing initial_speed from 0.25 to 0.1
2022-04-05 18:00:54 +08:00
Daniel Povey
25724b5ce9
Bug-fix RE sign of target_rms
2022-04-05 13:49:35 +08:00
Daniel Povey
d1a669162c
Fix bug in lambda
2022-04-05 13:31:52 +08:00
Daniel Povey
ed8eba91e1
Reduce model_warm_step from 4k to 3k
2022-04-05 13:24:09 +08:00
Daniel Povey
c3169222ae
Simplified optimizer, rework some things.
2022-04-05 13:23:02 +08:00
Daniel Povey
0f5957394b
Fix reading scheduler from optim
2022-04-05 12:58:43 +08:00
Daniel Povey
1548cc7462
Fix checkpoint-writing
2022-04-05 11:19:40 +08:00
Daniel Povey
47d49f29d7
Fix weight decay formula by adding 1/(1-beta)
2022-04-05 00:31:55 +08:00
Daniel Povey
2b0727a355
Fix weight decay formula by adding 1/(1-beta)
2022-04-05 00:31:28 +08:00
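The 1/(1-beta) factor plausibly compensates for the smoothing that an exponential moving average with coefficient beta applies: each step's instantaneous contribution is scaled by (1 - beta), so dividing the decay by (1 - beta) restores the intended effective strength. A toy sketch of that scaling; the surrounding optimizer details are assumptions, not the repo's actual update:

```python
def weight_decay_step(param, lr, weight_decay, beta):
    """Apply one decoupled weight-decay step scaled by 1/(1-beta)
    (hypothetical form, mirroring the commit title)."""
    return param * (1.0 - lr * weight_decay / (1.0 - beta))

# With beta = 0.9 the decay is amplified 10x relative to the unscaled
# form: 1 - 0.01 * 0.1 / 0.1 = 0.99.
p = weight_decay_step(1.0, lr=0.01, weight_decay=0.1, beta=0.9)
```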
Daniel Povey
234366e51c
Fix type of parameter
2022-04-05 00:18:36 +08:00
Daniel Povey
179d0605ea
Change initialization to 0.25
2022-04-04 23:34:39 +08:00
Daniel Povey
d1f2f93460
Some fixes.
2022-04-04 22:40:18 +08:00
Daniel Povey
72f4a673b1
First draft of new approach to learning rates + init
2022-04-04 20:21:34 +08:00
Daniel Povey
4929e4cf32
Change how warm-step is set
2022-04-04 17:09:25 +08:00
Daniel Povey
a5bbcd7b71
Make training more efficient, avoid redoing some projections.
2022-04-04 14:14:03 +08:00