507 Commits

Author SHA1 Message Date
Daniel Povey
a1ae2f8fa9 Revert some accidental changes 2022-06-05 11:40:55 +08:00
Daniel Povey
a9a172aa69 Multiply lr by 10; simplify Cain. 2022-06-04 15:48:33 +08:00
Daniel Povey
679972b905 Fix bug; make epsilon work both ways (small+large); increase epsilon to 0.1 2022-06-03 19:37:48 +08:00
Daniel Povey
8085ed6ef9 Turn off natural gradient update for biases. 2022-06-03 18:40:14 +08:00
Daniel Povey
3fff0c75bb Code cleanup 2022-06-03 11:54:12 +08:00
Daniel Povey
d6e65a0e7f Remove decompose=True 2022-06-03 11:48:45 +08:00
Daniel Povey
a66a0d84d5 Natural gradient, with power -0.5 (halfway; -1 would be NG) 2022-06-02 14:01:03 +08:00
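For context on the commit above: "power -0.5" refers to scaling gradients by a power of their running second moment, where -0.5 gives an Adam/RMSProp-style step and -1 would divide by the full second moment (closer to natural gradient). The sketch below is a hypothetical illustration of that idea only — the names, smoothing constants, and scalar form are invented and are not icefall's actual `optim.py` implementation.

```python
def preconditioned_step(param, grad, v, lr=0.1, power=-0.5, eps=1e-8):
    """Illustrative sketch: scale grad by (v + eps)**power, where v is a
    running estimate of grad**2.  power=-0.5 ~ Adam/RMSProp-style step;
    power=-1.0 would be a diagonal natural-gradient-style step."""
    v = 0.98 * v + 0.02 * grad * grad          # update second-moment estimate
    param = param - lr * grad * (v + eps) ** power
    return param, v

# Toy usage: a positive gradient drives the parameter downward.
p, v = 1.0, 0.0
for _ in range(3):
    p, v = preconditioned_step(p, grad=0.5, v=v)
```

Interpolating the exponent between -0.5 and -1 (the "halfway" in the commit message) trades Adam-like per-coordinate normalization against stronger curvature correction.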
Daniel Povey
b1f6797af1 Remove some rebalancing code that I am now not going to use. 2022-06-01 22:19:28 +08:00
Daniel Povey
0c73664aef Reduce threshold to 1024 2022-06-01 14:42:56 +08:00
Daniel Povey
ca09b9798f Remove decomposition code from checkpoint.py; restore double precision model_avg 2022-06-01 14:01:58 +08:00
Daniel Povey
03e07e80ce More drafts for rebalancing code 2022-06-01 13:58:42 +08:00
Daniel Povey
9c9bf4f1e3 Some drafts of rebalancing code in optim.py 2022-06-01 11:34:19 +08:00
Daniel Povey
bc5c782294 Limit magnitude of linear_pos 2022-06-01 10:40:54 +08:00
Daniel Povey
61619c031e Add activation balancer to stop activations in self_attn from getting too large 2022-06-01 00:40:45 +08:00
Daniel Povey
b2259184b5 Use single precision for model average; increase average-period to 200. 2022-05-31 14:31:46 +08:00
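The model-averaging commits above (single vs. double precision for `model_avg`, average-period raised to 200) concern maintaining a running average of model weights alongside training. A minimal sketch of such an incremental average, with invented names (`update_model_average`, `num_averaged`) used purely for illustration:

```python
def update_model_average(model_avg, model, num_averaged):
    """Incremental mean: avg <- (avg * n + model) / (n + 1).
    In practice this would run once every `average_period` optimizer
    steps; the precision of `model_avg` (float32 vs. float64) controls
    how much rounding error accumulates over many updates."""
    for name in model_avg:
        model_avg[name] = (model_avg[name] * num_averaged + model[name]) / (
            num_averaged + 1
        )

# Toy usage: averaging in a second snapshot.
avg = {"w": 1.0}
update_model_average(avg, {"w": 3.0}, num_averaged=1)
# avg["w"] is now 2.0, the mean of 1.0 and 3.0
```

Raising the average period reduces how often this update runs, which lowers overhead at the cost of a coarser average.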
Daniel Povey
ab9eb0d52c Use decompose=True arg for model averaging 2022-05-31 14:28:53 +08:00
Daniel Povey
1651fe0d42 Merge changes from pruned_transducer_stateless4->5 2022-05-31 13:00:11 +08:00
Daniel Povey
c7cf229f56 Revert pruned_transducer_stateless4 to upstream/master 2022-05-31 12:45:51 +08:00
Daniel Povey
741dcd1d6d Move pruned_transducer_stateless4 to pruned_transducer_stateless7 2022-05-31 12:45:28 +08:00
Daniel Povey
8f877efec5 Remove pruned_transducer_stateless4b 2022-05-31 12:29:45 +08:00
Daniel Povey
7011956c6c Merge remote-tracking branch 'upstream/master' into cain3d_clean_merge 2022-05-31 12:17:45 +08:00
Daniel Povey
c3df609805 Revert learning-rate changes 2022-05-30 16:24:40 +08:00
Daniel Povey
b01c09a693 Remove the natural gradient stuff while keeping cosmetic changes. 2022-05-30 11:56:11 +08:00
Daniel Povey
8a96f29a11 Further increase learning rate 2022-05-29 11:13:27 +08:00
Daniel Povey
2b8ea98fc2 Improve documentation; remove unused code. 2022-05-28 19:22:09 +08:00
Daniel Povey
295595d334 Revert the exclusion of dim=500 2022-05-28 17:49:16 +08:00
LIyong.Guo
c4ee2bc0af
[Ready to merge] stateless6: stateless4 + hubert distillation. (#387)
* a copy of stateless4 as base

* distillation with hubert

* fix typo

* example usage

* usage

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* fix comment

* add results of 100hours

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* check fairseq and quantization

* a short intro to distillation framework

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* add intro of stateless6 in README

* fix type error of dst_manifest_dir

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* make export.py call stateless6/train.py instead of stateless2/train.py

* update results by stateless6

* adjust results format

* fix typo

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-05-28 12:37:50 +08:00
Daniel Povey
0b645662f9 Increase learning rates. 2022-05-28 12:12:34 +08:00
Daniel Povey
e771472a30 Speed up learning rate schedule. 2022-05-28 11:30:45 +08:00
Daniel Povey
828defb019 Some temp code for loading old state dict 2022-05-28 00:29:47 +08:00
Daniel Povey
d89cb53a3b Revert max_eff_lr to initial_lr 2022-05-27 21:42:20 +08:00
Daniel Povey
7aa47408af Bug fixes to avoid inf alpha 2022-05-27 21:41:05 +08:00
Daniel Povey
0787583580 Increase max_eff_lr 2022-05-27 21:03:08 +08:00
Daniel Povey
fd0e9d4bad Fix bug for scalars. 2022-05-27 20:48:41 +08:00
Daniel Povey
503b79252f Add new update and max_eff_lr 2022-05-27 20:44:10 +08:00
Daniel Povey
4efe920401 More consistent use of eps. 2022-05-27 17:41:19 +08:00
Daniel Povey
eed864a3db Change power from 0.66 to 1.0, like natural gradient. 2022-05-27 16:45:42 +08:00
Daniel Povey
89fad8cc5a Change power to 0.66 2022-05-27 16:39:54 +08:00
Daniel Povey
61e7929c60 Remove unused arg 2022-05-26 15:18:03 +08:00
Daniel Povey
8e454bcf9e Exclude size=500 dim from projection; try to use double for model average 2022-05-26 15:15:27 +08:00
Mingshuang Luo
c8c8645081
[Ready to merge] Pruned-transducer-stateless2 recipe for aidatatang_200zh (#375)
* add pruned-rnnt2 model for aidatatang_200zh

* do some changes

* change for README.md

* do some changes
2022-05-24 23:07:40 +08:00
Ewald Enzinger
8c5722de8c
[egs] Add prefix when reading manifests due to recent lhotse changes (#382)
* [egs] Add prefix when reading manifests due to recent lhotse changes

* Fix wenetspeech

* Fix style issues
2022-05-23 23:37:35 +08:00
Mingshuang Luo
0e57b30495
[Ready to merge] Pruned Transducer Stateless2 for WenetSpeech (char-based) (#349)
* add char-based pruned-rnnt2 for wenetspeech

* style check

* style check

* change for export.py

* do some changes

* do some changes

* a small change for .flake8

* solve the conflicts
2022-05-23 17:13:01 +08:00
Fangjun Kuang
2f1e23cde1
Narrower and deeper conformer (#330)
* Copy files for editing.

* Add random combine from #229.

* Minor fixes.

* Pass model parameters from the command line.

* Fix warnings.

* Fix warnings.

* Update readme.

* Rename to avoid conflicts.

* Update results.

* Add CI for pruned_transducer_stateless5

* Typo fixes.

* Remove random combiner.

* Update decode.py and train.py to use periodically averaged models.

* Minor fixes.

* Revert to use random combiner.

* Update results.

* Minor fixes.
2022-05-23 14:39:11 +08:00
Daniel Povey
9ef11e64ba Some small fixes to the bias_correction2 formula; remove bias-u,v-scale 2022-05-22 16:28:33 +08:00
Daniel Povey
b916789ca3 Further increase scales 2022-05-22 12:25:26 +08:00
Daniel Povey
9e206d53fc Increase initial scale for conv and self_attn 2022-05-22 12:18:57 +08:00
Daniel Povey
56d9928934 Scale down modules at initialization 2022-05-22 11:56:59 +08:00
Daniel Povey
5d57dd3930 Change initial bias scales from 0.1 to 0.2 2022-05-22 10:59:51 +08:00
Daniel Povey
435b073979 Change init of biases to all -0.1..0.1 2022-05-22 10:43:06 +08:00