Triplecq
1e6fe2eae1
restore
2024-01-14 08:05:49 -05:00
Triplecq
8eae6ec7d1
Add pruned_transducer_stateless2 from reazonspeech branch
2024-01-14 05:23:26 -05:00
LIyong.Guo
c4ee2bc0af
[Ready to merge] stateless6: stateless4 + hubert distillation (#387)
* a copy of stateless4 as base
* distillation with hubert
* fix typo
* example usage
* usage
* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
* fix comment
* add results of 100 hours
* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
* check fairseq and quantization
* a short intro to the distillation framework (see the sketch after this entry)
* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
* add intro of stateless6 in README
* fix type error of dst_manifest_dir
* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
* make export.py call stateless6/train.py instead of stateless2/train.py
* update results for stateless6
* adjust results format
* fix typo
Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-05-28 12:37:50 +08:00
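The distillation commits above describe adding a HuBERT-based teacher signal on top of the stateless4 recipe. As a rough illustration only (the names DistillationHead, distill_scale, and teacher_frames are hypothetical and not taken from the recipe itself), an embedding-level distillation term combined with the transducer loss could look like this:

```python
# Hedged sketch: project student encoder frames into the teacher's embedding
# space and regress them onto frozen HuBERT features. All names here are
# illustrative, not the icefall recipe's actual API.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DistillationHead(nn.Module):
    """Maps student encoder output to the teacher (HuBERT) embedding space."""

    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        self.proj = nn.Linear(student_dim, teacher_dim)

    def forward(
        self,
        student_frames: torch.Tensor,  # (N, T, student_dim)
        teacher_frames: torch.Tensor,  # (N, T, teacher_dim), precomputed HuBERT features
    ) -> torch.Tensor:
        # L1 regression of projected student frames onto the frozen teacher frames.
        return F.l1_loss(self.proj(student_frames), teacher_frames)


def total_loss(
    transducer_loss: torch.Tensor,
    distill_loss: torch.Tensor,
    distill_scale: float = 0.1,  # hypothetical weight for the auxiliary term
) -> torch.Tensor:
    return transducer_loss + distill_scale * distill_loss
```

Keeping the auxiliary weight small lets the teacher guide the encoder without overriding the main transducer objective.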
Fangjun Kuang
05cb297858
Update results for full LibriSpeech + GigaSpeech using transducer_stateless (#231)
2022-03-01 17:01:46 +08:00
Fangjun Kuang
2332ba312d
Begin to use multiple datasets in training ( #213 )
* Begin to use multiple datasets.
* Finish preparing training datasets.
* Minor fixes
* Copy files.
* Finish training code.
* Display losses for gigaspeech and librispeech separately.
* Fix decode.py
* Make the probability of selecting a batch from GigaSpeech configurable (see the sketch after this entry).
* Update results.
* Minor fixes.
2022-02-21 15:27:27 +08:00
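The multi-dataset commit above mentions making the probability of drawing a GigaSpeech batch configurable and displaying the two losses separately. A minimal sketch of that idea, assuming hypothetical names (mux_batches, giga_prob) rather than the recipe's actual dataloader API:

```python
# Hedged sketch: interleave batches from two corpora, picking GigaSpeech with
# a configurable probability. Names and structure are illustrative only.
import random
from typing import Any, Iterable, Iterator, Tuple


def mux_batches(
    libri_batches: Iterable[Any],
    giga_batches: Iterable[Any],
    giga_prob: float = 0.5,
    seed: int = 42,
) -> Iterator[Tuple[str, Any]]:
    """Yield (source_name, batch), choosing GigaSpeech with probability giga_prob."""
    rng = random.Random(seed)
    libri_it, giga_it = iter(libri_batches), iter(giga_batches)
    while True:
        use_giga = rng.random() < giga_prob
        it, name = (giga_it, "gigaspeech") if use_giga else (libri_it, "librispeech")
        try:
            yield name, next(it)
        except StopIteration:
            return  # stop once either source is exhausted
```

Tagging each batch with its source name also makes it straightforward to accumulate and report the GigaSpeech and LibriSpeech losses separately, as the commit body describes.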