9 Commits

pkufool
19c4214958
Fix code style and add copyright. (#18)
* Fix style and add copyright
* Minor fix
* Remove duplicate lines
* Reformat conformer.py with black
* Reformat code style with black
* Fix GitHub workflows
* Fix lhotse installation
* Install icefall requirements
* Update k2 version, remove lhotse from test workflow
2021-08-23 10:43:59 +08:00
Fangjun Kuang
8469f9ae0a
Refactor asr_datamodule. (#15)
* WIP: Refactor asr_datamodule.
* Fixes after review.
* Minor fixes.
2021-08-21 09:53:46 +08:00
Fangjun Kuang
9d0cc9d829
Support computing nbest oracle WER. (#10)
* Support computing nbest oracle WER.
* Add scale to all nbest-based decoding/rescoring methods.
* Add script to run pretrained models.
* Use torchaudio to extract features.
* Support decoding multiple files at the same time. Also, use kaldifeat for feature extraction.
* Support decoding with LM rescoring and attention-decoder rescoring.
* Minor fixes.
* Replace scale with lattice-score-scale.
* Add usage example with a provided pretrained model.
2021-08-20 11:53:37 +08:00
pkufool
ef233486ae
The training script produces a WER of 2.57% on LibriSpeech test-clean (#13)
* Add grad_clip and weight-decay, small fixes to the dataloader and masking (see the sketch below)
* Add RESULTS.md
2021-08-20 10:08:08 +08:00
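For readers unfamiliar with the grad_clip and weight-decay change mentioned above, the sketch below shows the standard PyTorch pattern it refers to. The model, optimizer settings, and clip value are illustrative placeholders, not the values used in this repository.

```python
import torch
from torch import nn

# Placeholder model and data; the actual recipe trains a conformer on LibriSpeech.
model = nn.Linear(80, 500)
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-3,
    weight_decay=1e-6,  # L2-style regularization applied inside the optimizer
)
criterion = nn.CrossEntropyLoss()

features = torch.randn(16, 80)
targets = torch.randint(0, 500, (16,))

optimizer.zero_grad()
loss = criterion(model(features), targets)
loss.backward()
# Clip the global gradient norm before stepping; max_norm=5.0 is illustrative.
nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)
optimizer.step()
```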
Fangjun Kuang
12a2fd023e
Add doc about installation and usage (#7)
* Add readme.
* Add TOC.
* Fix typos.
* Minor fixes after review.
2021-08-12 12:44:04 +08:00
Fangjun Kuang
5a0b9bcb23
Refactoring (#4)
* Fix an error in TDNN-LSTM training.
* WIP: Refactoring
* Refactor transformer.py
* Remove unused code.
* Minor fixes.
2021-08-04 14:53:02 +08:00
Fangjun Kuang
398ed80d7a Minor fixes to support DDP training. 2021-07-31 15:26:57 +08:00
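The DDP commit above concerns PyTorch's DistributedDataParallel; a minimal sketch of the usual setup follows. The backend, address, and port are assumptions made for illustration and are not taken from the repository.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def setup(rank: int, world_size: int) -> None:
    # Single-node setup; address, port, and the nccl backend are assumptions.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "12354")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

def wrap(model: torch.nn.Module, rank: int) -> DDP:
    # One model replica per process; backward() all-reduces gradients across ranks.
    return DDP(model.to(rank), device_ids=[rank])
```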
Fangjun Kuang
b94d97da37 Disable gradient computation in evaluation mode. 2021-07-29 20:37:31 +08:00
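Disabling gradient computation in evaluation mode is the standard torch.no_grad() pattern; a minimal sketch with a placeholder model, criterion, and dataloader:

```python
import torch
from torch import nn

criterion = nn.CrossEntropyLoss()

def evaluate(model: nn.Module, dataloader) -> float:
    """Average loss over a validation set, with autograd disabled."""
    model.eval()                  # dropout off, batch norm uses running stats
    total_loss, num_batches = 0.0, 0
    with torch.no_grad():         # no autograd graph is built, saving memory and time
        for features, targets in dataloader:
            total_loss += criterion(model(features), targets).item()
            num_batches += 1
    return total_loss / max(num_batches, 1)
```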
Fangjun Kuang
acc63a9172 WIP: Add BPE training code. 2021-07-29 20:23:52 +08:00
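The BPE training code builds a subword vocabulary for the recipe; a hedged sketch, assuming the sentencepiece library as the tokenizer. The file paths and vocabulary size are placeholders, not values taken from the commit.

```python
import sentencepiece as spm

# Train a BPE model on the training transcripts; paths and vocab size are placeholders.
spm.SentencePieceTrainer.train(
    input="data/lang_bpe/transcript.txt",
    model_prefix="data/lang_bpe/bpe",
    vocab_size=500,
    model_type="bpe",
)

# Load the trained model and encode a sample transcript into BPE pieces.
sp = spm.SentencePieceProcessor()
sp.load("data/lang_bpe/bpe.model")
print(sp.encode_as_pieces("HELLO WORLD"))
```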