Fangjun Kuang 1d44da845b
RNN-T Conformer training for LibriSpeech (#143)
2021-12-18 07:42:51 +08:00


## Introduction
In this folder, the encoder consists of LSTM layers. You can start
training with the following command:
```bash
cd egs/librispeech/ASR
export CUDA_VISIBLE_DEVICES="0,1,2"
./transducer_lstm/train.py \
  --world-size 3 \
  --num-epochs 30 \
  --start-epoch 0 \
  --exp-dir transducer_lstm/exp \
  --full-libri 1 \
  --max-duration 300 \
  --lr-factor 3
```
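A few notes on the flags above: `--world-size 3` must match the number of GPUs listed in `CUDA_VISIBLE_DEVICES`, and `--full-libri 1` trains on the full 960-hour LibriSpeech set rather than the 100-hour subset. After training, the checkpoints in `--exp-dir` can be decoded with the recipe's `decode.py`. The exact flag values below (`--epoch`, `--avg`, `--max-duration`) are illustrative assumptions, not tuned settings; check `./transducer_lstm/decode.py --help` for the options your checkout actually supports:

```shell
cd egs/librispeech/ASR
export CUDA_VISIBLE_DEVICES="0"
# --epoch selects which checkpoint to decode; --avg averages the last
# N checkpoints up to that epoch (values here are placeholders).
./transducer_lstm/decode.py \
  --epoch 29 \
  --avg 13 \
  --exp-dir transducer_lstm/exp \
  --max-duration 100
```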