(To be filled in)

It will contain:

  - How to run
  - WERs

To run the recipe:

```bash
cd $PWD/..

./prepare.sh

./tdnn_lstm_ctc/train.py
```
If you have 4 GPUs and want to use GPU 1 and GPU 3 for DDP training, you can do the following:

```bash
export CUDA_VISIBLE_DEVICES="1,3"
./tdnn_lstm_ctc/train.py --world-size=2
```
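
Note that `CUDA_VISIBLE_DEVICES="1,3"` renumbers the selected GPUs, so rank 0 uses physical GPU 1 (visible as `cuda:0`) and rank 1 uses physical GPU 3 (visible as `cuda:1`); `--world-size` should match the number of GPUs listed. The following is a minimal sketch of how a `--world-size` flag typically drives DDP training by spawning one process per visible GPU. It is not the actual `train.py`; the function names, port, and argument handling are illustrative assumptions.

```python
# A minimal sketch, not the actual train.py: how a --world-size flag
# typically launches DDP workers, one process per visible GPU.
import argparse
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def run(rank: int, world_size: int):
    # All workers join the same process group; NCCL is the usual backend for GPUs.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "12354")  # illustrative port
    dist.init_process_group("nccl", rank=rank, world_size=world_size)

    # With CUDA_VISIBLE_DEVICES="1,3", rank 0 sees physical GPU 1 as cuda:0
    # and rank 1 sees physical GPU 3 as cuda:1.
    device = torch.device("cuda", rank)
    torch.cuda.set_device(device)

    # ... build the model, wrap it in DistributedDataParallel, and train ...

    dist.destroy_process_group()


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--world-size", type=int, default=1)
    args = parser.parse_args()
    # One training process per requested GPU; each receives its rank (0..world_size-1).
    mp.spawn(run, args=(args.world_size,), nprocs=args.world_size, join=True)


if __name__ == "__main__":
    main()
```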