minor fix for docstr and default param. (#1490)

* Update train.py and README.md
Author: zr_jin (2024-02-05 12:47:52 +08:00), committed by GitHub
parent b9e6327adf
commit a813186f64
2 changed files with 6 additions and 3 deletions

README.md

@@ -74,6 +74,9 @@ The [LibriSpeech][librispeech] recipe supports the most comprehensive set of mod
 - LSTM-based Predictor
 - [Stateless Predictor](https://research.google/pubs/rnn-transducer-with-stateless-prediction-network/)
+#### Whisper
+
+- [OpenAI Whisper](https://arxiv.org/abs/2212.04356) (We support fine-tuning on AiShell-1.)
 
 If you are willing to contribute to icefall, please refer to [contributing](https://icefall.readthedocs.io/en/latest/contributing/index.html) for more details.
 
 We would like to highlight the performance of some of the recipes here.

train.py

@@ -19,7 +19,7 @@
 Usage:
 
 #fine-tuning with deepspeed zero stage 1
-torchrun --nproc-per-node 8 ./whisper/train.py \
+torchrun --nproc_per_node 8 ./whisper/train.py \
   --max-duration 200 \
   --exp-dir whisper/exp_large_v2 \
   --model-name large-v2 \
@@ -28,7 +28,7 @@ torchrun --nproc-per-node 8 ./whisper/train.py \
   --deepspeed_config ./whisper/ds_config_zero1.json
 
 # fine-tuning with ddp
-torchrun --nproc-per-node 8 ./whisper/train.py \
+torchrun --nproc_per_node 8 ./whisper/train.py \
   --max-duration 200 \
   --exp-dir whisper/exp_medium \
   --manifest-dir data/fbank_whisper \
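The hyphen-to-underscore change in the docstring matters because argparse-based CLIs such as torchrun match option strings literally: a flag registered with an underscore is not automatically reachable under a hyphenated spelling. A minimal sketch with a toy parser (not torchrun's actual code) illustrates why only the underscore form is safe here:

```python
import argparse

# Toy parser that, like the CLI being documented, registers only the
# underscore spelling of the flag.
parser = argparse.ArgumentParser()
parser.add_argument("--nproc_per_node", type=int, default=1)

# The underscore spelling parses fine.
args = parser.parse_args(["--nproc_per_node", "8"])
print(args.nproc_per_node)  # 8

# The hyphenated spelling is a different option string and is rejected.
try:
    parser.parse_args(["--nproc-per-node", "8"])
except SystemExit:
    print("hyphenated spelling rejected")
```

This is why the commit aligns the docstring examples with the spelling the tool actually accepts.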
@@ -136,7 +136,7 @@ def get_parser():
     parser.add_argument(
         "--exp-dir",
         type=str,
-        default="pruned_transducer_stateless7/exp",
+        default="whisper/exp",
         help="""The experiment dir.
         It specifies the directory where all training related
         files, e.g., checkpoints, log, etc, are saved
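The corrected default can be sketched in isolation (the help text below is abbreviated). Note that argparse converts the hyphen in `--exp-dir` to an underscore for the resulting attribute name:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--exp-dir",
    type=str,
    default="whisper/exp",  # was "pruned_transducer_stateless7/exp" before this commit
    help="The experiment dir where checkpoints, logs, etc. are saved.",
)

# With no flag given, the new default applies; argparse exposes it as exp_dir.
args = parser.parse_args([])
print(args.exp_dir)  # whisper/exp
```

Using a default that matches the recipe's own directory (`whisper/exp`) avoids silently writing checkpoints into another recipe's experiment folder.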