[Ready to merge] Pruned Transducer Stateless2 for WenetSpeech (char-based) (#349)

* add char-based pruned-rnnt2 for wenetspeech
* style check
* style check
* change for export.py
* do some changes
* do some changes
* a small change for .flake8
* solve the conflicts

This commit is contained in:
parent 2f1e23cde1
commit 0e57b30495

README.md | 33
@@ -20,6 +20,8 @@ We provide 6 recipes at present:

  - [TIMIT][timit]
  - [TED-LIUM3][tedlium3]
  - [GigaSpeech][gigaspeech]
  - [Aidatatang_200zh][aidatatang_200zh]
  - [WenetSpeech][wenetspeech]

### yesno
@@ -217,6 +219,33 @@ and [Pruned stateless RNN-T: Conformer encoder + Embedding decoder + k2 pruned R

| fast beam search     | 10.50 | 10.69 |
| modified beam search | 10.40 | 10.51 |

### Aidatatang_200zh

We provide one model for this recipe: [Pruned stateless RNN-T: Conformer encoder + Embedding decoder + k2 pruned RNN-T loss][Aidatatang_200zh_pruned_transducer_stateless2].

#### Pruned stateless RNN-T: Conformer encoder + Embedding decoder + k2 pruned RNN-T loss

|                      | Dev  | Test |
|----------------------|------|------|
| greedy search        | 5.53 | 6.59 |
| fast beam search     | 5.30 | 6.34 |
| modified beam search | 5.27 | 6.33 |

We provide a Colab notebook to run a pre-trained Pruned Transducer Stateless model: [](https://colab.research.google.com/drive/1wNSnSj3T5oOctbh5IGCa393gKOoQw2GH?usp=sharing)

### WenetSpeech

We provide one model for this recipe: [Pruned stateless RNN-T: Conformer encoder + Embedding decoder + k2 pruned RNN-T loss][WenetSpeech_pruned_transducer_stateless2].

#### Pruned stateless RNN-T: Conformer encoder + Embedding decoder + k2 pruned RNN-T loss (trained with L subset)

|                      | Dev  | Test-Net | Test-Meeting |
|----------------------|------|----------|--------------|
| greedy search        | 7.80 | 8.75     | 13.49        |
| fast beam search     | 7.94 | 8.74     | 13.80        |
| modified beam search | 7.76 | 8.71     | 13.41        |

We provide a Colab notebook to run a pre-trained Pruned Transducer Stateless model: [](https://colab.research.google.com/drive/1EV4e1CHa1GZgEF-bZgizqI9RyFFehIiN?usp=sharing)

## Deployment with C++

@@ -243,10 +272,14 @@ Please see: [ contains the latest results.

# Transducers

There are various folders containing the name `transducer` in this folder.
The following table lists the differences among them.

|                                | Encoder             | Decoder            | Comment                    |
|--------------------------------|---------------------|--------------------|----------------------------|
| `pruned_transducer_stateless2` | Conformer(modified) | Embedding + Conv1d | Using k2 pruned RNN-T loss |

The decoder in `transducer_stateless` is modified from the paper
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419/).
We place an additional Conv1d layer right after the input embedding layer.
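Note: the "Embedding + Conv1d" stateless decoder mentioned above can be sketched as follows. This is only an illustration of the idea; the embedding size, the `context_size` of 2, and the depthwise convolution are assumptions, not the exact `pruned_transducer_stateless2` implementation.

```
import torch
import torch.nn as nn


class StatelessDecoder(nn.Module):
    """Embedding + Conv1d prediction network: instead of an RNN state,
    the decoder only looks at the last `context_size` output tokens."""

    def __init__(self, vocab_size: int, embed_dim: int = 512, context_size: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # 1-D convolution over the last `context_size` tokens.
        self.conv = nn.Conv1d(
            embed_dim, embed_dim, kernel_size=context_size, groups=embed_dim
        )
        self.context_size = context_size

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, num_tokens) of token ids
        emb = self.embedding(y).permute(0, 2, 1)  # (batch, embed_dim, num_tokens)
        # Left-pad so the convolution is causal and keeps the sequence length.
        emb = nn.functional.pad(emb, (self.context_size - 1, 0))
        out = self.conv(emb).permute(0, 2, 1)  # (batch, num_tokens, embed_dim)
        return torch.relu(out)


# Example: a batch of 3 utterances, 5 decoded tokens each.
decoder = StatelessDecoder(vocab_size=500)
tokens = torch.randint(0, 500, (3, 5))
print(decoder(tokens).shape)  # torch.Size([3, 5, 512])
```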
egs/wenetspeech/ASR/RESULTS.md | 93 (new file)

@@ -0,0 +1,93 @@

## Results

### WenetSpeech char-based training results (Pruned Transducer 2)

#### 2022-05-19

Using the codes from this PR https://github.com/k2-fsa/icefall/pull/349.

When training with the L subset, the WERs are

|                                    | dev  | test-net | test-meeting | comment                                   |
|------------------------------------|------|----------|--------------|-------------------------------------------|
| greedy search                      | 7.80 | 8.75     | 13.49        | --epoch 10, --avg 2, --max-duration 100   |
| modified beam search (beam size 4) | 7.76 | 8.71     | 13.41        | --epoch 10, --avg 2, --max-duration 100   |
| fast beam search (set as default)  | 7.94 | 8.74     | 13.80        | --epoch 10, --avg 2, --max-duration 1500  |

The training command for reproducing is given below:

```
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"

./pruned_transducer_stateless2/train.py \
  --lang-dir data/lang_char \
  --exp-dir pruned_transducer_stateless2/exp \
  --world-size 8 \
  --num-epochs 15 \
  --start-epoch 0 \
  --max-duration 180 \
  --valid-interval 3000 \
  --model-warm-step 3000 \
  --save-every-n 8000 \
  --training-subset L
```

The tensorboard training log can be found at
https://tensorboard.dev/experiment/wM4ZUNtASRavJx79EOYYcg/#scalars

The decoding command is:

```
epoch=10
avg=2

## greedy search
./pruned_transducer_stateless2/decode.py \
  --epoch $epoch \
  --avg $avg \
  --exp-dir ./pruned_transducer_stateless2/exp \
  --lang-dir data/lang_char \
  --max-duration 100 \
  --decoding-method greedy_search

## modified beam search
./pruned_transducer_stateless2/decode.py \
  --epoch $epoch \
  --avg $avg \
  --exp-dir ./pruned_transducer_stateless2/exp \
  --lang-dir data/lang_char \
  --max-duration 100 \
  --decoding-method modified_beam_search \
  --beam-size 4

## fast beam search
./pruned_transducer_stateless2/decode.py \
  --epoch $epoch \
  --avg $avg \
  --exp-dir ./pruned_transducer_stateless2/exp \
  --lang-dir data/lang_char \
  --max-duration 1500 \
  --decoding-method fast_beam_search \
  --beam 4 \
  --max-contexts 4 \
  --max-states 8
```

When training with the M subset, the WERs are

|                                    | dev   | test-net | test-meeting | comment                                    |
|------------------------------------|-------|----------|--------------|--------------------------------------------|
| greedy search                      | 10.40 | 11.31    | 19.64        | --epoch 29, --avg 11, --max-duration 100   |
| modified beam search (beam size 4) | 9.85  | 11.04    | 18.20        | --epoch 29, --avg 11, --max-duration 100   |
| fast beam search (set as default)  | 10.18 | 11.10    | 19.32        | --epoch 29, --avg 11, --max-duration 1500  |

When training with the S subset, the WERs are

|                                    | dev   | test-net | test-meeting | comment                                    |
|------------------------------------|-------|----------|--------------|--------------------------------------------|
| greedy search                      | 19.92 | 25.20    | 35.35        | --epoch 29, --avg 24, --max-duration 100   |
| modified beam search (beam size 4) | 18.62 | 23.88    | 33.80        | --epoch 29, --avg 24, --max-duration 100   |
| fast beam search (set as default)  | 19.31 | 24.41    | 34.87        | --epoch 29, --avg 24, --max-duration 1500  |

A pre-trained model and decoding logs can be found at <https://huggingface.co/luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2>
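Note: the `--epoch`/`--avg` options in the decoding commands above select checkpoints to average before decoding (e.g. `--epoch 10 --avg 2` averages the last two epoch checkpoints). A minimal sketch of such parameter averaging, assuming icefall-style checkpoint files that store the parameters under a "model" key:

```
from typing import List

import torch


def average_checkpoints(filenames: List[str]) -> dict:
    """Average the model parameters stored in several checkpoint files."""
    avg = torch.load(filenames[0], map_location="cpu")["model"]
    for name in filenames[1:]:
        state = torch.load(name, map_location="cpu")["model"]
        for k in avg:
            avg[k] = avg[k] + state[k]
    for k in avg:
        if avg[k].is_floating_point():
            avg[k] = avg[k] / len(filenames)
        else:
            avg[k] = avg[k] // len(filenames)
    return avg


# --epoch 10 --avg 2  ->  average epoch-9.pt and epoch-10.pt (paths are examples)
# averaged = average_checkpoints(["exp/epoch-9.pt", "exp/epoch-10.pt"])
# model.load_state_dict(averaged)
```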
egs/wenetspeech/ASR/local/compute_fbank_musan.py | 1 (symbolic link)

@@ -0,0 +1 @@
../../../librispeech/ASR/local/compute_fbank_musan.py
egs/wenetspeech/ASR/local/compute_fbank_wenetspeech_dev_test.py | 93 (executable file)

@@ -0,0 +1,93 @@

#!/usr/bin/env python3
# Copyright 2021 Johns Hopkins University (Piotr Żelasko)
# Copyright 2021 Xiaomi Corp. (Fangjun Kuang)
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import logging
from pathlib import Path

import torch
from lhotse import (
    CutSet,
    KaldifeatFbank,
    KaldifeatFbankConfig,
    LilcomHdf5Writer,
)

# Torch's multithreaded behavior needs to be disabled or
# it wastes a lot of CPU and slow things down.
# Do this outside of main() in case it needs to take effect
# even when we are not invoking the main (e.g. when spawning subprocesses).
torch.set_num_threads(1)
torch.set_num_interop_threads(1)


def compute_fbank_wenetspeech_dev_test():
    in_out_dir = Path("data/fbank")
    # number of workers in dataloader
    num_workers = 42

    # number of seconds in a batch
    batch_duration = 600

    subsets = ("S", "M", "DEV", "TEST_NET", "TEST_MEETING")

    device = torch.device("cpu")
    if torch.cuda.is_available():
        device = torch.device("cuda", 0)
    extractor = KaldifeatFbank(KaldifeatFbankConfig(device=device))

    logging.info(f"device: {device}")

    for partition in subsets:
        cuts_path = in_out_dir / f"cuts_{partition}.jsonl.gz"
        if cuts_path.is_file():
            logging.info(f"{cuts_path} exists - skipping")
            continue

        raw_cuts_path = in_out_dir / f"cuts_{partition}_raw.jsonl.gz"

        logging.info(f"Loading {raw_cuts_path}")
        cut_set = CutSet.from_file(raw_cuts_path)

        logging.info("Computing features")

        cut_set = cut_set.compute_and_store_features_batch(
            extractor=extractor,
            storage_path=f"{in_out_dir}/feats_{partition}",
            num_workers=num_workers,
            batch_duration=batch_duration,
            storage_type=LilcomHdf5Writer,
        )
        cut_set = cut_set.trim_to_supervisions(
            keep_overlapping=False, min_duration=None
        )

        logging.info(f"Saving to {cuts_path}")
        cut_set.to_file(cuts_path)


def main():
    formatter = (
        "%(asctime)s %(levelname)s [%(filename)s:%(lineno)d] %(message)s"
    )
    logging.basicConfig(format=formatter, level=logging.INFO)

    compute_fbank_wenetspeech_dev_test()


if __name__ == "__main__":
    main()
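Note: `batch_duration = 600` bounds how much audio the feature extractor processes at once and therefore sets the batch size dynamically. With the DEV-set statistics shown later (roughly 5.2 s per cut on average), that works out to about 115 cuts per batch; a tiny arithmetic sketch:

```
batch_duration = 600.0   # seconds of audio per extraction batch
mean_cut_duration = 5.2  # mean DEV cut length from display_manifest_statistics.py

print(round(batch_duration / mean_cut_duration))  # ~115 cuts per batch
```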
egs/wenetspeech/ASR/local/compute_fbank_wenetspeech_splits.py | 181 (executable file)

@@ -0,0 +1,181 @@

#!/usr/bin/env python3
# Copyright 2021 Johns Hopkins University (Piotr Żelasko)
# Copyright 2021 Xiaomi Corp. (Fangjun Kuang)
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import argparse
import logging
from datetime import datetime
from pathlib import Path

import torch
from lhotse import (
    ChunkedLilcomHdf5Writer,
    CutSet,
    KaldifeatFbank,
    KaldifeatFbankConfig,
    set_audio_duration_mismatch_tolerance,
    set_caching_enabled,
)

# Torch's multithreaded behavior needs to be disabled or
# it wastes a lot of CPU and slow things down.
# Do this outside of main() in case it needs to take effect
# even when we are not invoking the main (e.g. when spawning subprocesses).
torch.set_num_threads(1)
torch.set_num_interop_threads(1)


def get_parser():
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )

    parser.add_argument(
        "--training-subset",
        type=str,
        default="L",
        help="The training subset for computing fbank feature.",
    )

    parser.add_argument(
        "--num-workers",
        type=int,
        default=20,
        help="Number of dataloading workers used for reading the audio.",
    )

    parser.add_argument(
        "--batch-duration",
        type=float,
        default=600.0,
        help="The maximum number of audio seconds in a batch."
        "Determines batch size dynamically.",
    )

    parser.add_argument(
        "--num-splits",
        type=int,
        required=True,
        help="The number of splits of the L subset",
    )

    parser.add_argument(
        "--start",
        type=int,
        default=0,
        help="Process pieces starting from this number (inclusive).",
    )

    parser.add_argument(
        "--stop",
        type=int,
        default=-1,
        help="Stop processing pieces until this number (exclusive).",
    )
    return parser


def compute_fbank_wenetspeech_splits(args):
    subset = args.training_subset
    subset = str(subset)
    num_splits = args.num_splits
    output_dir = f"data/fbank/{subset}_split_{num_splits}"
    output_dir = Path(output_dir)
    assert output_dir.exists(), f"{output_dir} does not exist!"

    num_digits = len(str(num_splits))

    start = args.start
    stop = args.stop
    if stop < start:
        stop = num_splits

    stop = min(stop, num_splits)

    device = torch.device("cpu")
    if torch.cuda.is_available():
        device = torch.device("cuda", 0)
    extractor = KaldifeatFbank(KaldifeatFbankConfig(device=device))
    logging.info(f"device: {device}")

    set_audio_duration_mismatch_tolerance(0.01)  # 10ms tolerance
    set_caching_enabled(False)
    for i in range(start, stop):
        idx = f"{i + 1}".zfill(num_digits)
        logging.info(f"Processing {idx}/{num_splits}")

        cuts_path = output_dir / f"cuts_{subset}.{idx}.jsonl.gz"
        if cuts_path.is_file():
            logging.info(f"{cuts_path} exists - skipping")
            continue

        raw_cuts_path = output_dir / f"cuts_{subset}_raw.{idx}.jsonl.gz"

        logging.info(f"Loading {raw_cuts_path}")
        cut_set = CutSet.from_file(raw_cuts_path)

        logging.info("Computing features")

        cut_set = cut_set.compute_and_store_features_batch(
            extractor=extractor,
            storage_path=f"{output_dir}/feats_{subset}_{idx}",
            num_workers=args.num_workers,
            batch_duration=args.batch_duration,
            storage_type=ChunkedLilcomHdf5Writer,
        )

        logging.info("About to split cuts into smaller chunks.")
        cut_set = cut_set.trim_to_supervisions(
            keep_overlapping=False, min_duration=None
        )

        logging.info(f"Saving to {cuts_path}")
        cut_set.to_file(cuts_path)
        logging.info(f"Saved to {cuts_path}")


def main():
    now = datetime.now()
    date_time = now.strftime("%Y-%m-%d-%H-%M-%S")

    log_filename = "log-compute_fbank_wenetspeech_splits"
    formatter = (
        "%(asctime)s %(levelname)s [%(filename)s:%(lineno)d] %(message)s"
    )
    log_filename = f"{log_filename}-{date_time}"

    logging.basicConfig(
        filename=log_filename,
        format=formatter,
        level=logging.INFO,
        filemode="w",
    )

    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    console.setFormatter(logging.Formatter(formatter))
    logging.getLogger("").addHandler(console)

    parser = get_parser()
    args = parser.parse_args()
    logging.info(vars(args))

    compute_fbank_wenetspeech_splits(args)


if __name__ == "__main__":
    main()
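Note: the split indices above are 1-based and zero-padded to the width of `--num-splits`, so a job processing the first three pieces of a 1000-way split reads and writes files like the following (a small sketch of the same indexing logic):

```
num_splits = 1000
num_digits = len(str(num_splits))  # 4

for i in range(0, 3):
    idx = f"{i + 1}".zfill(num_digits)
    print(f"cuts_L_raw.{idx}.jsonl.gz -> cuts_L.{idx}.jsonl.gz")

# cuts_L_raw.0001.jsonl.gz -> cuts_L.0001.jsonl.gz
# cuts_L_raw.0002.jsonl.gz -> cuts_L.0002.jsonl.gz
# cuts_L_raw.0003.jsonl.gz -> cuts_L.0003.jsonl.gz
```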
egs/wenetspeech/ASR/local/display_manifest_statistics.py | 132 (new file)

@@ -0,0 +1,132 @@

#!/usr/bin/env python3
# Copyright 2021 Xiaomi Corp. (authors: Fangjun Kuang
#                                        Mingshuang Luo)
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
This file displays duration statistics of utterances in a manifest.
You can use the displayed value to choose minimum/maximum duration
to remove short and long utterances during the training.

See the function `remove_short_and_long_utt()`
in ../../../librispeech/ASR/transducer/train.py
for usage.
"""


from lhotse import load_manifest


def main():
    paths = [
        "./data/fbank/cuts_S.jsonl.gz",
        "./data/fbank/cuts_M.jsonl.gz",
        "./data/fbank/cuts_DEV.jsonl.gz",
        "./data/fbank/cuts_TEST_NET.jsonl.gz",
        "./data/fbank/cuts_TEST_MEETING.jsonl.gz",
    ]

    for path in paths:
        print(f"Starting display the statistics for {path}")
        cuts = load_manifest(path)
        cuts.describe()


if __name__ == "__main__":
    main()

"""
Starting display the statistics for ./data/fbank/cuts_S.jsonl.gz
Duration statistics (seconds):
mean    2.4
std     1.8
min     0.2
25%     1.4
50%     2.0
75%     2.9
99%     8.0
99.5%   8.7
99.9%   11.9
max     405.1

Starting display the statistics for ./data/fbank/cuts_M.jsonl.gz
Cuts count: 4543341
Total duration (hours): 3021.1
Speech duration (hours): 3021.1 (100.0%)
***
Duration statistics (seconds):
mean    2.4
std     1.6
min     0.2
25%     1.4
50%     2.0
75%     2.9
99%     8.0
99.5%   8.8
99.9%   12.1
max     405.1

Starting display the statistics for ./data/fbank/cuts_DEV.jsonl.gz
Cuts count: 13825
Total duration (hours): 20.0
Speech duration (hours): 20.0 (100.0%)
***
Duration statistics (seconds):
mean    5.2
std     2.2
min     1.0
25%     3.3
50%     4.9
75%     7.0
99%     9.6
99.5%   9.8
99.9%   10.0
max     10.0

Starting display the statistics for ./data/fbank/cuts_TEST_NET.jsonl.gz
Cuts count: 24774
Total duration (hours): 23.1
Speech duration (hours): 23.1 (100.0%)
***
Duration statistics (seconds):
mean    3.4
std     2.6
min     0.1
25%     1.4
50%     2.4
75%     4.8
99%     13.1
99.5%   14.5
99.9%   18.5
max     33.3

Starting display the statistics for ./data/fbank/cuts_TEST_MEETING.jsonl.gz
Cuts count: 8370
Total duration (hours): 15.2
Speech duration (hours): 15.2 (100.0%)
***
Duration statistics (seconds):
mean    6.5
std     3.5
min     0.8
25%     3.7
50%     5.8
75%     8.8
99%     15.2
99.5%   16.0
99.9%   18.8
max     24.6
"""
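Note: as the docstring says, these statistics are meant to guide a duration filter in train.py. A minimal sketch of such a filter is shown below; the 1 s / 15 s thresholds are assumptions motivated by the percentiles above (99.9% around 12 s, max of 405 s from a few outliers), not values taken from the recipe.

```
from lhotse import CutSet, load_manifest


def remove_short_and_long_utt(cuts: CutSet) -> CutSet:
    # Keep utterances between 1 and 15 seconds.
    return cuts.filter(lambda c: 1.0 <= c.duration <= 15.0)


if __name__ == "__main__":
    cuts = load_manifest("./data/fbank/cuts_S.jsonl.gz")
    kept = remove_short_and_long_utt(cuts)
    print(sum(1 for _ in kept), "cuts kept")
```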
egs/wenetspeech/ASR/local/prepare_char.py | 246 (executable file)

@@ -0,0 +1,246 @@

#!/usr/bin/env python3
# Copyright 2021 Xiaomi Corp. (authors: Fangjun Kuang,
#                                        Wei Kang,
#                                        Mingshuang Luo)
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


"""
This script takes as input `lang_dir`, which should contain::
    - lang_dir/text,
    - lang_dir/words.txt
and generates the following files in the directory `lang_dir`:
    - lexicon.txt
    - lexicon_disambig.txt
    - L.pt
    - L_disambig.pt
    - tokens.txt
"""

import argparse
import re
from pathlib import Path
from typing import Dict, List

import k2
import torch
from prepare_lang import (
    Lexicon,
    add_disambig_symbols,
    add_self_loops,
    write_lexicon,
    write_mapping,
)


def lexicon_to_fst_no_sil(
    lexicon: Lexicon,
    token2id: Dict[str, int],
    word2id: Dict[str, int],
    need_self_loops: bool = False,
) -> k2.Fsa:
    """Convert a lexicon to an FST (in k2 format).

    Args:
      lexicon:
        The input lexicon. See also :func:`read_lexicon`
      token2id:
        A dict mapping tokens to IDs.
      word2id:
        A dict mapping words to IDs.
      need_self_loops:
        If True, add self-loop to states with non-epsilon output symbols
        on at least one arc out of the state. The input label for this
        self loop is `token2id["#0"]` and the output label is `word2id["#0"]`.
    Returns:
      Return an instance of `k2.Fsa` representing the given lexicon.
    """
    loop_state = 0  # words enter and leave from here
    next_state = 1  # the next un-allocated state, will be incremented as we go

    arcs = []

    # The blank symbol <blk> is defined in local/train_bpe_model.py
    assert token2id["<blk>"] == 0
    assert word2id["<eps>"] == 0

    eps = 0

    for word, pieces in lexicon:
        assert len(pieces) > 0, f"{word} has no pronunciations"
        cur_state = loop_state

        word = word2id[word]
        pieces = [
            token2id[i] if i in token2id else token2id["<unk>"] for i in pieces
        ]

        for i in range(len(pieces) - 1):
            w = word if i == 0 else eps
            arcs.append([cur_state, next_state, pieces[i], w, 0])

            cur_state = next_state
            next_state += 1

        # now for the last piece of this word
        i = len(pieces) - 1
        w = word if i == 0 else eps
        arcs.append([cur_state, loop_state, pieces[i], w, 0])

    if need_self_loops:
        disambig_token = token2id["#0"]
        disambig_word = word2id["#0"]
        arcs = add_self_loops(
            arcs,
            disambig_token=disambig_token,
            disambig_word=disambig_word,
        )

    final_state = next_state
    arcs.append([loop_state, final_state, -1, -1, 0])
    arcs.append([final_state])

    arcs = sorted(arcs, key=lambda arc: arc[0])
    arcs = [[str(i) for i in arc] for arc in arcs]
    arcs = [" ".join(arc) for arc in arcs]
    arcs = "\n".join(arcs)

    fsa = k2.Fsa.from_str(arcs, acceptor=False)
    return fsa


def contain_oov(token_sym_table: Dict[str, int], tokens: List[str]) -> bool:
    """Check if all the given tokens are in token symbol table.

    Args:
      token_sym_table:
        Token symbol table that contains all the valid tokens.
      tokens:
        A list of tokens.
    Returns:
      Return True if there is any token not in the token_sym_table,
      otherwise False.
    """
    for tok in tokens:
        if tok not in token_sym_table:
            return True
    return False


def generate_lexicon(
    token_sym_table: Dict[str, int], words: List[str]
) -> Lexicon:
    """Generate a lexicon from a word list and token_sym_table.

    Args:
      token_sym_table:
        Token symbol table that mapping token to token ids.
      words:
        A list of strings representing words.
    Returns:
      Return a dict whose keys are words and values are the corresponding
      tokens.
    """
    lexicon = []
    for word in words:
        chars = list(word.strip(" \t"))
        if contain_oov(token_sym_table, chars):
            continue
        lexicon.append((word, chars))

    # The OOV word is <UNK>
    lexicon.append(("<UNK>", ["<unk>"]))
    return lexicon


def generate_tokens(text_file: str) -> Dict[str, int]:
    """Generate tokens from the given text file.

    Args:
      text_file:
        A file that contains text lines to generate tokens.
    Returns:
      Return a dict whose keys are tokens and values are token ids ranged
      from 0 to len(keys) - 1.
    """
    tokens: Dict[str, int] = dict()
    tokens["<blk>"] = 0
    tokens["<sos/eos>"] = 1
    tokens["<unk>"] = 2
    whitespace = re.compile(r"([ \t\r\n]+)")
    with open(text_file, "r", encoding="utf-8") as f:
        for line in f:
            line = re.sub(whitespace, "", line)
            tokens_list = list(line)
            for token in tokens_list:
                if token not in tokens:
                    tokens[token] = len(tokens)
    return tokens


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--lang-dir", type=str, help="The lang directory.")
    args = parser.parse_args()

    lang_dir = Path(args.lang_dir)
    text_file = lang_dir / "text"

    word_sym_table = k2.SymbolTable.from_file(lang_dir / "words.txt")

    words = word_sym_table.symbols

    excluded = ["<eps>", "!SIL", "<SPOKEN_NOISE>", "<UNK>", "#0", "<s>", "</s>"]
    for w in excluded:
        if w in words:
            words.remove(w)

    token_sym_table = generate_tokens(text_file)

    lexicon = generate_lexicon(token_sym_table, words)

    lexicon_disambig, max_disambig = add_disambig_symbols(lexicon)

    next_token_id = max(token_sym_table.values()) + 1
    for i in range(max_disambig + 1):
        disambig = f"#{i}"
        assert disambig not in token_sym_table
        token_sym_table[disambig] = next_token_id
        next_token_id += 1

    word_sym_table.add("#0")
    word_sym_table.add("<s>")
    word_sym_table.add("</s>")

    write_mapping(lang_dir / "tokens.txt", token_sym_table)

    write_lexicon(lang_dir / "lexicon.txt", lexicon)
    write_lexicon(lang_dir / "lexicon_disambig.txt", lexicon_disambig)

    L = lexicon_to_fst_no_sil(
        lexicon,
        token2id=token_sym_table,
        word2id=word_sym_table,
    )

    L_disambig = lexicon_to_fst_no_sil(
        lexicon_disambig,
        token2id=token_sym_table,
        word2id=word_sym_table,
        need_self_loops=True,
    )
    torch.save(L.as_dict(), lang_dir / "L.pt")
    torch.save(L_disambig.as_dict(), lang_dir / "L_disambig.pt")


if __name__ == "__main__":
    main()
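Note: to see what the helpers above produce, here is a small self-contained illustration of the token table and character lexicon for a toy word list. The special symbols mirror the script; the toy words are made up.

```
# Toy token table, as generate_tokens() would build it: special symbols first,
# then every character seen in the text, in order of first appearance.
token_sym_table = {"<blk>": 0, "<sos/eos>": 1, "<unk>": 2, "你": 3, "好": 4, "世": 5, "界": 6}

words = ["你好", "世界", "外来词"]  # "外来词" contains characters not in the table

lexicon = []
for word in words:
    chars = list(word)
    if any(c not in token_sym_table for c in chars):  # same check as contain_oov()
        continue
    lexicon.append((word, chars))
lexicon.append(("<UNK>", ["<unk>"]))

print(lexicon)
# [('你好', ['你', '好']), ('世界', ['世', '界']), ('<UNK>', ['<unk>'])]
```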
egs/wenetspeech/ASR/local/prepare_lang.py | 1 (symbolic link)

@@ -0,0 +1 @@
../../../librispeech/ASR/local/prepare_lang.py
egs/wenetspeech/ASR/local/prepare_words.py | 84 (new file)

@@ -0,0 +1,84 @@

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Copyright 2021 Xiaomi Corp. (authors: Mingshuang Luo)
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


"""
This script takes as input words.txt without ids:
    - words_no_ids.txt
and generates the new words.txt with related ids.
    - words.txt
"""


import argparse
import logging

from tqdm import tqdm


def get_parser():
    parser = argparse.ArgumentParser(
        description="Prepare words.txt",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument(
        "--input-file",
        default="data/lang_char/words_no_ids.txt",
        type=str,
        help="the words file without ids for WenetSpeech",
    )
    parser.add_argument(
        "--output-file",
        default="data/lang_char/words.txt",
        type=str,
        help="the words file with ids for WenetSpeech",
    )

    return parser


def main():
    parser = get_parser()
    args = parser.parse_args()

    input_file = args.input_file
    output_file = args.output_file

    f = open(input_file, "r", encoding="utf-8")
    lines = f.readlines()
    new_lines = []
    add_words = ["<eps> 0", "!SIL 1", "<SPOKEN_NOISE> 2", "<UNK> 3"]
    new_lines.extend(add_words)

    logging.info("Starting reading the input file")
    for i in tqdm(range(len(lines))):
        x = lines[i]
        idx = 4 + i
        new_line = str(x.strip("\n")) + " " + str(idx)
        new_lines.append(new_line)

    logging.info("Starting writing the words.txt")
    f_out = open(output_file, "w", encoding="utf-8")
    for line in new_lines:
        f_out.write(line)
        f_out.write("\n")


if __name__ == "__main__":
    main()
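Note: the resulting words.txt follows the usual Kaldi-style "word id" format: the four special entries come first and every word from words_no_ids.txt gets the next consecutive id. A tiny sketch of the expected output (the word entries are made-up examples):

```
special = ["<eps> 0", "!SIL 1", "<SPOKEN_NOISE> 2", "<UNK> 3"]
words_no_ids = ["一起", "世界", "你好"]  # example contents of words_no_ids.txt

lines = special + [f"{w} {i + 4}" for i, w in enumerate(words_no_ids)]
print("\n".join(lines))
# <eps> 0
# !SIL 1
# <SPOKEN_NOISE> 2
# <UNK> 3
# 一起 4
# 世界 5
# 你好 6
```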
egs/wenetspeech/ASR/local/preprocess_wenetspeech.py | 120 (executable file)

@@ -0,0 +1,120 @@

#!/usr/bin/env python3
# Copyright 2021 Johns Hopkins University (Piotr Żelasko)
# Copyright 2021 Xiaomi Corp. (Fangjun Kuang)
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import logging
import re
from pathlib import Path

from lhotse import CutSet, SupervisionSegment
from lhotse.recipes.utils import read_manifests_if_cached

# Similar text filtering and normalization procedure as in:
# https://github.com/SpeechColab/WenetSpeech/blob/main/toolkits/kaldi/wenetspeech_data_prep.sh


def normalize_text(
    utt: str,
    # punct_pattern=re.compile(r"<(COMMA|PERIOD|QUESTIONMARK|EXCLAMATIONPOINT)>"),
    punct_pattern=re.compile(r"<(PERIOD|QUESTIONMARK|EXCLAMATIONPOINT)>"),
    whitespace_pattern=re.compile(r"\s\s+"),
) -> str:
    return whitespace_pattern.sub(" ", punct_pattern.sub("", utt))


def has_no_oov(
    sup: SupervisionSegment,
    oov_pattern=re.compile(r"<(SIL|MUSIC|NOISE|OTHER)>"),
) -> bool:
    return oov_pattern.search(sup.text) is None


def preprocess_wenet_speech():
    src_dir = Path("data/manifests")
    output_dir = Path("data/fbank")
    output_dir.mkdir(exist_ok=True)

    dataset_parts = (
        "L",
        "M",
        "S",
        "DEV",
        "TEST_NET",
        "TEST_MEETING",
    )

    logging.info("Loading manifest (may take 10 minutes)")
    manifests = read_manifests_if_cached(
        dataset_parts=dataset_parts,
        output_dir=src_dir,
        suffix="jsonl.gz",
    )
    assert manifests is not None

    for partition, m in manifests.items():
        logging.info(f"Processing {partition}")
        raw_cuts_path = output_dir / f"cuts_{partition}_raw.jsonl.gz"
        if raw_cuts_path.is_file():
            logging.info(f"{partition} already exists - skipping")
            continue

        # Note this step makes the recipe different than LibriSpeech:
        # We must filter out some utterances and remove punctuation
        # to be consistent with Kaldi.
        logging.info("Filtering OOV utterances from supervisions")
        m["supervisions"] = m["supervisions"].filter(has_no_oov)
        logging.info(f"Normalizing text in {partition}")
        for sup in m["supervisions"]:
            text = str(sup.text)
            logging.info(f"Original text: {text}")
            sup.text = normalize_text(sup.text)
            text = str(sup.text)
            logging.info(f"Normalize text: {text}")

        # Create long-recording cut manifests.
        logging.info(f"Processing {partition}")
        cut_set = CutSet.from_manifests(
            recordings=m["recordings"],
            supervisions=m["supervisions"],
        )
        # Run data augmentation that needs to be done in the
        # time domain.
        if partition not in ["DEV", "TEST_NET", "TEST_MEETING"]:
            logging.info(
                f"Speed perturb for {partition} with factors 0.9 and 1.1 "
                "(Perturbing may take 8 minutes and saving may take 20 minutes)"
            )
            cut_set = (
                cut_set
                + cut_set.perturb_speed(0.9)
                + cut_set.perturb_speed(1.1)
            )
        logging.info(f"Saving to {raw_cuts_path}")
        cut_set.to_file(raw_cuts_path)


def main():
    formatter = (
        "%(asctime)s %(levelname)s [%(filename)s:%(lineno)d] %(message)s"
    )
    logging.basicConfig(format=formatter, level=logging.INFO)

    preprocess_wenet_speech()


if __name__ == "__main__":
    main()
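Note: a quick illustration of the two helpers above on made-up WenetSpeech-style transcripts (the tag names come from the script's regular expressions):

```
import re

punct_pattern = re.compile(r"<(PERIOD|QUESTIONMARK|EXCLAMATIONPOINT)>")
whitespace_pattern = re.compile(r"\s\s+")
oov_pattern = re.compile(r"<(SIL|MUSIC|NOISE|OTHER)>")


def normalize_text(utt: str) -> str:
    return whitespace_pattern.sub(" ", punct_pattern.sub("", utt))


print(normalize_text("今天天气不错<PERIOD>  我们出去走走<QUESTIONMARK>"))
# -> "今天天气不错 我们出去走走"
print(oov_pattern.search("<MUSIC>") is None)        # False: dropped by has_no_oov
print(oov_pattern.search("今天天气不错") is None)    # True: kept
```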
egs/wenetspeech/ASR/local/text2segments.py | 83 (new file)

@@ -0,0 +1,83 @@

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# Copyright 2021 Xiaomi Corp. (authors: Mingshuang Luo)
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


"""
This script takes as input "text", which refers to the transcript file for
WenetSpeech:
    - text
and generates the output file text_word_segmentation which is implemented
with word segmenting:
    - text_words_segmentation
"""


import argparse

import jieba
from tqdm import tqdm

jieba.enable_paddle()


def get_parser():
    parser = argparse.ArgumentParser(
        description="Chinese Word Segmentation for text",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument(
        "--input-file",
        default="data/lang_char/text",
        type=str,
        help="the input text file for WenetSpeech",
    )
    parser.add_argument(
        "--output-file",
        default="data/lang_char/text_words_segmentation",
        type=str,
        help="the text implemented with words segmenting for WenetSpeech",
    )

    return parser


def main():
    parser = get_parser()
    args = parser.parse_args()

    # argparse stores "--input-file" as args.input_file, so read the
    # attributes that actually exist on the namespace.
    input_file = args.input_file
    output_file = args.output_file

    f = open(input_file, "r", encoding="utf-8")
    lines = f.readlines()
    new_lines = []
    for i in tqdm(range(len(lines))):
        x = lines[i].rstrip()
        seg_list = jieba.cut(x, use_paddle=True)
        new_line = " ".join(seg_list)
        new_lines.append(new_line)

    f_new = open(output_file, "w", encoding="utf-8")
    for line in new_lines:
        f_new.write(line)
        f_new.write("\n")


if __name__ == "__main__":
    main()
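Note: jieba turns each transcript line into space-separated words; a minimal example, without the paddle mode that the script enables, so the exact segmentation may differ:

```
import jieba

line = "今天天气不错我们一起出去玩"
print(" ".join(jieba.cut(line)))
# e.g. "今天天气 不错 我们 一起 出去 玩"; the precise split depends on the
# jieba version and on whether paddle mode is enabled.
```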
egs/wenetspeech/ASR/local/text2token.py | 196 (executable file)

@@ -0,0 +1,196 @@

#!/usr/bin/env python3
# Copyright 2017 Johns Hopkins University (authors: Shinji Watanabe)
#           2022 Xiaomi Corp. (authors: Mingshuang Luo)
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import argparse
import codecs
import re
import sys
from typing import List

from pypinyin import lazy_pinyin, pinyin

is_python2 = sys.version_info[0] == 2


def exist_or_not(i, match_pos):
    start_pos = None
    end_pos = None
    for pos in match_pos:
        if pos[0] <= i < pos[1]:
            start_pos = pos[0]
            end_pos = pos[1]
            break

    return start_pos, end_pos


def get_parser():
    parser = argparse.ArgumentParser(
        description="convert raw text to tokenized text",
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
    )
    parser.add_argument(
        "--nchar",
        "-n",
        default=1,
        type=int,
        help="number of characters to split, i.e., \
        aabb -> a a b b with -n 1 and aa bb with -n 2",
    )
    parser.add_argument(
        "--skip-ncols", "-s", default=0, type=int, help="skip first n columns"
    )
    parser.add_argument(
        "--space", default="<space>", type=str, help="space symbol"
    )
    parser.add_argument(
        "--non-lang-syms",
        "-l",
        default=None,
        type=str,
        help="list of non-linguistic symbols, e.g., <NOISE> etc.",
    )
    parser.add_argument(
        "text", type=str, default=False, nargs="?", help="input text"
    )
    parser.add_argument(
        "--trans_type",
        "-t",
        type=str,
        default="char",
        choices=["char", "pinyin", "lazy_pinyin"],
        help="""Transcript type. char/pinyin/lazy_pinyin""",
    )
    return parser


def token2id(
    texts, token_table, token_type: str = "lazy_pinyin", oov: str = "<unk>"
) -> List[List[int]]:
    """Convert token to id.
    Args:
      texts:
        The input texts, it refers to the chinese text here.
      token_table:
        The token table is built based on "data/lang_xxx/token.txt"
      token_type:
        The type of token, such as "pinyin" and "lazy_pinyin".
      oov:
        Out of vocabulary token. When a word(token) in the transcript
        does not exist in the token list, it is replaced with `oov`.

    Returns:
      The list of ids for the input texts.
    """
    if texts is None:
        raise ValueError("texts can't be None!")
    else:
        oov_id = token_table[oov]
        ids: List[List[int]] = []
        for text in texts:
            chars_list = list(str(text))
            if token_type == "lazy_pinyin":
                text = lazy_pinyin(chars_list)
                sub_ids = [
                    token_table[txt] if txt in token_table else oov_id
                    for txt in text
                ]
                ids.append(sub_ids)
            else:  # token_type = "pinyin"
                text = pinyin(chars_list)
                sub_ids = [
                    token_table[txt[0]] if txt[0] in token_table else oov_id
                    for txt in text
                ]
                ids.append(sub_ids)
        return ids


def main():
    parser = get_parser()
    args = parser.parse_args()

    rs = []
    if args.non_lang_syms is not None:
        with codecs.open(args.non_lang_syms, "r", encoding="utf-8") as f:
            nls = [x.rstrip() for x in f.readlines()]
            rs = [re.compile(re.escape(x)) for x in nls]

    if args.text:
        f = codecs.open(args.text, encoding="utf-8")
    else:
        f = codecs.getreader("utf-8")(
            sys.stdin if is_python2 else sys.stdin.buffer
        )

    sys.stdout = codecs.getwriter("utf-8")(
        sys.stdout if is_python2 else sys.stdout.buffer
    )
    line = f.readline()
    n = args.nchar
    while line:
        x = line.split()
        print(" ".join(x[: args.skip_ncols]), end=" ")
        a = " ".join(x[args.skip_ncols :])  # noqa E203

        # get all matched positions
        match_pos = []
        for r in rs:
            i = 0
            while i >= 0:
                m = r.search(a, i)
                if m:
                    match_pos.append([m.start(), m.end()])
                    i = m.end()
                else:
                    break
        if len(match_pos) > 0:
            chars = []
            i = 0
            while i < len(a):
                start_pos, end_pos = exist_or_not(i, match_pos)
                if start_pos is not None:
                    chars.append(a[start_pos:end_pos])
                    i = end_pos
                else:
                    chars.append(a[i])
                    i += 1
            a = chars

        if args.trans_type == "pinyin":
            a = pinyin(list(str(a)))
            a = [one[0] for one in a]

        if args.trans_type == "lazy_pinyin":
            a = lazy_pinyin(list(str(a)))

        a = [a[j : j + n] for j in range(0, len(a), n)]  # noqa E203

        a_flat = []
        for z in a:
            a_flat.append("".join(z))

        a_chars = [z.replace(" ", args.space) for z in a_flat]

        print("".join(a_chars))
        line = f.readline()


if __name__ == "__main__":
    main()
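Note: the three `--trans_type` modes map a Chinese line to characters, tone-marked pinyin, or plain pinyin. A small pypinyin illustration of the latter two (assuming pypinyin is installed):

```
from pypinyin import lazy_pinyin, pinyin

chars = list("你好")
print(chars)               # ['你', '好']        (-t char)
print(pinyin(chars))       # [['nǐ'], ['hǎo']]   (-t pinyin)
print(lazy_pinyin(chars))  # ['ni', 'hao']       (-t lazy_pinyin)
```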
225
egs/wenetspeech/ASR/prepare.sh
Executable file
225
egs/wenetspeech/ASR/prepare.sh
Executable file
@ -0,0 +1,225 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
|
||||||
|
set -eou pipefail
|
||||||
|
|
||||||
|
nj=15
|
||||||
|
stage=0
|
||||||
|
stop_stage=100
|
||||||
|
|
||||||
|
# Split L subset to this number of pieces
|
||||||
|
# This is to avoid OOM during feature extraction.
|
||||||
|
num_splits=1000
|
||||||
|
|
||||||
|
# We assume dl_dir (download dir) contains the following
|
||||||
|
# directories and files. If not, they will be downloaded
|
||||||
|
# by this script automatically.
|
||||||
|
#
|
||||||
|
# - $dl_dir/WenetSpeech
|
||||||
|
# You can find audio, WenetSpeech.json inside it.
|
||||||
|
# You can apply for the download credentials by following
|
||||||
|
# https://github.com/wenet-e2e/WenetSpeech#download
|
||||||
|
#
|
||||||
|
# - $dl_dir/musan
|
||||||
|
# This directory contains the following directories downloaded from
|
||||||
|
# http://www.openslr.org/17/
|
||||||
|
#
|
||||||
|
# - music
|
||||||
|
# - noise
|
||||||
|
# - speech
|
||||||
|
|
||||||
|
dl_dir=$PWD/download
|
||||||
|
|
||||||
|
. shared/parse_options.sh || exit 1
|
||||||
|
|
||||||
|
# All files generated by this script are saved in "data".
|
||||||
|
# You can safely remove "data" and rerun this script to regenerate it.
|
||||||
|
mkdir -p data
|
||||||
|
|
||||||
|
log() {
|
||||||
|
# This function is from espnet
|
||||||
|
local fname=${BASH_SOURCE[1]##*/}
|
||||||
|
echo -e "$(date '+%Y-%m-%d %H:%M:%S') (${fname}:${BASH_LINENO[0]}:${FUNCNAME[1]}) $*"
|
||||||
|
}
|
||||||
|
|
||||||
|
log "dl_dir: $dl_dir"
|
||||||
|
|
||||||
|
if [ $stage -le 0 ] && [ $stop_stage -ge 0 ]; then
|
||||||
|
log "Stage 0: Download data"
|
||||||
|
|
||||||
|
[ ! -e $dl_dir/WenetSpeech ] && mkdir -p $dl_dir/WenetSpeech
|
||||||
|
|
||||||
|
# If you have pre-downloaded it to /path/to/WenetSpeech,
|
||||||
|
# you can create a symlink
|
||||||
|
#
|
||||||
|
# ln -sfv /path/to/WenetSpeech $dl_dir/WenetSpeech
|
||||||
|
#
|
||||||
|
if [ ! -d $dl_dir/WenetSpeech/wenet_speech ] && [ ! -f $dl_dir/WenetSpeech/metadata/v1.list ]; then
|
||||||
|
log "Stage 0: should download WenetSpeech first"
|
||||||
|
exit 1;
|
||||||
|
fi
|
||||||
|
|
||||||
|
# If you have pre-downloaded it to /path/to/musan,
|
||||||
|
# you can create a symlink
|
||||||
|
#
|
||||||
|
#ln -sfv /path/to/musan $dl_dir/musan
|
||||||
|
|
||||||
|
if [ ! -d $dl_dir/musan ]; then
|
||||||
|
lhotse download musan $dl_dir
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ $stage -le 1 ] && [ $stop_stage -ge 1 ]; then
|
||||||
|
log "Stage 1: Prepare WenetSpeech manifest"
|
||||||
|
# We assume that you have downloaded the WenetSpeech corpus
|
||||||
|
# to $dl_dir/WenetSpeech
|
||||||
|
mkdir -p data/manifests
|
||||||
|
lhotse prepare wenet-speech $dl_dir/WenetSpeech data/manifests -j $nj
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ $stage -le 2 ] && [ $stop_stage -ge 2 ]; then
|
||||||
|
log "Stage 2: Prepare musan manifest"
|
||||||
|
# We assume that you have downloaded the musan corpus
|
||||||
|
# to data/musan
|
||||||
|
mkdir -p data/manifests
|
||||||
|
lhotse prepare musan $dl_dir/musan data/manifests
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ $stage -le 3 ] && [ $stop_stage -ge 3 ]; then
|
||||||
|
log "Stage 3: Preprocess WenetSpeech manifest"
|
||||||
|
if [ ! -f data/fbank/.preprocess_complete ]; then
|
||||||
|
python3 ./local/preprocess_wenetspeech.py
|
||||||
|
touch data/fbank/.preprocess_complete
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ $stage -le 4 ] && [ $stop_stage -ge 4 ]; then
|
||||||
|
log "Stage 4: Compute features for DEV and TEST subsets of WenetSpeech (may take 2 minutes)"
|
||||||
|
  python3 ./local/compute_fbank_wenetspeech_dev_test.py
fi

if [ $stage -le 5 ] && [ $stop_stage -ge 5 ]; then
  log "Stage 5: Split S subset into ${num_splits} pieces"
  split_dir=data/fbank/S_split_${num_splits}
  if [ ! -f $split_dir/.split_completed ]; then
    lhotse split $num_splits ./data/fbank/cuts_S_raw.jsonl.gz $split_dir
    touch $split_dir/.split_completed
  fi
fi

if [ $stage -le 6 ] && [ $stop_stage -ge 6 ]; then
  log "Stage 6: Split M subset into ${num_splits} pieces"
  split_dir=data/fbank/M_split_${num_splits}
  if [ ! -f $split_dir/.split_completed ]; then
    lhotse split $num_splits ./data/fbank/cuts_M_raw.jsonl.gz $split_dir
    touch $split_dir/.split_completed
  fi
fi

if [ $stage -le 7 ] && [ $stop_stage -ge 7 ]; then
  log "Stage 7: Split L subset into ${num_splits} pieces"
  split_dir=data/fbank/L_split_${num_splits}
  if [ ! -f $split_dir/.split_completed ]; then
    lhotse split $num_splits ./data/fbank/cuts_L_raw.jsonl.gz $split_dir
    touch $split_dir/.split_completed
  fi
fi

if [ $stage -le 8 ] && [ $stop_stage -ge 8 ]; then
  log "Stage 8: Compute features for S"
  python3 ./local/compute_fbank_wenetspeech_splits.py \
    --training-subset S \
    --num-workers 20 \
    --batch-duration 600 \
    --start 0 \
    --num-splits $num_splits
fi

if [ $stage -le 9 ] && [ $stop_stage -ge 9 ]; then
  log "Stage 9: Compute features for M"
  python3 ./local/compute_fbank_wenetspeech_splits.py \
    --training-subset M \
    --num-workers 20 \
    --batch-duration 600 \
    --start 0 \
    --num-splits $num_splits
fi

if [ $stage -le 10 ] && [ $stop_stage -ge 10 ]; then
  log "Stage 10: Compute features for L"
  python3 ./local/compute_fbank_wenetspeech_splits.py \
    --training-subset L \
    --num-workers 20 \
    --batch-duration 600 \
    --start 0 \
    --num-splits $num_splits
fi

if [ $stage -le 11 ] && [ $stop_stage -ge 11 ]; then
  log "Stage 11: Combine features for S"
  if [ ! -f data/fbank/cuts_S.jsonl.gz ]; then
    pieces=$(find data/fbank/S_split_${num_splits} -name "cuts_S.*.jsonl.gz")
    lhotse combine $pieces data/fbank/cuts_S.jsonl.gz
  fi
fi

if [ $stage -le 12 ] && [ $stop_stage -ge 12 ]; then
  log "Stage 12: Combine features for M"
  if [ ! -f data/fbank/cuts_M.jsonl.gz ]; then
    pieces=$(find data/fbank/M_split_${num_splits} -name "cuts_M.*.jsonl.gz")
    lhotse combine $pieces data/fbank/cuts_M.jsonl.gz
  fi
fi

if [ $stage -le 13 ] && [ $stop_stage -ge 13 ]; then
  log "Stage 13: Combine features for L"
  if [ ! -f data/fbank/cuts_L.jsonl.gz ]; then
    pieces=$(find data/fbank/L_split_${num_splits} -name "cuts_L.*.jsonl.gz")
    lhotse combine $pieces data/fbank/cuts_L.jsonl.gz
  fi
fi

if [ $stage -le 14 ] && [ $stop_stage -ge 14 ]; then
  log "Stage 14: Compute fbank for musan"
  mkdir -p data/fbank
  ./local/compute_fbank_musan.py
fi

if [ $stage -le 15 ] && [ $stop_stage -ge 15 ]; then
  log "Stage 15: Prepare char based lang"
  lang_char_dir=data/lang_char
  mkdir -p $lang_char_dir

  # Prepare text.
  # Note: in Linux, you can install jq with the following command:
  # wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
  if [ ! -f $lang_char_dir/text ]; then
    gunzip -c data/manifests/supervisions_L.jsonl.gz \
      | jq '.text' | sed 's/"//g' \
      | ./local/text2token.py -t "char" > $lang_char_dir/text
  fi

  # Perform Chinese word segmentation on the text;
  # this takes about 15 minutes.
  if [ ! -f $lang_char_dir/text_words_segmentation ]; then
    python ./local/text2segments.py \
      --input-file $lang_char_dir/text \
      --output-file $lang_char_dir/text_words_segmentation
  fi

  cat $lang_char_dir/text_words_segmentation | sed 's/ /\n/g' \
    | sort -u | sed '/^$/d' > $lang_char_dir/words_no_ids.txt

  if [ ! -f $lang_char_dir/words.txt ]; then
    python ./local/prepare_words.py \
      --input-file $lang_char_dir/words_no_ids.txt \
      --output-file $lang_char_dir/words.txt
  fi
fi

if [ $stage -le 16 ] && [ $stop_stage -ge 16 ]; then
  log "Stage 16: Prepare char based L_disambig.pt"
  if [ ! -f data/lang_char/L_disambig.pt ]; then
    python ./local/prepare_char.py \
      --lang-dir data/lang_char
  fi
fi
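
The split, compute, and combine stages above leave combined manifests such as data/fbank/cuts_L.jsonl.gz on disk. As a quick sanity check, such a manifest can be opened lazily with lhotse, mirroring the --lazy-load path used by the data module below. A minimal sketch, assuming the paths produced by the script above:

#!/usr/bin/env python3
# Minimal sanity check: lazily iterate the combined manifest produced by
# the combine stages and report the duration of the first few cuts.
# Lazy opening avoids reading the whole manifest into memory.
from lhotse import CutSet

cuts = CutSet.from_jsonl_lazy("data/fbank/cuts_L.jsonl.gz")
total = 0.0
for i, cut in enumerate(cuts):
    total += cut.duration
    if i == 9:
        break
print(f"total duration of first 10 cuts: {total:.1f} s")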
450
egs/wenetspeech/ASR/pruned_transducer_stateless2/asr_datamodule.py
Normal file
@ -0,0 +1,450 @@
# Copyright 2021 Piotr Żelasko
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


import argparse
import inspect
import logging
from functools import lru_cache
from pathlib import Path
from typing import Any, Dict, Optional

import torch
from lhotse import (
    CutSet,
    Fbank,
    FbankConfig,
    load_manifest,
    set_caching_enabled,
)
from lhotse.dataset import (
    CutConcatenate,
    CutMix,
    DynamicBucketingSampler,
    K2SpeechRecognitionDataset,
    PrecomputedFeatures,
    SingleCutSampler,
    SpecAugment,
)
from lhotse.dataset.input_strategies import OnTheFlyFeatures
from lhotse.utils import fix_random_seed
from torch.utils.data import DataLoader

from icefall.utils import str2bool

set_caching_enabled(False)
torch.set_num_threads(1)


class _SeedWorkers:
    def __init__(self, seed: int):
        self.seed = seed

    def __call__(self, worker_id: int):
        fix_random_seed(self.seed + worker_id)


class WenetSpeechAsrDataModule:
    """
    DataModule for k2 ASR experiments.
    It assumes there is always one train and valid dataloader,
    but there can be multiple test dataloaders (e.g. WenetSpeech DEV,
    TEST_NET and TEST_MEETING).
    It contains all the common data pipeline modules used in ASR
    experiments, e.g.:
    - dynamic batch size,
    - bucketing samplers,
    - cut concatenation,
    - augmentation,
    - on-the-fly feature extraction
    This class should be derived for specific corpora used in ASR tasks.
    """

    def __init__(self, args: argparse.Namespace):
        self.args = args

    @classmethod
    def add_arguments(cls, parser: argparse.ArgumentParser):
        group = parser.add_argument_group(
            title="ASR data related options",
            description="These options are used for the preparation of "
            "PyTorch DataLoaders from Lhotse CutSet's -- they control the "
            "effective batch sizes, sampling strategies, applied data "
            "augmentations, etc.",
        )
        group.add_argument(
            "--manifest-dir",
            type=Path,
            default=Path("data/fbank"),
            help="Path to directory with train/valid/test cuts.",
        )
        group.add_argument(
            "--max-duration",
            type=int,
            default=200,
            help="Maximum pooled recordings duration (seconds) in a "
            "single batch. You can reduce it if it causes CUDA OOM.",
        )
        group.add_argument(
            "--bucketing-sampler",
            type=str2bool,
            default=True,
            help="When enabled, the batches will come from buckets of "
            "similar duration (saves padding frames).",
        )
        group.add_argument(
            "--num-buckets",
            type=int,
            default=300,
            help="The number of buckets for the DynamicBucketingSampler "
            "(you might want to increase it for larger datasets).",
        )
        group.add_argument(
            "--concatenate-cuts",
            type=str2bool,
            default=False,
            help="When enabled, utterances (cuts) will be concatenated "
            "to minimize the amount of padding.",
        )
        group.add_argument(
            "--duration-factor",
            type=float,
            default=1.0,
            help="Determines the maximum duration of a concatenated cut "
            "relative to the duration of the longest cut in a batch.",
        )
        group.add_argument(
            "--gap",
            type=float,
            default=1.0,
            help="The amount of padding (in seconds) inserted between "
            "concatenated cuts. This padding is filled with noise when "
            "noise augmentation is used.",
        )
        group.add_argument(
            "--on-the-fly-feats",
            type=str2bool,
            default=False,
            help="When enabled, use on-the-fly cut mixing and feature "
            "extraction. Will drop existing precomputed feature manifests "
            "if available.",
        )
        group.add_argument(
            "--shuffle",
            type=str2bool,
            default=True,
            help="When enabled (=default), the examples will be "
            "shuffled for each epoch.",
        )
        group.add_argument(
            "--return-cuts",
            type=str2bool,
            default=True,
            help="When enabled, each batch will have the "
            "field: batch['supervisions']['cut'] with the cuts that "
            "were used to construct it.",
        )

        group.add_argument(
            "--num-workers",
            type=int,
            default=2,
            help="The number of training dataloader workers that "
            "collect the batches.",
        )

        group.add_argument(
            "--enable-spec-aug",
            type=str2bool,
            default=True,
            help="When enabled, use SpecAugment for training dataset.",
        )

        group.add_argument(
            "--spec-aug-time-warp-factor",
            type=int,
            default=80,
            help="Used only when --enable-spec-aug is True. "
            "It specifies the factor for time warping in SpecAugment. "
            "Larger values mean more warping. "
            "A value less than 1 means to disable time warp.",
        )

        group.add_argument(
            "--enable-musan",
            type=str2bool,
            default=True,
            help="When enabled, select noise from MUSAN and mix it "
            "with the training dataset.",
        )

        group.add_argument(
            "--lazy-load",
            type=str2bool,
            default=True,
            help="Lazily open CutSets to avoid OOM (for the L or XL subset).",
        )

        group.add_argument(
            "--training-subset",
            type=str,
            default="L",
            help="The training subset to use.",
        )

    def train_dataloaders(
        self,
        cuts_train: CutSet,
        sampler_state_dict: Optional[Dict[str, Any]] = None,
    ) -> DataLoader:
        """
        Args:
          cuts_train:
            CutSet for training.
          sampler_state_dict:
            The state dict for the training sampler.
        """
        logging.info("About to get Musan cuts")
        cuts_musan = load_manifest(
            self.args.manifest_dir / "cuts_musan.json.gz"
        )

        transforms = []
        if self.args.enable_musan:
            logging.info("Enable MUSAN")
            transforms.append(
                CutMix(
                    cuts=cuts_musan, prob=0.5, snr=(10, 20), preserve_id=True
                )
            )
        else:
            logging.info("Disable MUSAN")

        if self.args.concatenate_cuts:
            logging.info(
                f"Using cut concatenation with duration factor "
                f"{self.args.duration_factor} and gap {self.args.gap}."
            )
            # Cut concatenation should be the first transform in the list,
            # so that if we e.g. mix noise in, it will fill the gaps between
            # different utterances.
            transforms = [
                CutConcatenate(
                    duration_factor=self.args.duration_factor, gap=self.args.gap
                )
            ] + transforms

        input_transforms = []
        if self.args.enable_spec_aug:
            logging.info("Enable SpecAugment")
            logging.info(
                f"Time warp factor: {self.args.spec_aug_time_warp_factor}"
            )
            # Set the value of num_frame_masks according to the installed
            # Lhotse version, since its default differs across versions.
            num_frame_masks = 10
            num_frame_masks_parameter = inspect.signature(
                SpecAugment.__init__
            ).parameters["num_frame_masks"]
            if num_frame_masks_parameter.default == 1:
                num_frame_masks = 2
            logging.info(f"Num frame mask: {num_frame_masks}")
            input_transforms.append(
                SpecAugment(
                    time_warp_factor=self.args.spec_aug_time_warp_factor,
                    num_frame_masks=num_frame_masks,
                    features_mask_size=27,
                    num_feature_masks=2,
                    frames_mask_size=100,
                )
            )
        else:
            logging.info("Disable SpecAugment")

        logging.info("About to create train dataset")
        train = K2SpeechRecognitionDataset(
            cut_transforms=transforms,
            input_transforms=input_transforms,
            return_cuts=self.args.return_cuts,
        )

        if self.args.on_the_fly_feats:
            # NOTE: the PerturbSpeed transform should be added only if we
            # remove it from the data prep stage.
            # Add on-the-fly speed perturbation; since originally it would
            # have increased epoch size by 3, we will apply prob 2/3 and use
            # 3x more epochs.
            # Speed perturbation probably should come first before
            # concatenation, but in principle the transforms order doesn't have
            # to be strict (e.g. could be randomized)
            # transforms = [PerturbSpeed(factors=[0.9, 1.1], p=2/3)] + transforms  # noqa
            # Drop feats to be on the safe side.
            train = K2SpeechRecognitionDataset(
                cut_transforms=transforms,
                input_strategy=OnTheFlyFeatures(
                    Fbank(FbankConfig(num_mel_bins=80))
                ),
                input_transforms=input_transforms,
                return_cuts=self.args.return_cuts,
            )

        if self.args.bucketing_sampler:
            logging.info("Using DynamicBucketingSampler.")
            train_sampler = DynamicBucketingSampler(
                cuts_train,
                max_duration=self.args.max_duration,
                shuffle=self.args.shuffle,
                num_buckets=self.args.num_buckets,
                buffer_size=30000,
                drop_last=True,
            )
        else:
            logging.info("Using SingleCutSampler.")
            train_sampler = SingleCutSampler(
                cuts_train,
                max_duration=self.args.max_duration,
                shuffle=self.args.shuffle,
            )
        logging.info("About to create train dataloader")

        # 'seed' is derived from the current random state, which will have
        # previously been set in the main process.
        seed = torch.randint(0, 100000, ()).item()
        worker_init_fn = _SeedWorkers(seed)

        train_dl = DataLoader(
            train,
            sampler=train_sampler,
            batch_size=None,
            num_workers=self.args.num_workers,
            persistent_workers=False,
            worker_init_fn=worker_init_fn,
        )

        if sampler_state_dict is not None:
            logging.info("Loading sampler state dict")
            train_dl.sampler.load_state_dict(sampler_state_dict)

        return train_dl

    def valid_dataloaders(self, cuts_valid: CutSet) -> DataLoader:
        transforms = []
        if self.args.concatenate_cuts:
            transforms = [
                CutConcatenate(
                    duration_factor=self.args.duration_factor, gap=self.args.gap
                )
            ] + transforms

        logging.info("About to create dev dataset")
        if self.args.on_the_fly_feats:
            validate = K2SpeechRecognitionDataset(
                cut_transforms=transforms,
                input_strategy=OnTheFlyFeatures(
                    Fbank(FbankConfig(num_mel_bins=80))
                ),
                return_cuts=self.args.return_cuts,
            )
        else:
            validate = K2SpeechRecognitionDataset(
                cut_transforms=transforms,
                return_cuts=self.args.return_cuts,
            )
        valid_sampler = DynamicBucketingSampler(
            cuts_valid,
            max_duration=self.args.max_duration,
            rank=0,
            world_size=1,
            shuffle=False,
        )
        logging.info("About to create dev dataloader")

        from lhotse.dataset.iterable_dataset import IterableDatasetWrapper

        dev_iter_dataset = IterableDatasetWrapper(
            dataset=validate,
            sampler=valid_sampler,
        )
        valid_dl = DataLoader(
            dev_iter_dataset,
            batch_size=None,
            num_workers=self.args.num_workers,
            persistent_workers=False,
        )

        return valid_dl

    def test_dataloaders(self, cuts: CutSet) -> DataLoader:
        logging.debug("About to create test dataset")
        test = K2SpeechRecognitionDataset(
            input_strategy=OnTheFlyFeatures(Fbank(FbankConfig(num_mel_bins=80)))
            if self.args.on_the_fly_feats
            else PrecomputedFeatures(),
            return_cuts=self.args.return_cuts,
        )
        sampler = DynamicBucketingSampler(
            cuts,
            max_duration=self.args.max_duration,
            rank=0,
            world_size=1,
            shuffle=False,
        )
        from lhotse.dataset.iterable_dataset import IterableDatasetWrapper

        test_iter_dataset = IterableDatasetWrapper(
            dataset=test,
            sampler=sampler,
        )
        test_dl = DataLoader(
            test_iter_dataset,
            batch_size=None,
            num_workers=self.args.num_workers,
        )
        return test_dl

    @lru_cache()
    def train_cuts(self) -> CutSet:
        logging.info("About to get train cuts")
        if self.args.lazy_load:
            logging.info("use lazy cuts")
            cuts_train = CutSet.from_jsonl_lazy(
                self.args.manifest_dir
                / f"cuts_{self.args.training_subset}.jsonl.gz"
            )
        else:
            cuts_train = CutSet.from_file(
                self.args.manifest_dir
                / f"cuts_{self.args.training_subset}.jsonl.gz"
            )
        return cuts_train

    @lru_cache()
    def valid_cuts(self) -> CutSet:
        logging.info("About to get dev cuts")
        return load_manifest(self.args.manifest_dir / "cuts_DEV.jsonl.gz")

    @lru_cache()
    def test_net_cuts(self) -> CutSet:
        logging.info("About to get TEST_NET cuts")
        return load_manifest(self.args.manifest_dir / "cuts_TEST_NET.jsonl.gz")

    @lru_cache()
    def test_meeting_cuts(self) -> CutSet:
        logging.info("About to get TEST_MEETING cuts")
        return load_manifest(
            self.args.manifest_dir / "cuts_TEST_MEETING.jsonl.gz"
        )
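
For orientation, here is a minimal sketch of how WenetSpeechAsrDataModule is typically driven from a script. The argument values are illustrative, and it assumes the script runs from egs/wenetspeech/ASR so that `asr_datamodule` and the default data/fbank paths resolve:

# Minimal usage sketch for WenetSpeechAsrDataModule; values are illustrative.
import argparse

from asr_datamodule import WenetSpeechAsrDataModule

parser = argparse.ArgumentParser()
WenetSpeechAsrDataModule.add_arguments(parser)
args = parser.parse_args(["--training-subset", "S", "--max-duration", "120"])

wenetspeech = WenetSpeechAsrDataModule(args)
train_dl = wenetspeech.train_dataloaders(wenetspeech.train_cuts())
for batch in train_dl:
    # Each batch carries padded fbank features of shape (N, T, C).
    print(batch["inputs"].shape)
    break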
1
egs/wenetspeech/ASR/pruned_transducer_stateless2/beam_search.py
Symbolic link
@ -0,0 +1 @@
../../../librispeech/ASR/pruned_transducer_stateless2/beam_search.py

1
egs/wenetspeech/ASR/pruned_transducer_stateless2/conformer.py
Symbolic link
@ -0,0 +1 @@
../../../librispeech/ASR/pruned_transducer_stateless2/conformer.py

623
egs/wenetspeech/ASR/pruned_transducer_stateless2/decode.py
Executable file
@ -0,0 +1,623 @@
#!/usr/bin/env python3
#
# Copyright 2021 Xiaomi Corporation (Author: Fangjun Kuang)
# Copyright 2022 Xiaomi Corporation (Author: Mingshuang Luo)
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
When training with the L subset, usage:
(1) greedy search
./pruned_transducer_stateless2/decode.py \
        --epoch 10 \
        --avg 2 \
        --exp-dir ./pruned_transducer_stateless2/exp \
        --lang-dir data/lang_char \
        --max-duration 100 \
        --decoding-method greedy_search

(2) modified beam search
./pruned_transducer_stateless2/decode.py \
        --epoch 10 \
        --avg 2 \
        --exp-dir ./pruned_transducer_stateless2/exp \
        --lang-dir data/lang_char \
        --max-duration 100 \
        --decoding-method modified_beam_search \
        --beam-size 4

(3) fast beam search
./pruned_transducer_stateless2/decode.py \
        --epoch 10 \
        --avg 2 \
        --exp-dir ./pruned_transducer_stateless2/exp \
        --lang-dir data/lang_char \
        --max-duration 1500 \
        --decoding-method fast_beam_search \
        --beam 4 \
        --max-contexts 4 \
        --max-states 8
"""


import argparse
import logging
from collections import defaultdict
from pathlib import Path
from typing import Dict, List, Optional, Tuple

import k2
import torch
import torch.nn as nn
from asr_datamodule import WenetSpeechAsrDataModule
from beam_search import (
    beam_search,
    fast_beam_search,
    greedy_search,
    greedy_search_batch,
    modified_beam_search,
)
from train import get_params, get_transducer_model

from icefall.checkpoint import (
    average_checkpoints,
    find_checkpoints,
    load_checkpoint,
)
from icefall.lexicon import Lexicon
from icefall.utils import (
    AttributeDict,
    setup_logger,
    store_transcripts,
    write_error_stats,
)


def get_parser():
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )

    parser.add_argument(
        "--epoch",
        type=int,
        default=28,
        help="It specifies the checkpoint to use for decoding. "
        "Note: Epoch counts from 0.",
    )

    parser.add_argument(
        "--batch",
        type=int,
        default=None,
        help="It specifies the batch checkpoint to use for decoding, "
        "i.e., exp_dir/checkpoint-{batch}.pt.",
    )

    parser.add_argument(
        "--avg",
        type=int,
        default=15,
        help="Number of checkpoints to average. Automatically select "
        "consecutive checkpoints before the checkpoint specified by "
        "'--epoch'. ",
    )

    parser.add_argument(
        "--avg-last-n",
        type=int,
        default=0,
        help="""If positive, --epoch and --avg are ignored and it
        will use the last n checkpoints exp_dir/checkpoint-xxx.pt
        where xxx is the number of processed batches while
        saving that checkpoint.
        """,
    )

    parser.add_argument(
        "--exp-dir",
        type=str,
        default="pruned_transducer_stateless2/exp",
        help="The experiment dir",
    )

    parser.add_argument(
        "--lang-dir",
        type=str,
        default="data/lang_char",
        help="""The lang dir
        It contains language related input files such as
        "lexicon.txt"
        """,
    )

    parser.add_argument(
        "--decoding-method",
        type=str,
        default="greedy_search",
        help="""Possible values are:
        - greedy_search
        - beam_search
        - modified_beam_search
        - fast_beam_search
        """,
    )

    parser.add_argument(
        "--beam-size",
        type=int,
        default=4,
        help="""An integer indicating how many candidates we will keep for each
        frame. Used only when --decoding-method is beam_search or
        modified_beam_search.""",
    )

    parser.add_argument(
        "--beam",
        type=float,
        default=4,
        help="""A floating point value to calculate the cutoff score during beam
        search (i.e., `cutoff = max-score - beam`), which is the same as the
        `beam` in Kaldi.
        Used only when --decoding-method is fast_beam_search""",
    )

    parser.add_argument(
        "--max-contexts",
        type=int,
        default=4,
        help="""Used only when --decoding-method is
        fast_beam_search""",
    )

    parser.add_argument(
        "--max-states",
        type=int,
        default=8,
        help="""Used only when --decoding-method is
        fast_beam_search""",
    )

    parser.add_argument(
        "--context-size",
        type=int,
        default=2,
        help="The context size in the decoder. 1 means bigram; "
        "2 means tri-gram",
    )
    parser.add_argument(
        "--max-sym-per-frame",
        type=int,
        default=1,
        help="""Maximum number of symbols per frame.
        Used only when --decoding-method is greedy_search""",
    )

    return parser


def decode_one_batch(
    params: AttributeDict,
    model: nn.Module,
    lexicon: Lexicon,
    batch: dict,
    decoding_graph: Optional[k2.Fsa] = None,
) -> Dict[str, List[List[str]]]:
    """Decode one batch and return the result in a dict. The dict has the
    following format:

    - key: It indicates the setting used for decoding. For example,
           if greedy_search is used, it would be "greedy_search".
           If beam search with a beam size of 7 is used, it would be
           "beam_7".
    - value: It contains the decoding result. `len(value)` equals to
             batch size. `value[i]` is the decoding result for the i-th
             utterance in the given batch.
    Args:
      params:
        It's the return value of :func:`get_params`.
      model:
        The neural model.
      lexicon:
        It contains the token table for mapping token IDs to characters.
      batch:
        It is the return value from iterating
        `lhotse.dataset.K2SpeechRecognitionDataset`. See its documentation
        for the format of the `batch`.
      decoding_graph:
        The decoding graph. Can be either a `k2.trivial_graph` or HLG. Used
        only when --decoding-method is fast_beam_search.
    Returns:
      Return the decoding result. See above description for the format of
      the returned dict.
    """
    device = model.device
    feature = batch["inputs"]
    assert feature.ndim == 3

    feature = feature.to(device)
    # at entry, feature is (N, T, C)

    supervisions = batch["supervisions"]
    feature_lens = supervisions["num_frames"].to(device)

    encoder_out, encoder_out_lens = model.encoder(
        x=feature, x_lens=feature_lens
    )
    hyps = []

    if params.decoding_method == "fast_beam_search":
        hyp_tokens = fast_beam_search(
            model=model,
            decoding_graph=decoding_graph,
            encoder_out=encoder_out,
            encoder_out_lens=encoder_out_lens,
            beam=params.beam,
            max_contexts=params.max_contexts,
            max_states=params.max_states,
        )
        for i in range(encoder_out.size(0)):
            hyps.append([lexicon.token_table[idx] for idx in hyp_tokens[i]])
    elif (
        params.decoding_method == "greedy_search"
        and params.max_sym_per_frame == 1
    ):
        hyp_tokens = greedy_search_batch(
            model=model,
            encoder_out=encoder_out,
        )
        for i in range(encoder_out.size(0)):
            hyps.append([lexicon.token_table[idx] for idx in hyp_tokens[i]])
    elif params.decoding_method == "modified_beam_search":
        hyp_tokens = modified_beam_search(
            model=model,
            encoder_out=encoder_out,
            beam=params.beam_size,
        )
        for i in range(encoder_out.size(0)):
            hyps.append([lexicon.token_table[idx] for idx in hyp_tokens[i]])
    else:
        batch_size = encoder_out.size(0)

        for i in range(batch_size):
            # fmt: off
            encoder_out_i = encoder_out[i:i+1, :encoder_out_lens[i]]
            # fmt: on
            if params.decoding_method == "greedy_search":
                hyp = greedy_search(
                    model=model,
                    encoder_out=encoder_out_i,
                    max_sym_per_frame=params.max_sym_per_frame,
                )
            elif params.decoding_method == "beam_search":
                hyp = beam_search(
                    model=model,
                    encoder_out=encoder_out_i,
                    beam=params.beam_size,
                )
            else:
                raise ValueError(
                    f"Unsupported decoding method: {params.decoding_method}"
                )
            hyps.append([lexicon.token_table[idx] for idx in hyp])

    if params.decoding_method == "greedy_search":
        return {"greedy_search": hyps}
    elif params.decoding_method == "fast_beam_search":
        return {
            (
                f"beam_{params.beam}_"
                f"max_contexts_{params.max_contexts}_"
                f"max_states_{params.max_states}"
            ): hyps
        }
    else:
        return {f"beam_size_{params.beam_size}": hyps}


def decode_dataset(
    dl: torch.utils.data.DataLoader,
    params: AttributeDict,
    model: nn.Module,
    lexicon: Lexicon,
    decoding_graph: Optional[k2.Fsa] = None,
) -> Dict[str, List[Tuple[List[str], List[str]]]]:
    """Decode dataset.

    Args:
      dl:
        PyTorch's dataloader containing the dataset to decode.
      params:
        It is returned by :func:`get_params`.
      model:
        The neural model.
      lexicon:
        It contains the token table.
      decoding_graph:
        The decoding graph. Can be either a `k2.trivial_graph` or HLG. Used
        only when --decoding-method is fast_beam_search.
    Returns:
      Return a dict, whose key may be "greedy_search" if greedy search
      is used, or it may be "beam_7" if a beam size of 7 is used.
      Its value is a list of tuples. Each tuple contains two elements:
      The first is the reference transcript, and the second is the
      predicted result.
    """
    num_cuts = 0

    try:
        num_batches = len(dl)
    except TypeError:
        num_batches = "?"

    if params.decoding_method == "greedy_search":
        log_interval = 100
    else:
        log_interval = 2

    results = defaultdict(list)
    for batch_idx, batch in enumerate(dl):
        texts = batch["supervisions"]["text"]
        texts = [list(str(text)) for text in texts]

        hyps_dict = decode_one_batch(
            params=params,
            model=model,
            lexicon=lexicon,
            decoding_graph=decoding_graph,
            batch=batch,
        )

        for name, hyps in hyps_dict.items():
            this_batch = []
            assert len(hyps) == len(texts)
            for hyp_words, ref_text in zip(hyps, texts):
                this_batch.append((ref_text, hyp_words))

            results[name].extend(this_batch)

        num_cuts += len(texts)

        if batch_idx % log_interval == 0:
            batch_str = f"{batch_idx}/{num_batches}"

            logging.info(
                f"batch {batch_str}, cuts processed until now is {num_cuts}"
            )
    return results


def save_results(
    params: AttributeDict,
    test_set_name: str,
    results_dict: Dict[str, List[Tuple[List[str], List[str]]]],
):
    test_set_wers = dict()
    for key, results in results_dict.items():
        recog_path = (
            params.res_dir / f"recogs-{test_set_name}-{key}-{params.suffix}.txt"
        )
        store_transcripts(filename=recog_path, texts=results)
        logging.info(f"The transcripts are stored in {recog_path}")

        # The following prints out WERs, per-word error statistics and aligned
        # ref/hyp pairs.
        errs_filename = (
            params.res_dir / f"errs-{test_set_name}-{key}-{params.suffix}.txt"
        )
        with open(errs_filename, "w") as f:
            wer = write_error_stats(
                f, f"{test_set_name}-{key}", results, enable_log=True
            )
            test_set_wers[key] = wer

        logging.info("Wrote detailed error stats to {}".format(errs_filename))

    test_set_wers = sorted(test_set_wers.items(), key=lambda x: x[1])
    errs_info = (
        params.res_dir
        / f"wer-summary-{test_set_name}-{key}-{params.suffix}.txt"
    )
    with open(errs_info, "w") as f:
        print("settings\tWER", file=f)
        for key, val in test_set_wers:
            print("{}\t{}".format(key, val), file=f)

    s = "\nFor {}, WER of different settings are:\n".format(test_set_name)
    note = "\tbest for {}".format(test_set_name)
    for key, val in test_set_wers:
        s += "{}\t{}{}\n".format(key, val, note)
        note = ""
    logging.info(s)


@torch.no_grad()
def main():
    parser = get_parser()
    WenetSpeechAsrDataModule.add_arguments(parser)
    args = parser.parse_args()
    args.exp_dir = Path(args.exp_dir)

    params = get_params()
    params.update(vars(args))

    assert params.decoding_method in (
        "greedy_search",
        "beam_search",
        "fast_beam_search",
        "modified_beam_search",
    )
    params.res_dir = params.exp_dir / params.decoding_method

    params.suffix = f"epoch-{params.epoch}-avg-{params.avg}"
    if "fast_beam_search" in params.decoding_method:
        params.suffix += f"-beam-{params.beam}"
        params.suffix += f"-max-contexts-{params.max_contexts}"
        params.suffix += f"-max-states-{params.max_states}"
    elif "beam_search" in params.decoding_method:
        params.suffix += f"-beam-{params.beam_size}"
    else:
        params.suffix += f"-context-{params.context_size}"
        params.suffix += f"-max-sym-per-frame-{params.max_sym_per_frame}"

    setup_logger(f"{params.res_dir}/log-decode-{params.suffix}")
    logging.info("Decoding started")

    device = torch.device("cpu")
    if torch.cuda.is_available():
        device = torch.device("cuda", 0)

    logging.info(f"Device: {device}")

    lexicon = Lexicon(params.lang_dir)
    params.blank_id = lexicon.token_table["<blk>"]
    params.vocab_size = max(lexicon.tokens) + 1

    logging.info(params)

    logging.info("About to create model")
    model = get_transducer_model(params)

    if params.avg_last_n > 0:
        filenames = find_checkpoints(params.exp_dir)[: params.avg_last_n]
        logging.info(f"averaging {filenames}")
        model.to(device)
        model.load_state_dict(average_checkpoints(filenames, device=device))
    elif params.avg == 1:
        load_checkpoint(f"{params.exp_dir}/epoch-{params.epoch}.pt", model)
    elif params.batch is not None:
        filename = f"{params.exp_dir}/checkpoint-{params.batch}.pt"
        logging.info(f"averaging {filename}")
        model.to(device)
        model.load_state_dict(average_checkpoints([filename], device=device))
    else:
        start = params.epoch - params.avg + 1
        filenames = []
        for i in range(start, params.epoch + 1):
            if i >= 0:
                filenames.append(f"{params.exp_dir}/epoch-{i}.pt")
        logging.info(f"averaging {filenames}")
        model.to(device)
        model.load_state_dict(average_checkpoints(filenames, device=device))

    model.to(device)
    model.eval()
    model.device = device

    if params.decoding_method == "fast_beam_search":
        decoding_graph = k2.trivial_graph(params.vocab_size - 1, device=device)
    else:
        decoding_graph = None

    num_param = sum([p.numel() for p in model.parameters()])
    logging.info(f"Number of model parameters: {num_param}")

    # Note: please run "pip install webdataset==0.1.103"
    # to install webdataset.
    import glob
    import os

    from lhotse import CutSet
    from lhotse.dataset.webdataset import export_to_webdataset

    wenetspeech = WenetSpeechAsrDataModule(args)

    dev = "dev"
    test_net = "test_net"
    test_meeting = "test_meeting"

    if not os.path.exists(f"{dev}/shared-0.tar"):
        os.makedirs(dev)
        dev_cuts = wenetspeech.valid_cuts()
        export_to_webdataset(
            dev_cuts,
            output_path=f"{dev}/shared-%d.tar",
            shard_size=300,
        )

    if not os.path.exists(f"{test_net}/shared-0.tar"):
        os.makedirs(test_net)
        test_net_cuts = wenetspeech.test_net_cuts()
        export_to_webdataset(
            test_net_cuts,
            output_path=f"{test_net}/shared-%d.tar",
            shard_size=300,
        )

    if not os.path.exists(f"{test_meeting}/shared-0.tar"):
        os.makedirs(test_meeting)
        test_meeting_cuts = wenetspeech.test_meeting_cuts()
        export_to_webdataset(
            test_meeting_cuts,
            output_path=f"{test_meeting}/shared-%d.tar",
            shard_size=300,
        )

    dev_shards = [
        str(path)
        for path in sorted(glob.glob(os.path.join(dev, "shared-*.tar")))
    ]
    cuts_dev_webdataset = CutSet.from_webdataset(
        dev_shards,
        split_by_worker=True,
        split_by_node=True,
        shuffle_shards=True,
    )

    test_net_shards = [
        str(path)
        for path in sorted(glob.glob(os.path.join(test_net, "shared-*.tar")))
    ]
    cuts_test_net_webdataset = CutSet.from_webdataset(
        test_net_shards,
        split_by_worker=True,
        split_by_node=True,
        shuffle_shards=True,
    )

    test_meeting_shards = [
        str(path)
        for path in sorted(
            glob.glob(os.path.join(test_meeting, "shared-*.tar"))
        )
    ]
    cuts_test_meeting_webdataset = CutSet.from_webdataset(
        test_meeting_shards,
        split_by_worker=True,
        split_by_node=True,
        shuffle_shards=True,
    )

    dev_dl = wenetspeech.valid_dataloaders(cuts_dev_webdataset)
    test_net_dl = wenetspeech.test_dataloaders(cuts_test_net_webdataset)
    test_meeting_dl = wenetspeech.test_dataloaders(cuts_test_meeting_webdataset)

    test_sets = ["DEV", "TEST_NET", "TEST_MEETING"]
    test_dls = [dev_dl, test_net_dl, test_meeting_dl]

    for test_set, test_dl in zip(test_sets, test_dls):
        results_dict = decode_dataset(
            dl=test_dl,
            params=params,
            model=model,
            lexicon=lexicon,
            decoding_graph=decoding_graph,
        )
        save_results(
            params=params,
            test_set_name=test_set,
            results_dict=results_dict,
        )

    logging.info("Done!")


if __name__ == "__main__":
    main()
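
decode_dataset above returns, per decoding key, a list of (reference, hypothesis) pairs, each a list of characters; write_error_stats turns these into the reported error rates. For intuition, a minimal self-contained sketch of the underlying character error rate computation (plain Levenshtein distance, not icefall's implementation):

# Minimal sketch: character error rate over (ref, hyp) pairs shaped
# like decode_dataset's output. Plain Levenshtein distance, not
# icefall's write_error_stats.
from typing import List, Tuple


def edit_distance(ref: List[str], hyp: List[str]) -> int:
    # Classic dynamic-programming Levenshtein distance over one row.
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # deletion
                dp[j - 1] + 1,    # insertion
                prev + (r != h),  # substitution (free if equal)
            )
    return dp[-1]


def cer(results: List[Tuple[List[str], List[str]]]) -> float:
    errors = sum(edit_distance(ref, hyp) for ref, hyp in results)
    total = sum(len(ref) for ref, _ in results)
    return errors / max(total, 1)


print(cer([(list("今天天气"), list("今天天器"))]))  # 0.25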
1
egs/wenetspeech/ASR/pruned_transducer_stateless2/decoder.py
Symbolic link
@ -0,0 +1 @@
../../../librispeech/ASR/pruned_transducer_stateless2/decoder.py

1
egs/wenetspeech/ASR/pruned_transducer_stateless2/encoder_interface.py
Symbolic link
@ -0,0 +1 @@
../../../librispeech/ASR/transducer_stateless/encoder_interface.py

178
egs/wenetspeech/ASR/pruned_transducer_stateless2/export.py
Normal file
@ -0,0 +1,178 @@
# Copyright 2021 Xiaomi Corporation (Author: Fangjun Kuang)
#
# See ../../../../LICENSE for clarification regarding multiple authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# This script converts several saved checkpoints
# to a single one using model averaging.
"""
Usage:
./pruned_transducer_stateless2/export.py \
  --exp-dir ./pruned_transducer_stateless2/exp \
  --lang-dir data/lang_char \
  --epoch 10 \
  --avg 2

It will generate a file exp_dir/pretrained.pt.

To use the generated file with `pruned_transducer_stateless2/decode.py`,
you can do:

    cd /path/to/exp_dir
    ln -s pretrained.pt epoch-9999.pt

    cd /path/to/egs/wenetspeech/ASR
    ./pruned_transducer_stateless2/decode.py \
        --exp-dir ./pruned_transducer_stateless2/exp \
        --epoch 9999 \
        --avg 1 \
        --max-duration 100 \
        --lang-dir data/lang_char
"""

import argparse
import logging
from pathlib import Path

import torch
from train import get_params, get_transducer_model

from icefall.checkpoint import average_checkpoints, load_checkpoint
from icefall.lexicon import Lexicon
from icefall.utils import str2bool


def get_parser():
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter
    )

    parser.add_argument(
        "--epoch",
        type=int,
        default=28,
        help="It specifies the checkpoint to use for decoding. "
        "Note: Epoch counts from 0.",
    )

    parser.add_argument(
        "--avg",
        type=int,
        default=15,
        help="Number of checkpoints to average. Automatically select "
        "consecutive checkpoints before the checkpoint specified by "
        "'--epoch'. ",
    )

    parser.add_argument(
        "--exp-dir",
        type=str,
        default="pruned_transducer_stateless2/exp",
        help="""It specifies the directory where all training related
        files, e.g., checkpoints, log, etc, are saved
        """,
    )

    parser.add_argument(
        "--lang-dir",
        type=str,
        default="data/lang_char",
        help="The lang dir",
    )

    parser.add_argument(
        "--jit",
        type=str2bool,
        default=False,
        help="""True to save a model after applying torch.jit.script.
        """,
    )

    parser.add_argument(
        "--context-size",
        type=int,
        default=2,
        help="The context size in the decoder. 1 means bigram; "
        "2 means tri-gram",
    )

    return parser


def main():
    args = get_parser().parse_args()
    args.exp_dir = Path(args.exp_dir)

    assert args.jit is False, "Support for torchscript will be added later"

    params = get_params()
    params.update(vars(args))

    device = torch.device("cpu")
    if torch.cuda.is_available():
        device = torch.device("cuda", 0)

    logging.info(f"device: {device}")

    lexicon = Lexicon(params.lang_dir)

    params.blank_id = 0
    params.vocab_size = max(lexicon.tokens) + 1

    logging.info(params)

    logging.info("About to create model")
    model = get_transducer_model(params)

    model.to(device)

    if params.avg == 1:
        load_checkpoint(f"{params.exp_dir}/epoch-{params.epoch}.pt", model)
    else:
        start = params.epoch - params.avg + 1
        filenames = []
        for i in range(start, params.epoch + 1):
            if i >= 0:
                filenames.append(f"{params.exp_dir}/epoch-{i}.pt")
        logging.info(f"averaging {filenames}")
        model.to(device)
        model.load_state_dict(average_checkpoints(filenames, device=device))

    model.to("cpu")
    model.eval()

    if params.jit:
        logging.info("Using torch.jit.script")
        model = torch.jit.script(model)
        filename = params.exp_dir / "cpu_jit.pt"
        model.save(str(filename))
        logging.info(f"Saved to {filename}")
    else:
        logging.info("Not using torch.jit.script")
        # Save it using a format so that it can be loaded
        # by :func:`load_checkpoint`
        filename = params.exp_dir / "pretrained.pt"
        torch.save({"model": model.state_dict()}, str(filename))
        logging.info(f"Saved to {filename}")


if __name__ == "__main__":
    formatter = (
        "%(asctime)s %(levelname)s [%(filename)s:%(lineno)d] %(message)s"
    )

    logging.basicConfig(format=formatter, level=logging.INFO)
    main()
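
export.py stores only the model weights under the "model" key, which is the contract that pretrained.py below relies on when it calls torch.load and load_state_dict. A minimal sketch of that round trip, using a toy module in place of the real transducer:

# Minimal sketch of the export/load contract; the Linear layer is a
# hypothetical stand-in for the real transducer model.
import torch

model = torch.nn.Linear(4, 4)
torch.save({"model": model.state_dict()}, "pretrained.pt")

checkpoint = torch.load("pretrained.pt", map_location="cpu")
model.load_state_dict(checkpoint["model"])
model.eval()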
1
egs/wenetspeech/ASR/pruned_transducer_stateless2/joiner.py
Symbolic link
@ -0,0 +1 @@
../../../librispeech/ASR/pruned_transducer_stateless2/joiner.py

1
egs/wenetspeech/ASR/pruned_transducer_stateless2/model.py
Symbolic link
@ -0,0 +1 @@
../../../librispeech/ASR/pruned_transducer_stateless2/model.py

1
egs/wenetspeech/ASR/pruned_transducer_stateless2/optim.py
Symbolic link
@ -0,0 +1 @@
../../../librispeech/ASR/pruned_transducer_stateless2/optim.py

342
egs/wenetspeech/ASR/pruned_transducer_stateless2/pretrained.py
Normal file
@ -0,0 +1,342 @@
|
|||||||
|
#!/usr/bin/env python3
|
||||||
|
# Copyright 2021 Xiaomi Corp. (authors: Fangjun Kuang)
|
||||||
|
# 2022 Xiaomi Crop. (authors: Mingshuang Luo)
|
||||||
|
#
|
||||||
|
# See ../../../../LICENSE for clarification regarding multiple authors
|
||||||
|
#
|
||||||
|
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||||
|
# you may not use this file except in compliance with the License.
|
||||||
|
# You may obtain a copy of the License at
|
||||||
|
#
|
||||||
|
# http://www.apache.org/licenses/LICENSE-2.0
|
||||||
|
#
|
||||||
|
# Unless required by applicable law or agreed to in writing, software
|
||||||
|
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||||
|
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||||
|
# See the License for the specific language governing permissions and
|
||||||
|
# limitations under the License.
|
||||||
|
"""
|
||||||
|
Usage:
|
||||||
|
(1) greedy search
|
||||||
|
./pruned_transducer_stateless2/pretrained.py \
|
||||||
|
--checkpoint ./pruned_transducer_stateless2/exp/pretrained.pt \
|
||||||
|
--lang-dir ./data/lang_char \
|
||||||
|
--method greedy_search \
|
||||||
|
--max-sym-per-frame 1 \
|
||||||
|
/path/to/foo.wav \
|
||||||
|
/path/to/bar.wav
|
||||||
|
(2) modified beam search
|
||||||
|
./pruned_transducer_stateless2/pretrained.py \
|
||||||
|
--checkpoint ./pruned_transducer_stateless2/exp/pretrained.pt \
|
||||||
|
--lang-dir ./data/lang_char \
|
||||||
|
--method modified_beam_search \
|
||||||
|
--beam-size 4 \
|
||||||
|
/path/to/foo.wav \
|
||||||
|
/path/to/bar.wav
|
||||||
|
(3) fast beam search
|
||||||
|
./pruned_transducer_stateless2/pretrained.py \
|
||||||
|
--checkpoint ./pruned_transducer_stateless/exp/pretrained.pt \
|
||||||
|
--lang-dir ./data/lang_char \
|
||||||
|
--method fast_beam_search \
|
||||||
|
--beam 4 \
|
||||||
|
--max-contexts 4 \
|
||||||
|
--max-states 8 \
|
||||||
|
/path/to/foo.wav \
|
||||||
|
/path/to/bar.wav
|
||||||
|
You can also use `./pruned_transducer_stateless2/exp/epoch-xx.pt`.
|
||||||
|
Note: ./pruned_transducer_stateless2/exp/pretrained.pt is generated by
|
||||||
|
./pruned_transducer_stateless2/export.py
|
||||||
|
"""
|
||||||
|
|
||||||
|
|
||||||
|
import argparse
|
||||||
|
import logging
|
||||||
|
import math
|
||||||
|
from typing import List
|
||||||
|
|
||||||
|
import k2
|
||||||
|
import kaldifeat
|
||||||
|
import torch
|
||||||
|
import torchaudio
|
||||||
|
from beam_search import (
|
||||||
|
beam_search,
|
||||||
|
fast_beam_search_one_best,
|
||||||
|
greedy_search,
|
||||||
|
greedy_search_batch,
|
||||||
|
modified_beam_search,
|
||||||
|
)
|
||||||
|
from torch.nn.utils.rnn import pad_sequence
|
||||||
|
from train import get_params, get_transducer_model
|
||||||
|
|
||||||
|
from icefall.lexicon import Lexicon
|
||||||
|
|
||||||
|
|
||||||
|
def get_parser():
|
||||||
|
parser = argparse.ArgumentParser(
|
||||||
|
formatter_class=argparse.ArgumentDefaultsHelpFormatter
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--checkpoint",
|
||||||
|
type=str,
|
||||||
|
required=True,
|
||||||
|
help="Path to the checkpoint. "
|
||||||
|
"The checkpoint is assumed to be saved by "
|
||||||
|
"icefall.checkpoint.save_checkpoint().",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--lang-dir",
|
||||||
|
type=str,
|
||||||
|
help="""Path to lang.
|
||||||
|
""",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--decoding-method",
|
||||||
|
type=str,
|
||||||
|
default="greedy_search",
|
||||||
|
help="""Possible values are:
|
||||||
|
- greedy_search
|
||||||
|
- modified_beam_search
|
||||||
|
- fast_beam_search
|
||||||
|
""",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"sound_files",
|
||||||
|
type=str,
|
||||||
|
nargs="+",
|
||||||
|
help="The input sound file(s) to transcribe. "
|
||||||
|
"Supported formats are those supported by torchaudio.load(). "
|
||||||
|
"For example, wav and flac are supported. "
|
||||||
|
"The sample rate has to be 16kHz.",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--sample-rate",
|
||||||
|
type=int,
|
||||||
|
default=48000,
|
||||||
|
help="The sample rate of the input sound file",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--beam-size",
|
||||||
|
type=int,
|
||||||
|
default=4,
|
||||||
|
help="Used only when --method is beam_search and modified_beam_search ",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--beam",
|
||||||
|
type=float,
|
||||||
|
default=4,
|
||||||
|
help="""A floating point value to calculate the cutoff score during beam
|
||||||
|
search (i.e., `cutoff = max-score - beam`), which is the same as the
|
||||||
|
`beam` in Kaldi.
|
||||||
|
Used only when --decoding-method is fast_beam_search""",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--max-contexts",
|
||||||
|
type=int,
|
||||||
|
default=4,
|
||||||
|
help="""Used only when --decoding-method is
|
||||||
|
fast_beam_search""",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--max-states",
|
||||||
|
type=int,
|
||||||
|
default=8,
|
||||||
|
help="""Used only when --decoding-method is
|
||||||
|
fast_beam_search""",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--context-size",
|
||||||
|
type=int,
|
||||||
|
default=2,
|
||||||
|
help="The context size in the decoder. 1 means bigram; "
|
||||||
|
"2 means tri-gram",
|
||||||
|
)
|
||||||
|
|
||||||
|
parser.add_argument(
|
||||||
|
"--max-sym-per-frame",
|
||||||
|
type=int,
|
||||||
|
default=1,
|
||||||
|
help="""Maximum number of symbols per frame. Used only when
|
||||||
|
--method is greedy_search.
|
||||||
|
""",
|
||||||
|
)
|
||||||
|
|
||||||
|
return parser
|
||||||
|
|
||||||
|
|
||||||
|
def read_sound_files(
|
||||||
|
filenames: List[str], expected_sample_rate: float
|
||||||
|
) -> List[torch.Tensor]:
|
||||||
|
"""Read a list of sound files into a list 1-D float32 torch tensors.
|
||||||
|
Args:
|
||||||
|
filenames:
|
||||||
|
A list of sound filenames.
|
||||||
|
expected_sample_rate:
|
||||||
|
The expected sample rate of the sound files.
|
||||||
|
Returns:
|
||||||
|
Return a list of 1-D float32 torch tensors.
|
||||||
|
"""
|
||||||
|
ans = []
|
||||||
|
for f in filenames:
|
||||||
|
wave, sample_rate = torchaudio.load(f)
|
||||||
|
assert sample_rate == expected_sample_rate, (
|
||||||
|
f"expected sample rate: {expected_sample_rate}. "
|
||||||
|
f"Given: {sample_rate}"
|
||||||
|
)
|
||||||
|
# We use only the first channel
|
||||||
|
ans.append(wave[0])
|
||||||
|
return ans
|
||||||
|
|
||||||
|
|
||||||
|
@torch.no_grad()
|
||||||
|
def main():
|
||||||
|
parser = get_parser()
|
||||||
|
args = parser.parse_args()
|
||||||
|
|
||||||
|
params = get_params()
|
||||||
|
|
||||||
|
params.update(vars(args))
|
||||||
|
|
||||||
|
lexicon = Lexicon(params.lang_dir)
|
||||||
|
params.blank_id = lexicon.token_table["<blk>"]
|
||||||
|
params.vocab_size = max(lexicon.tokens) + 1
|
||||||
|
|
||||||
|
logging.info(f"{params}")
|
||||||
|
|
||||||
|
device = torch.device("cpu")
|
||||||
|
if torch.cuda.is_available():
|
||||||
|
device = torch.device("cuda", 0)
|
||||||
|
|
||||||
|
logging.info(f"device: {device}")
|
||||||
|
|
||||||
|
logging.info("Creating model")
|
||||||
|
model = get_transducer_model(params)
|
||||||
|
|
||||||
|
checkpoint = torch.load(args.checkpoint, map_location="cpu")
|
||||||
|
model.load_state_dict(checkpoint["model"], strict=False)
|
||||||
|
model.to(device)
|
||||||
|
model.eval()
|
||||||
|
model.device = device
|
||||||
|
|
||||||
|
if params.decoding_method == "fast_beam_search":
|
||||||
|
decoding_graph = k2.trivial_graph(params.vocab_size - 1, device=device)
|
||||||
|
else:
|
||||||
|
decoding_graph = None
|
||||||
|
|
||||||
|
logging.info("Constructing Fbank computer")
|
||||||
|
opts = kaldifeat.FbankOptions()
|
||||||
|
opts.device = device
|
||||||
|
opts.frame_opts.dither = 0
|
||||||
|
opts.frame_opts.snip_edges = False
|
||||||
|
opts.frame_opts.samp_freq = params.sample_rate
|
||||||
|
opts.mel_opts.num_bins = params.feature_dim
|
||||||
|
|
||||||
|
fbank = kaldifeat.Fbank(opts)
|
||||||
|
|
||||||
|
logging.info(f"Reading sound files: {params.sound_files}")
|
||||||
|
waves = read_sound_files(
|
||||||
|
filenames=params.sound_files, expected_sample_rate=params.sample_rate
|
||||||
|
)
|
||||||
|
waves = [w.to(device) for w in waves]
|
||||||
|
|
||||||
|
logging.info("Decoding started")
|
||||||
|
features = fbank(waves)
|
||||||
|
feature_lengths = [f.size(0) for f in features]
|
||||||
|
|
||||||
|
features = pad_sequence(
|
||||||
|
features, batch_first=True, padding_value=math.log(1e-10)
|
||||||
|
)
|
||||||
|
|
||||||
|
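    # NOTE (annotation, not part of the original diff): utterances of different
    # lengths are padded to a common frame count; log(1e-10) is used because
    # the features are log-mel energies, so padding with the log of a tiny
    # energy corresponds roughly to silence rather than to zero log-energy.
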
    feature_lengths = torch.tensor(feature_lengths, device=device)

    with torch.no_grad():
        encoder_out, encoder_out_lens = model.encoder(
            x=features, x_lens=feature_lengths
        )

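    # NOTE (annotation, not part of the original diff): encoder_out is expected
    # to have shape (N, T', encoder_dim) after subsampling, and encoder_out_lens
    # gives the number of valid frames per utterance; both are consumed by the
    # decoding functions below.
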
    hyps = []
    msg = f"Using {params.decoding_method}"
    logging.info(msg)

    if params.decoding_method == "fast_beam_search":
        hyp_tokens = fast_beam_search_one_best(
            model=model,
            decoding_graph=decoding_graph,
            encoder_out=encoder_out,
            encoder_out_lens=encoder_out_lens,
            beam=params.beam,
            max_contexts=params.max_contexts,
            max_states=params.max_states,
        )
        for i in range(encoder_out.size(0)):
            hyps.append([lexicon.token_table[idx] for idx in hyp_tokens[i]])
    elif (
        params.decoding_method == "greedy_search"
        and params.max_sym_per_frame == 1
    ):
        hyp_tokens = greedy_search_batch(
            model=model,
            encoder_out=encoder_out,
            encoder_out_lens=encoder_out_lens,
        )
        for i in range(encoder_out.size(0)):
            hyps.append([lexicon.token_table[idx] for idx in hyp_tokens[i]])
    elif params.decoding_method == "modified_beam_search":
        hyp_tokens = modified_beam_search(
            model=model,
            encoder_out=encoder_out,
            encoder_out_lens=encoder_out_lens,
            beam=params.beam_size,
        )
        for i in range(encoder_out.size(0)):
            hyps.append([lexicon.token_table[idx] for idx in hyp_tokens[i]])
    else:
        batch_size = encoder_out.size(0)

        for i in range(batch_size):
            # fmt: off
            encoder_out_i = encoder_out[i:i+1, :encoder_out_lens[i]]
            # fmt: on
            if params.decoding_method == "greedy_search":
                hyp = greedy_search(
                    model=model,
                    encoder_out=encoder_out_i,
                    max_sym_per_frame=params.max_sym_per_frame,
                )
            elif params.decoding_method == "beam_search":
                hyp = beam_search(
                    model=model,
                    encoder_out=encoder_out_i,
                    beam=params.beam_size,
                )
            else:
                raise ValueError(
                    f"Unsupported decoding method: {params.decoding_method}"
                )
            hyps.append([lexicon.token_table[idx] for idx in hyp])

    s = "\n"
    for filename, hyp in zip(params.sound_files, hyps):
        words = " ".join(hyp)
        s += f"{filename}:\n{words}\n\n"
    logging.info(s)

    logging.info("Decoding Done")


if __name__ == "__main__":
    formatter = (
        "%(asctime)s %(levelname)s [%(filename)s:%(lineno)d] %(message)s"
    )

    logging.basicConfig(format=formatter, level=logging.INFO)
    main()

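The script above batches variable-length log-mel features and then maps predicted token ids back to characters via the lexicon. A minimal, self-contained sketch of those two steps (torch only; the shapes and the toy token table are made up for illustration and are not part of the recipe):

    import math

    import torch
    from torch.nn.utils.rnn import pad_sequence

    # Two utterances with different frame counts, 80-dim log-mel features.
    feats = [torch.randn(120, 80), torch.randn(95, 80)]
    feature_lengths = torch.tensor([f.size(0) for f in feats])

    # Pad to a common length with log(1e-10), as done before the encoder call.
    features = pad_sequence(feats, batch_first=True, padding_value=math.log(1e-10))
    print(features.shape, feature_lengths)  # torch.Size([2, 120, 80]) tensor([120,  95])

    # Toy id -> character table standing in for lexicon.token_table.
    token_table = {1: "你", 2: "好"}
    hyp_tokens = [[1, 2], [2, 1]]
    hyps = [[token_table[i] for i in hyp] for hyp in hyp_tokens]
    print(hyps)  # [['你', '好'], ['好', '你']]
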
1
egs/wenetspeech/ASR/pruned_transducer_stateless2/scaling.py
Symbolic link
@ -0,0 +1 @@
../../../librispeech/ASR/pruned_transducer_stateless2/scaling.py

1025
egs/wenetspeech/ASR/pruned_transducer_stateless2/train.py
Normal file
File diff suppressed because it is too large.

1
egs/wenetspeech/ASR/shared
Symbolic link
@ -0,0 +1 @@
../../librispeech/ASR/shared