#!/usr/bin/env bash
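
# Exit on the first error (-e), treat references to unset variables as
# errors (-u), and make a pipeline fail if any command in it fails
# (-o pipefail).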
set -eou pipefail

nj=15
stage=0
stop_stage=100

# Split the XL subset into a number of pieces (about 2000).
# This is to avoid OOM during feature extraction.
num_per_split=50

# We assume dl_dir (download dir) contains the following
# directories and files. If not, they will be downloaded
# by this script automatically.
#
#  - $dl_dir/GigaSpeech
#      You can find audio, dict, GigaSpeech.json inside it.
#      You can apply for the download credentials by following
#      https://github.com/SpeechColab/GigaSpeech#download
#
#  - $dl_dir/lm
#      This directory contains the language model downloaded from
#      https://huggingface.co/wgb14/gigaspeech_lm
#
#        - 3gram_pruned_1e7.arpa.gz
#        - 4gram.arpa.gz
#        - lexicon.txt
#
#  - $dl_dir/musan
#      This directory contains the following directories downloaded from
#      http://www.openslr.org/17/
#
#        - music
#        - noise
#        - speech
dl_dir=$PWD/download
. shared/parse_options.sh || exit 1
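
# parse_options.sh lets you override the variables defined above from the
# command line, e.g. (assuming this script is saved as prepare.sh):
#
#   ./prepare.sh --stage 3 --stop-stage 5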

# vocab size for sentence piece models.
# It will generate data/lang_bpe_xxx,
# data/lang_bpe_yyy if the array contains xxx, yyy
vocab_sizes=(
  500
)

# All files generated by this script are saved in "data".
# You can safely remove "data" and rerun this script to regenerate it.
mkdir -p data

log() {
  # This function is from espnet
  local fname=${BASH_SOURCE[1]##*/}
  echo -e "$(date '+%Y-%m-%d %H:%M:%S') (${fname}:${BASH_LINENO[0]}:${FUNCNAME[1]}) $*"
}

log "dl_dir: $dl_dir"

if [ $stage -le -1 ] && [ $stop_stage -ge -1 ]; then
  log "Stage -1: Download LM"
  # We assume that you have installed git-lfs; if not, you can install it
  # using: `sudo apt-get install git-lfs && git-lfs install`
  [ ! -e $dl_dir/lm ] && mkdir -p $dl_dir/lm
  git clone https://huggingface.co/wgb14/gigaspeech_lm $dl_dir/lm
  gunzip -c $dl_dir/lm/3gram_pruned_1e7.arpa.gz > $dl_dir/lm/3gram_pruned_1e7.arpa
  gunzip -c $dl_dir/lm/4gram.arpa.gz > $dl_dir/lm/4gram.arpa
fi
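
# Note: with the default stage=0, Stage -1 is skipped; run it explicitly,
# e.g. ./prepare.sh --stage -1 --stop-stage -1 (script name assumed),
# to download the LM.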

if [ $stage -le 0 ] && [ $stop_stage -ge 0 ]; then
  log "Stage 0: Download data"

  [ ! -e $dl_dir/GigaSpeech ] && mkdir -p $dl_dir/GigaSpeech

  # If you have pre-downloaded it to /path/to/GigaSpeech,
  # you can create a symlink
  #
  #   ln -sfv /path/to/GigaSpeech $dl_dir/GigaSpeech
  #
  if [ ! -d $dl_dir/GigaSpeech/audio ] && [ ! -f $dl_dir/GigaSpeech/GigaSpeech.json ]; then
    # Check credentials.
    if [ ! -f $dl_dir/password ]; then
      echo -n "$0: Please apply for the download credentials by following "
      echo -n "https://github.com/SpeechColab/GigaSpeech#download"
      echo " and save it to $dl_dir/password."
      exit 1;
    fi
    PASSWORD=$(cat $dl_dir/password 2>/dev/null)
    if [ -z "$PASSWORD" ]; then
      echo "$0: Error, $dl_dir/password is empty."
      exit 1;
    fi
    PASSWORD_MD5=$(echo $PASSWORD | md5sum | cut -d ' ' -f 1)
    if [[ $PASSWORD_MD5 != "dfbf0cde1a3ce23749d8d81e492741b8" ]]; then
      echo "$0: Error, invalid $dl_dir/password."
      exit 1;
    fi
    # Download XL, DEV and TEST sets by default.
    lhotse download gigaspeech --subset auto --host tsinghua \
      $dl_dir/password $dl_dir/GigaSpeech
  fi

  # If you have pre-downloaded it to /path/to/musan,
  # you can create a symlink
  #
  #   ln -sfv /path/to/musan $dl_dir/
  #
  if [ ! -d $dl_dir/musan ]; then
    lhotse download musan $dl_dir
  fi
fi

if [ $stage -le 1 ] && [ $stop_stage -ge 1 ]; then
  log "Stage 1: Prepare GigaSpeech manifest (may take 15 minutes)"
  # We assume that you have downloaded the GigaSpeech corpus
  # to $dl_dir/GigaSpeech
  mkdir -p data/manifests
  lhotse prepare gigaspeech --subset auto -j $nj \
    $dl_dir/GigaSpeech data/manifests
fi
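
# After this stage, data/manifests should contain the recording and
# supervision manifests for each subset, e.g.
# gigaspeech_recordings_XL.jsonl.gz and gigaspeech_supervisions_XL.jsonl.gz
# (the latter is read in Stage 9 below).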

if [ $stage -le 2 ] && [ $stop_stage -ge 2 ]; then
  log "Stage 2: Prepare musan manifest"
  # We assume that you have downloaded the musan corpus
  # to $dl_dir/musan
  mkdir -p data/manifests
  lhotse prepare musan $dl_dir/musan data/manifests
fi

if [ $stage -le 3 ] && [ $stop_stage -ge 3 ]; then
  log "Stage 3: Preprocess GigaSpeech manifest"
  if [ ! -f data/fbank/.preprocess_complete ]; then
    python3 ./local/preprocess_gigaspeech.py
    touch data/fbank/.preprocess_complete
  fi
fi

if [ $stage -le 4 ] && [ $stop_stage -ge 4 ]; then
  log "Stage 4: Compute features for DEV and TEST subsets of GigaSpeech (may take 2 minutes)"
  python3 ./local/compute_fbank_gigaspeech_dev_test.py
fi

if [ $stage -le 5 ] && [ $stop_stage -ge 5 ]; then
  log "Stage 5: Split XL subset into pieces (may take 30 minutes)"
  split_dir=data/fbank/XL_split
  if [ ! -f $split_dir/.split_completed ]; then
    lhotse split-lazy ./data/fbank/cuts_XL_raw.jsonl.gz $split_dir $num_per_split
    touch $split_dir/.split_completed
  fi
fi
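
# split-lazy writes numbered pieces matching cuts_XL_raw.*.jsonl.gz
# (e.g. cuts_XL_raw.000000.jsonl.gz) into $split_dir; Stage 6 counts
# them with find to set --num-splits.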

if [ $stage -le 6 ] && [ $stop_stage -ge 6 ]; then
  log "Stage 6: Compute features for XL"
  num_splits=$(find data/fbank/XL_split -name "cuts_XL_raw.*.jsonl.gz" | wc -l)
  python3 ./local/compute_fbank_gigaspeech_splits.py \
    --num-workers 20 \
    --batch-duration 600 \
    --num-splits $num_splits
fi

if [ $stage -le 7 ] && [ $stop_stage -ge 7 ]; then
  log "Stage 7: Combine features for XL (may take 3 hours)"
  if [ ! -f data/fbank/cuts_XL.jsonl.gz ]; then
    pieces=$(find data/fbank/XL_split -name "cuts_XL.*.jsonl.gz")
    lhotse combine $pieces data/fbank/cuts_XL.jsonl.gz
  fi
fi

if [ $stage -le 8 ] && [ $stop_stage -ge 8 ]; then
  log "Stage 8: Compute fbank for musan"
  mkdir -p data/fbank
  ./local/compute_fbank_musan.py
fi

if [ $stage -le 9 ] && [ $stop_stage -ge 9 ]; then
  log "Stage 9: Prepare phone based lang"
  lang_dir=data/lang_phone
  mkdir -p $lang_dir
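
  # Prepend lexicon entries for the silence and OOV symbols, merge them
  # with the downloaded lexicon, and drop duplicate lines.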
  (echo '!SIL SIL'; echo '<SPOKEN_NOISE> SPN'; echo '<UNK> SPN'; ) |
    cat - $dl_dir/lm/lexicon.txt |
    sort | uniq > $lang_dir/lexicon.txt

  if [ ! -f $lang_dir/L_disambig.pt ]; then
    ./local/prepare_lang.py --lang-dir $lang_dir
  fi

  if [ ! -f $lang_dir/transcript_words.txt ]; then
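    # Each line of the supervisions jsonl is a JSON object; jq extracts
    # its "text" field and sed strips the surrounding double quotes.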
    gunzip -c "data/manifests/gigaspeech_supervisions_XL.jsonl.gz" \
      | jq '.text' \
      | sed 's/"//g' \
      > $lang_dir/transcript_words.txt

    # Delete utterances with garbage meta tags
    garbage_utterance_tags="<SIL> <MUSIC> <NOISE> <OTHER>"
    for tag in $garbage_utterance_tags; do
      sed -i "/${tag}/d" $lang_dir/transcript_words.txt
    done

    # Delete punctuation tags in utterances
    punctuation_tags="<COMMA> <EXCLAMATIONPOINT> <PERIOD> <QUESTIONMARK>"
    for tag in $punctuation_tags; do
      sed -i "s/${tag}//g" $lang_dir/transcript_words.txt
    done

    # Replace tabs with spaces and squeeze runs of spaces to a single space.
    sed -i 's/\t/ /g' $lang_dir/transcript_words.txt
    sed -i 's/[ ][ ]*/ /g' $lang_dir/transcript_words.txt
  fi

  cat $lang_dir/transcript_words.txt | sed 's/ /\n/g' \
    | sort -u | sed '/^$/d' > $lang_dir/words.txt
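
  # Turn the word list into a symbol table: <eps> gets id 0, each word the
  # next consecutive id, and the disambiguation symbol #0 plus <s> and </s>
  # are appended with the last ids.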
  (echo '!SIL'; echo '<SPOKEN_NOISE>'; echo '<UNK>'; ) |
    cat - $lang_dir/words.txt | sort | uniq | awk '
    BEGIN {
      print "<eps> 0";
    }
    {
      if ($1 == "<s>") {
        print "<s> is in the vocabulary!" | "cat 1>&2"
        exit 1;
      }
      if ($1 == "</s>") {
        print "</s> is in the vocabulary!" | "cat 1>&2"
        exit 1;
      }
      printf("%s %d\n", $1, NR);
    }
    END {
      printf("#0 %d\n", NR+1);
      printf("<s> %d\n", NR+2);
      printf("</s> %d\n", NR+3);
    }' > $lang_dir/words || exit 1;
  mv $lang_dir/words $lang_dir/words.txt
fi

if [ $stage -le 10 ] && [ $stop_stage -ge 10 ]; then
  log "Stage 10: Prepare BPE based lang"

  for vocab_size in ${vocab_sizes[@]}; do
    lang_dir=data/lang_bpe_${vocab_size}
    mkdir -p $lang_dir
    # We reuse words.txt from phone based lexicon
    # so that the two can share G.pt later.
    cp data/lang_phone/{words.txt,transcript_words.txt} $lang_dir

    if [ ! -f $lang_dir/bpe.model ]; then
      ./local/train_bpe_model.py \
        --lang-dir $lang_dir \
        --vocab-size $vocab_size \
        --transcript $lang_dir/transcript_words.txt
    fi

    if [ ! -f $lang_dir/L_disambig.pt ]; then
      ./local/prepare_lang_bpe.py --lang-dir $lang_dir
    fi
  done
fi

if [ $stage -le 11 ] && [ $stop_stage -ge 11 ]; then
  log "Stage 11: Prepare bigram P"

  for vocab_size in ${vocab_sizes[@]}; do
    lang_dir=data/lang_bpe_${vocab_size}

    if [ ! -f $lang_dir/transcript_tokens.txt ]; then
      ./local/convert_transcript_words_to_tokens.py \
        --lexicon $lang_dir/lexicon.txt \
        --transcript $lang_dir/transcript_words.txt \
        --oov "<UNK>" \
        > $lang_dir/transcript_tokens.txt
    fi

    if [ ! -f $lang_dir/P.arpa ]; then
      ./shared/make_kn_lm.py \
        -ngram-order 2 \
        -text $lang_dir/transcript_tokens.txt \
        -lm $lang_dir/P.arpa
    fi

    if [ ! -f $lang_dir/P.fst.txt ]; then
      python3 -m kaldilm \
        --read-symbol-table="$lang_dir/tokens.txt" \
        --disambig-symbol='#0' \
        --max-order=2 \
        $lang_dir/P.arpa > $lang_dir/P.fst.txt
    fi
  done
fi

if [ $stage -le 12 ] && [ $stop_stage -ge 12 ]; then
  log "Stage 12: Prepare G"
  # We assume you have installed kaldilm; if not, please install
  # it using: pip install kaldilm

  mkdir -p data/lm

  if [ ! -f data/lm/G_3_gram.fst.txt ]; then
    # It is used in building HLG
    python3 -m kaldilm \
      --read-symbol-table="data/lang_phone/words.txt" \
      --disambig-symbol='#0' \
      --max-order=3 \
      $dl_dir/lm/3gram_pruned_1e7.arpa > data/lm/G_3_gram.fst.txt
  fi

  if [ ! -f data/lm/G_4_gram.fst.txt ]; then
    # It is used for LM rescoring
    python3 -m kaldilm \
      --read-symbol-table="data/lang_phone/words.txt" \
      --disambig-symbol='#0' \
      --max-order=4 \
      $dl_dir/lm/4gram.arpa > data/lm/G_4_gram.fst.txt
  fi
fi

if [ $stage -le 13 ] && [ $stop_stage -ge 13 ]; then
  log "Stage 13: Compile HLG"
  ./local/compile_hlg.py --lang-dir data/lang_phone

  for vocab_size in ${vocab_sizes[@]}; do
    lang_dir=data/lang_bpe_${vocab_size}
    ./local/compile_hlg.py --lang-dir $lang_dir
  done
fi
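
# Running this script with no arguments executes stages 0 through 13 in
# order. Many steps are guarded by marker files (e.g. .split_completed)
# or output checks, so re-running after an interruption mostly skips
# work that has already completed.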