406 Commits

Author SHA1 Message Date
Yunusemre
7157f62af3
Merging onnx models (#518)
* add export function of onnx-all-in-one to export.py

* add onnx_check script for all-in-one onnx model

* minor fix

* remove unused arguments

* add onnx-all-in-one test

* fix style

* fix style

* fix requirements

* fix input/output names

* fix installing onnx_graphsurgeon

* fix installing onnx_graphsurgeon

* revert to previous requirements.txt

* fix minor
2022-08-04 23:03:41 +08:00
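The PR above exports encoder, decoder, and joiner into a single all-in-one ONNX file and adds a check script. A minimal sketch of such a sanity check, assuming the merged file is named all_in_one.onnx (the actual file name in the recipe may differ):

```python
# Structural check of a merged ONNX model; the file name is illustrative.
import onnx

model = onnx.load("all_in_one.onnx")
onnx.checker.check_model(model)

# Print the merged graph's inputs/outputs so the encoder/decoder/joiner
# parts can be located by tensor name.
for t in model.graph.input:
    print("input :", t.name)
for t in model.graph.output:
    print("output:", t.name)
```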
Zengwei Yao
a4dd273776
fix about tensorboard (#516)
* fix metricstracker

* fix style
2022-08-04 19:57:12 +08:00
Fangjun Kuang
6af5a82d8f
Convert ScaledEmbedding to nn.Embedding for inference. (#517)
* Convert ScaledEmbedding to nn.Embedding for inference.

* Fix CI style issues.
2022-08-03 15:34:55 +08:00
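For inference, a ScaledEmbedding can be folded into a plain nn.Embedding by baking its scale into the weight matrix. A minimal sketch, assuming the layer exposes `weight` and a scalar `scale` applied at lookup time (the real class may store the scale differently, e.g. in log space):

```python
import torch
import torch.nn as nn


def convert_scaled_embedding(scaled_emb) -> nn.Embedding:
    """Fold the scale into the weight so lookup needs no extra multiply."""
    effective_scale = scaled_emb.scale  # assumption: a scalar tensor
    emb = nn.Embedding(
        num_embeddings=scaled_emb.weight.shape[0],
        embedding_dim=scaled_emb.weight.shape[1],
    )
    with torch.no_grad():
        emb.weight.copy_(scaled_emb.weight * effective_scale)
    return emb
```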
Fangjun Kuang
58a96e5b68
Support exporting to ONNX format (#501)
* WIP: Support exporting to ONNX format

* Minor fixes.

* Combine encoder/decoder/joiner into a single file.

* Revert merging three onnx models into a single one.

It's quite time consuming to extract a sub-graph from the combined
model. For instance, it takes more than one hour to extract
the encoder model.

* Update CI to test ONNX models.

* Decode with exported models.

* Fix typos.

* Add more doc.

* Remove ncnn as it is not fully tested yet.

* Fix as_strided for streaming conformer.
2022-08-03 10:30:28 +08:00
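Since merging the three models was reverted, encoder, decoder, and joiner are exported as separate ONNX files. A minimal sketch of exporting one component (the joiner) with torch.onnx.export; tensor shapes, names, and the opset version are illustrative, not the recipe's exact values:

```python
import torch


def export_joiner(joiner: torch.nn.Module, filename: str = "joiner.onnx") -> None:
    joiner.eval()
    encoder_out = torch.randn(1, 512)  # assumed joiner input dimension
    decoder_out = torch.randn(1, 512)
    torch.onnx.export(
        joiner,
        (encoder_out, decoder_out),
        filename,
        opset_version=11,
        input_names=["encoder_out", "decoder_out"],
        output_names=["logit"],
        dynamic_axes={
            "encoder_out": {0: "N"},
            "decoder_out": {0: "N"},
            "logit": {0: "N"},
        },
    )
```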
Wei Kang
2f75236c05
Support dynamic chunk streaming training in pruned_transducer_stateless5 (#454)
* support dynamic chunk streaming training

* Add simulate streaming decoding

* Support streaming decoding

* fix causal

* Minor fixes

* fix streaming decode; add results
2022-07-29 16:40:06 +08:00
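Dynamic chunk training picks a different attention chunk size per batch so a single model can later be run at several latency settings, or in full-context (non-streaming) mode. A minimal sketch of the sampling step; the range and the full-context probability are illustrative:

```python
import random


def sample_chunk_size(max_frames: int = 64, full_context_prob: float = 0.5) -> int:
    """Return -1 for full context, otherwise a random chunk size in frames."""
    if random.random() < full_context_prob:
        return -1
    return random.randint(1, max_frames)
```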
Fangjun Kuang
ec69967584
Set overwrite=True when extracting features in batches. (#487) 2022-07-29 11:17:19 +08:00
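A minimal sketch of what this change amounts to when extracting features in batches with lhotse, assuming compute_and_store_features_batch accepts the overwrite flag the commit title refers to; paths and batch_duration are illustrative:

```python
from lhotse import CutSet, Fbank, FbankConfig

cuts = CutSet.from_file("data/manifests/cuts_train.jsonl.gz")
extractor = Fbank(FbankConfig(num_mel_bins=80))
cuts = cuts.compute_and_store_features_batch(
    extractor=extractor,
    storage_path="data/fbank/train",
    batch_duration=600,
    num_workers=4,
    overwrite=True,  # re-extract even if feature files already exist
)
```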
Fangjun Kuang
4612b03947
Fix using G before assignment in pruned_transducer_stateless/decode.py (#494) 2022-07-26 10:37:02 +08:00
Wei Kang
b1d0956855
Add modified_beam_search for streaming decode (#489)
* Add modified_beam_search for pruned_transducer_stateless/streaming_decode.py

* refactor

* modified beam search for stateless3,4

* Fix comments

* Add real streaming CI
2022-07-25 16:53:23 +08:00
Zengwei Yao
8203d10be7
Add stats about duration and padding proportion (#485)
* add stats about duration and padding proportion

* add stats for utt_duration

* add stats for other recipes

* add stats for other 2 recipes

* modify doc

* minor change
2022-07-25 16:40:43 +08:00
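The padding-proportion statistic measures how many frames in a padded batch are filler rather than speech. A minimal sketch of how it can be computed from per-utterance feature lengths (the recipe's exact implementation may differ):

```python
import torch


def padding_proportion(feature_lens: torch.Tensor) -> float:
    """Fraction of frames that are padding after batching to the max length."""
    max_len = feature_lens.max()
    total = max_len * feature_lens.numel()
    padded = total - feature_lens.sum()
    return (padded / total).item()


# Example: three utterances of 500, 480 and 300 frames -> ~14.7% padding.
print(padding_proportion(torch.tensor([500, 480, 300])))
```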
Quandwang
116d0cf26d
CTC attention model with reworked Conformer encoder and reworked Transformer decoder (#462)
* ctc attention model with reworked conformer encoder and reworked transformer decoder

* remove unnecessary func

* resolve flake8 conflicts

* fix typos and modify the expr of ScaledEmbedding

* use original beam size

* minor changes to the scripts

* add rnn lm decoding

* minor changes

* check whether q k v weight is None

* check whether q k v weight is None

* check whether q k v weight is None

* style correction

* update results

* update results

* upload the decoding results of rnn-lm to the RESULTS

* upload the decoding results of rnn-lm to the RESULTS

* Update egs/librispeech/ASR/RESULTS.md

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* Update egs/librispeech/ASR/RESULTS.md

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* Update egs/librispeech/ASR/RESULTS.md

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-07-22 15:31:25 +08:00
ezerhouni
608473b4eb
Add RNN-LM rescoring in fast beam search (#475) 2022-07-18 16:52:17 +08:00
ezerhouni
ffca1ae7fb
[WIP] Rnn-T LM nbest rescoring (#471) 2022-07-15 10:32:54 +08:00
LIyong.Guo
f8d28f0998
update multi_quantization installation (#469)
* update multi_quantization installation

* Update egs/librispeech/ASR/pruned_transducer_stateless6/train.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-07-13 21:16:45 +08:00
Zengwei Yao
bc2882ddcc
Simplified memory bank for Emformer (#440)
* init files

* use average value as memory vector for each chunk

* change tail padding length from right_context_length to chunk_length

* correct the files, ln -> cp

* fix bug in conv_emformer_transducer_stateless2/emformer.py

* fix doc in conv_emformer_transducer_stateless/emformer.py

* refactor init states for stream

* modify .flake8

* fix bug about memory mask when memory_size==0

* add @torch.jit.export for init_states function

* update RESULTS.md

* minor change

* update README.md

* modify doc

* replace torch.div() with <<

* fix bug, >> -> <<

* use i&i-1 to judge if it is a power of 2

* minor fix

* fix error in RESULTS.md
2022-07-12 19:19:58 +08:00
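Two of the bullets above concern integer arithmetic on chunk indices: testing that a length is a power of two with `i & (i - 1)`, and replacing an integer torch.div by a bit shift when dividing by that power of two. A small self-contained illustration (values are arbitrary):

```python
import torch


def is_power_of_two(i: int) -> bool:
    return i > 0 and (i & (i - 1)) == 0


chunk_length = 32
assert is_power_of_two(chunk_length)
shift = chunk_length.bit_length() - 1  # log2(chunk_length)

frame_idx = torch.arange(256)
chunk_idx = frame_idx >> shift  # same result as frame_idx // chunk_length
assert torch.equal(
    chunk_idx, torch.div(frame_idx, chunk_length, rounding_mode="floor")
)
```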
Zengwei Yao
ce26495238
Rand combine update result (#467)
* update RESULTS.md

* fix test code in pruned_transducer_stateless5/conformer.py

* minor fix

* delete doc

* fix style
2022-07-11 18:13:31 +08:00
Zengwei Yao
d80f29e662
Modification about random combine (#452)
* comment some lines, random combine from 1/3 layers, on linear layers in combiner

* delete commented lines

* minor change
2022-06-30 12:23:49 +08:00
Wei Kang
6e609c67a2
Using streaming conformer as transducer encoder (#380)
* support streaming in conformer

* Add more documents

* support streaming on pruned_transducer_stateless2; add delay penalty; fixes for decode states

* Minor fixes

* streaming for pruned_transducer_stateless4

* Fix conv cache error, support async streaming decoding

* Fix style

* Fix style

* Fix style

* Add torch.jit.export

* mask the initial cache

* Cutting off invalid frames of encoder_embed output

* fix relative positional encoding in streaming decoding to save computation

* Minor fixes

* Minor fixes

* Minor fixes

* Minor fixes

* Minor fixes

* Fix jit export for torch 1.6

* Minor fixes for streaming decoding

* Minor fixes on decode stream

* move model parameters to train.py

* make states in forward streaming optional

* update pretrain to support streaming model

* update results.md

* update tensorboard and pre-models

* fix typo

* Fix tests

* remove unused arguments

* add streaming decoding ci

* Minor fix

* Minor fix

* disable right context by default
2022-06-28 00:18:54 +08:00
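Streaming conformer training restricts self-attention so each frame can only see its own chunk plus a limited number of left-context chunks. A minimal sketch of building such an attention mask; the chunk size and left-context count are illustrative, not the recipe's defaults:

```python
import torch


def chunk_attention_mask(
    num_frames: int, chunk_size: int, num_left_chunks: int = 1
) -> torch.Tensor:
    """(num_frames, num_frames) bool mask; True = position may be attended to."""
    mask = torch.zeros(num_frames, num_frames, dtype=torch.bool)
    for i in range(num_frames):
        chunk = i // chunk_size
        start = max(0, (chunk - num_left_chunks) * chunk_size)
        end = min(num_frames, (chunk + 1) * chunk_size)
        mask[i, start:end] = True
    return mask


print(chunk_attention_mask(8, chunk_size=4, num_left_chunks=1))
```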
Jun Wang
d792bdc9bc
fix typo (#445) 2022-06-25 11:00:53 +08:00
Tiance Wang
c0ea334738
fix bug of concatenating list to tuple (#444) 2022-06-24 19:31:09 +08:00
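A hedged illustration of the bug class fixed here: `+` between a tuple and a list raises TypeError in Python, so one operand has to be converted first (e.g. cached states kept as a tuple, new items arriving as a list):

```python
states = (1, 2)      # e.g. cached states stored as a tuple
new_items = [3, 4]   # new values produced as a list

try:
    combined = states + new_items         # TypeError: can only concatenate tuple to tuple
except TypeError:
    combined = states + tuple(new_items)  # works: (1, 2, 3, 4)

print(combined)
```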
ezerhouni
0475d75d15
[Ready to be merged] Add RNN-LM to Conformer-CTC decoding (#439) 2022-06-23 19:37:03 +08:00
Fangjun Kuang
dc89b61b80
Add fast_beam_search_nbest. (#420)
* Add fast_beam_search_nbest.

* Fix CI errors.

* Fix CI errors.

* More fixes.

* Small fixes.

* Support using log_add in LG decoding with fast_beam_search.

* Support LG decoding in pruned_transducer_stateless

* Support LG for pruned_transducer_stateless2.

* Support LG for fast beam search.

* Minor fixes.
2022-06-22 00:09:25 +08:00
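One bullet above adds log_add scoring to LG decoding, i.e. combining path scores in the log semiring instead of taking the maximum. A minimal sketch of the underlying operation, kept numerically stable by factoring out the larger term:

```python
import math


def log_add(a: float, b: float) -> float:
    """log(exp(a) + exp(b)) without leaving the log domain."""
    if a < b:
        a, b = b, a
    return a + math.log1p(math.exp(b - a))


print(log_add(math.log(0.6), math.log(0.3)))  # == log(0.9)
```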
Fangjun Kuang
7100c33820
Add pruned RNN-T for aishell. (#436)
* Add pruned RNN-T for aishell.

* support torch script.

* Update CI.

* Minor fixes.

* Add links to sherpa.
2022-06-21 21:17:22 +08:00
Zengwei Yao
d3daeaf5cd
Upload extracted codebook indexes (#429)
* save only vq-related info to manifest

* support to join manifest files

* support using extracted codebook indexes

* fix doc

* minor fix

* add enable-distillation argument option, fix minor typos

* fix style

* fix typo
2022-06-21 19:16:59 +08:00
Zengwei Yao
a42d96dfe0
Fix warmup (#435)
* fix warmup when scan_pessimistic_batches_for_oom

* delete comments
2022-06-20 13:40:01 +08:00
Fangjun Kuang
ab788980c9
Fix an error introduced by supporting torchscript for torch 1.6.0 (#434) 2022-06-18 08:57:20 +08:00
Fangjun Kuang
d53f69108f
Support torch 1.6.0 (#433) 2022-06-17 22:24:47 +08:00
Wei Kang
5379c8e9fa
Disable drop_last in testing time (#427) 2022-06-16 15:43:48 +08:00
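Keeping drop_last=False for test-time DataLoaders ensures the final, possibly smaller batch is still decoded rather than silently skipped. A minimal sketch with a toy dataset (the recipe's sampler setup is more involved):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(10).unsqueeze(1))
test_dl = DataLoader(dataset, batch_size=4, shuffle=False, drop_last=False)

# All 10 items are seen; with drop_last=True the trailing batch of 2 would vanish.
print(sum(len(batch[0]) for batch in test_dl))  # 10
```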
Zengwei Yao
53f38c01d2
Emformer with conv module and scaling mechanism (#389)
* copy files from existing branch

* add rule in .flake8

* minor style fix

* fix typos

* add tail padding

* refactor, use fixed-length cache for batch decoding

* copy from streaming branch

* copy from streaming branch

* modify emformer states stack and unstack, streaming decoding, to be continued

* refactor Stream class

* rename streaming_feature_extractor.py

* refactor streaming decoding

* test states stack and unstack

* fix bugs, no grad, and num_processed_frames

* add modified_beam_search, fast_beam_search

* support torch.jit.export

* use torch.div

* copy from pruned_transducer_stateless4

* modify export.py

* add author info

* delete other test functions

* minor fix

* modify doc

* fix style

* minor fix doc

* minor fix

* minor fix doc

* update RESULTS.md

* fix typo

* add info

* fix typo

* fix doc

* add test function for conv module, and minor fix.

* add copyright info

* minor change of test_emformer.py

* fix doc of stack and unstack, test case with batch_size=1

* update README.md
2022-06-13 15:09:17 +08:00
Fangjun Kuang
9f6c748b30
Add links to sherpa. (#417)
* Add links to sherpa.
2022-06-10 12:19:18 +08:00
Fangjun Kuang
dbda1644b5
Replace load_manifest_lazy with load_manifest for MUSAN. (#412) 2022-06-09 11:42:18 +08:00
Fangjun Kuang
ed66877694
Replace ChunkedLilcomHdf5Writer with LilcomChunkyWriter. (#411) 2022-06-09 11:18:52 +08:00
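A minimal sketch of feature extraction using LilcomChunkyWriter, the storage backend this commit switches to; the manifest path and output directory are illustrative:

```python
from lhotse import CutSet, Fbank, LilcomChunkyWriter

cuts = CutSet.from_file("data/manifests/cuts_dev.jsonl.gz")
cuts = cuts.compute_and_store_features(
    extractor=Fbank(),
    storage_path="data/fbank/dev",
    storage_type=LilcomChunkyWriter,
)
```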
Quandwang
8512aaf585
fix typos (#409) 2022-06-08 20:08:44 +08:00
Fangjun Kuang
1094a3cb37
Replace LilcomChunkyWriter with ChunkedLilcomHdf5Writer. (#404) 2022-06-07 18:14:25 +08:00
Fangjun Kuang
80c46f0abd
Fix exporting emformer with torchscript using torch 1.6.0 (#402) 2022-06-07 09:19:37 +08:00
Fangjun Kuang
29fa878fff
Fix Emformer for torchscript using torch 1.6.0 (#401) 2022-06-06 17:08:07 +08:00
Fangjun Kuang
f1abce72f8
Use jsonl for CutSet in the LibriSpeech recipe. (#397)
* Use jsonl for cutsets in the librispeech recipe.

* Use lazy cutset for all recipes.

* More fixes to use lazy CutSet.

* Remove force=True from logging to support Python < 3.8

* Minor fixes.

* Fix style issues.
2022-06-06 10:19:16 +08:00
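With jsonl manifests, a CutSet can be opened lazily so cuts are streamed from disk instead of being held in memory, which is the point of this PR. A minimal sketch; the path is illustrative:

```python
from lhotse import CutSet

cuts = CutSet.from_jsonl_lazy(
    "data/fbank/librispeech_cuts_train-clean-100.jsonl.gz"
)
for cut in cuts:  # iterates without loading the whole manifest
    print(cut.id)
    break
```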
Zengwei Yao
148f69d8d9
Update RESULTS.md (#388)
* update RESULTS.md about pruned_transducer_stateless4

* Update RESULTS.md

This PR is only to update RESULTS.md about pruned_transducer_stateless4.

* set default value of --use-averaged-model to True

* update RESULTS.md and add decode command

* minor fix

* update export.py

* add uploaded files links

* update link

* fix typos
2022-06-04 15:52:35 +08:00
Fangjun Kuang
fbfc98f1d3
Add streaming Emformer stateless RNN-T. (#390)
* Add streaming Emformer stateless RNN-T.

* Update results for streaming Emformer.

* Minor fixes.
2022-06-01 14:31:47 +08:00
LIyong.Guo
c4ee2bc0af
[Ready to merge] stateless6: stateless4 + hubert distillation. (#387)
* a copy of stateless4 as base

* distillation with hubert

* fix typo

* example usage

* usage

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* fix comment

* add results of 100hours

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* check fairseq and quantization

* a short intro to distillation framework

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* add intro of stateless6 in README

* fix type error of dst_manifest_dir

* Update egs/librispeech/ASR/pruned_transducer_stateless6/hubert_xlarge.py

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>

* make export.py call stateless6/train.py instead of stateless2/train.py

* update results by stateless6

* adjust results format

* fix typo

Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>
2022-05-28 12:37:50 +08:00
Ewald Enzinger
8c5722de8c
[egs] Add prefix when reading manifests due to recent lhotse changes (#382)
* [egs] Add prefix when reading manifests due to recent lhotse changes

* Fix wenetspeech

* Fix style issues
2022-05-23 23:37:35 +08:00
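Recent lhotse recipes prepend a dataset prefix to manifest file names, so the reading side must include it. A minimal sketch of the adjusted path construction; the prefix, suffix, and partition name are illustrative:

```python
from pathlib import Path

from lhotse import load_manifest

src_dir = Path("data/manifests")
prefix = "librispeech"
suffix = "jsonl.gz"
partition = "train-clean-100"

supervisions = load_manifest(src_dir / f"{prefix}_supervisions_{partition}.{suffix}")
```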
Fangjun Kuang
2f1e23cde1
Narrower and deeper conformer (#330)
* Copy files for editing.

* Add random combine from #229.

* Minor fixes.

* Pass model parameters from the command line.

* Fix warnings.

* Fix warnings.

* Update readme.

* Rename to avoid conflicts.

* Update results.

* Add CI for pruned_transducer_stateless5

* Typo fixes.

* Remove random combiner.

* Update decode.py and train.py to use periodically averaged models.

* Minor fixes.

* Revert to use random combiner.

* Update results.

* Minor fixes.
2022-05-23 14:39:11 +08:00
Daniel Povey
4e23fb2252
Improve diagnostics code memory-wise and accumulate more stats. (#373)
* Update diagnostics, hopefully print more stats.

# Conflicts:
#	egs/librispeech/ASR/pruned_transducer_stateless4b/train.py

* Remove memory-limit options arg

* Remove unnecessary option for diagnostics code, collect on more batches
2022-05-19 11:45:59 +08:00
Fangjun Kuang
f6ce135608
Various fixes to support torch script. (#371)
* Various fixes to support torch script.

* Add tests to ensure that the model is torch scriptable.

* Update tests.
2022-05-16 21:46:59 +08:00
Fangjun Kuang
f23dd43719
Update results for libri+giga multi dataset setup. (#363)
* Update results for libri+giga multi dataset setup.
2022-05-14 21:45:39 +08:00
Fangjun Kuang
0f180b3ce2
Validate that there are no OOV tokens in BPE-based lexicons. (#359)
* Validate that there are no OOV tokens in BPE-based lexicons.

* Typo fixes.
2022-05-13 14:00:35 +08:00
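The validation idea: every word in a BPE-based lexicon must be encodable without falling back to the unknown token. A minimal sketch using sentencepiece; the model path and word list are illustrative:

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.load("data/lang_bpe_500/bpe.model")

unk_id = sp.unk_id()
words = ["HELLO", "WORLD"]

for word in words:
    ids = sp.encode(word, out_type=int)
    assert unk_id not in ids, f"Word {word!r} maps to OOV token(s)"
```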
Fangjun Kuang
7b7acdf369
Support --iter in export.py (#360) 2022-05-13 10:51:44 +08:00
Fangjun Kuang
aeb8986e35
Ignore padding frames during RNN-T decoding. (#358)
* Ignore padding frames during RNN-T decoding.

* Fix outdated decoding code.

* Minor fixes.
2022-05-13 07:39:14 +08:00
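Ignoring padding frames in batch decoding means using the per-utterance encoder output lengths so a stream stops advancing once its real frames are exhausted. A minimal sketch; shapes are illustrative:

```python
import torch

encoder_out = torch.randn(3, 10, 512)        # (N, T, C), padded to T=10
encoder_out_lens = torch.tensor([10, 7, 4])  # true frame counts per utterance

for t in range(encoder_out.size(1)):
    active = encoder_out_lens > t            # utterances still within range
    frames = encoder_out[active, t]          # decode only real frames
    # ... run the joiner on `frames` for the active hypotheses ...
```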
Fangjun Kuang
bc284e88e6
Run decode.py in GitHub actions. (#356) 2022-05-10 14:51:34 +08:00
Zengwei Yao
20f092e709
Support decoding with averaged model when using --iter (#353)
* support decoding with averaged model when using --iter

* minor fix

* minor fix of copyright date
2022-05-07 13:09:11 +08:00
Zengwei Yao
c059ef3169
Keep model_avg on cpu (#348)
* keep model_avg on cpu

* explicitly convert model_avg to cpu

* minor fix

* remove device conversion for model_avg

* modify usage of the model device in train.py

* change model.device to next(model.parameters()).device for decoding

* assert params.start_epoch>0

* assert params.start_epoch>0, params.start_epoch
2022-05-07 10:42:34 +08:00
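Keeping model_avg on the CPU means the running parameter average is stored and updated there while training happens on the GPU. A minimal sketch of such bookkeeping; the averaging weight and the stand-in model are illustrative:

```python
import copy

import torch
import torch.nn as nn

model = nn.Linear(4, 4)                     # stands in for the training model
model_avg = copy.deepcopy(model).to("cpu")  # running average kept on CPU


def update_average(model: nn.Module, model_avg: nn.Module, weight: float = 0.01) -> None:
    with torch.no_grad():
        for p, p_avg in zip(model.parameters(), model_avg.parameters()):
            p_avg.mul_(1.0 - weight).add_(p.detach().cpu(), alpha=weight)


update_average(model, model_avg)
```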