48 Commits

107df3b115  Desh Raj       2022-11-17 09:42:17 -05:00
    apply black on all files

60317120ca  Fangjun Kuang  2022-11-17 20:19:32 +08:00
    Revert "Apply new Black style changes"

d110b04ad3  Desh Raj       2022-11-16 13:06:43 -05:00
    apply new black formatting to all files

7e82f87126  Fangjun Kuang  2022-11-12 18:11:19 +08:00
    Add Zipformer from Dan (#672)

aa58c2ee02  Zengwei Yao    2022-10-13 15:14:28 +08:00
    Modify ActivationBalancer for speed (#612)
    * add a probability to apply ActivationBalancer
    * minor fix
    * minor fix

1c07d2fb37  Fangjun Kuang  2022-10-12 10:34:06 +08:00
    Remove all-in-one for onnx export (#614)
    * Remove all-in-one for onnx export
    * Exit on error for CI

923b60a7c6  LIyong.Guo     2022-09-28 21:20:33 +08:00
    padding zeros (#591)

5c17255eec  Wei Kang       2022-08-12 07:12:50 +08:00
    Sort results to make it more convenient to compare decoding results (#522)
    * Sort results to make it more convenient to compare decoding results
    * Add cut_id to recognition results
    * add cut_id to results for all recipes
    * Fix torch.jit.script
    * Fix comments
    * Minor fixes
    * Fix torch.jit.tracing for PyTorch versions before v1.9.0

58a96e5b68  Fangjun Kuang  2022-08-03 10:30:28 +08:00
    Support exporting to ONNX format (#501)
    * WIP: Support exporting to ONNX format
    * Minor fixes.
    * Combine encoder/decoder/joiner into a single file.
    * Revert merging three onnx models into a single one.
      It's quite time-consuming to extract a sub-graph from the combined
      model. For instance, it takes more than one hour to extract the
      encoder model.
    * Update CI to test ONNX models.
    * Decode with exported models.
    * Fix typos.
    * Add more doc.
    * Remove ncnn as it is not fully tested yet.
    * Fix as_strided for streaming conformer.

6e609c67a2  Wei Kang       2022-06-28 00:18:54 +08:00
    Using streaming conformer as transducer encoder (#380)
    * support streaming in conformer
    * Add more documents
    * support streaming on pruned_transducer_stateless2; add delay penalty; fixes for decode states
    * Minor fixes
    * streaming for pruned_transducer_stateless4
    * Fix conv cache error, support async streaming decoding
    * Fix style
    * Fix style
    * Fix style
    * Add torch.jit.export
    * mask the initial cache
    * Cutting off invalid frames of encoder_embed output
    * fix relative positional encoding in streaming decoding to save computation
    * Minor fixes
    * Minor fixes
    * Minor fixes
    * Minor fixes
    * Minor fixes
    * Fix jit export for torch 1.6
    * Minor fixes for streaming decoding
    * Minor fixes on decode stream
    * move model parameters to train.py
    * make states in forward streaming optional
    * update pretrain to support streaming model
    * update results.md
    * update tensorboard and pre-trained models
    * fix typo
    * Fix tests
    * remove unused arguments
    * add streaming decoding ci
    * Minor fix
    * Minor fix
    * disable right context by default

d792bdc9bc  Jun Wang       2022-06-25 11:00:53 +08:00
    fix typo (#445)

f6ce135608  Fangjun Kuang  2022-05-16 21:46:59 +08:00
    Various fixes to support torch script. (#371)
    * Various fixes to support torch script.
    * Add tests to ensure that the model is torch scriptable.
    * Update tests.

93c60a9d30  Mingshuang Luo 2022-04-11 22:15:18 +08:00
    Code style check for librispeech pruned transducer stateless2 (#308)

61486a0f76  Daniel Povey   2022-04-06 13:17:26 +08:00
    Remove initial_speed

72f4a673b1  Daniel Povey   2022-04-04 20:21:34 +08:00
    First draft of new approach to learning rates + init

8be10d3d6c  Daniel Povey   2022-04-02 20:03:21 +08:00
    First draft of model rework

eec597fdd5  Daniel Povey   2022-04-02 18:45:20 +08:00
    Merge changes from master

e0ba4ef3ec  Daniel Povey   2022-04-02 17:48:54 +08:00
    Make layer dropout rate 0.075, was 0.1.

45f872c27d  Daniel Povey   2022-04-01 19:33:20 +08:00
    Remove final dropout

92ec2e356e  Daniel Povey   2022-04-01 12:22:12 +08:00
    Fix test-mode

8caa18e2fe  Daniel Povey   2022-03-31 17:30:51 +08:00
    Bug fix to warmup_scale

e663713258  Daniel Povey   2022-03-31 14:43:49 +08:00
    Change how warmup is applied.

9a0c2e7fee  Daniel Povey   2022-03-31 12:17:02 +08:00
    Merge branch 'rework2i' into rework2i_restoredrop

f47fe8337a  Daniel Povey   2022-03-31 12:16:08 +08:00
    Remove some unused code

0599f38281  Daniel Povey   2022-03-31 11:53:54 +08:00
    Add final dropout to conformer

f87811e65c  Daniel Povey   2022-03-30 21:41:46 +08:00
    Fix RE identity

709c387ce6  Daniel Povey   2022-03-30 21:40:22 +08:00
    Initial refactoring to remove unnecessary vocab_size

74121ac478  Daniel Povey   2022-03-30 12:24:15 +08:00
    Merge branch 'rework2h_randloader_pow0.333_conv_8' into rework2h_randloader_pow0.333_conv_8_lessdrop_speed
    # Conflicts:
    #	egs/librispeech/ASR/pruned_transducer_stateless2/conformer.py

37ab0bcfa5  Daniel Povey   2022-03-30 11:46:23 +08:00
    Reduce speed of some components

7c46c3b0d4  Daniel Povey   2022-03-30 11:20:04 +08:00
    Remove dropout in output layer

21a099b110  Daniel Povey   2022-03-30 11:18:04 +08:00
    Fix padding bug

ca6337b78a  Daniel Povey   2022-03-30 11:12:35 +08:00
    Add another convolutional layer

1b8d7defd0  Daniel Povey   2022-03-30 00:44:18 +08:00
    Reduce 1st conv channels from 64 to 32

4e453a4bf9  Daniel Povey   2022-03-29 23:41:13 +08:00
    Rework conformer, remove some code.

11124b03ea  Daniel Povey   2022-03-29 20:32:14 +08:00
    Refactoring and simplifying conformer and frontend

2cde99509f  Daniel Povey   2022-03-27 23:21:42 +08:00
    Change max-keep-prob to 0.95

953aecf5e3  Daniel Povey   2022-03-27 00:25:32 +08:00
    Reduce layer-drop prob after warmup to 1 in 100

b43468bb67  Daniel Povey   2022-03-26 19:36:33 +08:00
    Reduce layer-drop prob

0e694739f2  Daniel Povey   2022-03-25 23:28:52 +08:00
    Fix test mode with random layer dropout

4b650e9f01  Daniel Povey   2022-03-25 20:34:33 +08:00
    Make warmup work by scaling layer contributions; leave residual layer-drop

1f548548d2  Daniel Povey   2022-03-24 15:06:11 +08:00
    Simplify the warmup code; max_abs 10->6

9a8aa1f54a  Daniel Povey   2022-03-22 15:36:20 +08:00
    Change how warmup works.

cef6348703  Daniel Povey   2022-03-22 13:50:54 +08:00
    Change max-abs from 6 to 10

11a04c50ae  Daniel Povey   2022-03-21 21:29:24 +08:00
    Change 0.025,0.05 to 0.01 in initializations

05e30d0c46  Daniel Povey   2022-03-21 21:15:00 +08:00
    Add max-abs=6, debugged version

6769087d70  Daniel Povey   2022-03-18 16:31:25 +08:00
    Remove scale_speed, make swish deriv more efficient.

11bea4513e  Daniel Povey   2022-03-17 11:17:52 +08:00
    Add remaining files in pruned_transducer_stateless2

cc8e4412f7  Daniel Povey   2022-03-16 22:16:40 +08:00
    Add more files.