511 Commits

Author SHA1 Message Date
Fangjun Kuang
87cf9231ea Support specifying iteration number of checkpoints for decoding. (#289) 2022-04-03 13:02:08 +08:00
Daniel Povey
9f62a0296c Revert transducer_stateless/ to state in upstream/master 2022-04-02 21:16:39 +08:00
Daniel Povey
807fcada68 Change learning speed of simple_lm_proj 2022-04-02 20:15:11 +08:00
Daniel Povey
34500afc43 Various bug fixes 2022-04-02 20:06:43 +08:00
Daniel Povey
8be10d3d6c First draft of model rework 2022-04-02 20:03:21 +08:00
Daniel Povey
eec597fdd5 Merge changes from master 2022-04-02 18:45:20 +08:00
Daniel Povey
e0ba4ef3ec Make layer dropout rate 0.075, was 0.1. 2022-04-02 17:48:54 +08:00
Daniel Povey
45f872c27d Remove final dropout 2022-04-01 19:33:20 +08:00
Daniel Povey
92ec2e356e Fix test-mode 2022-04-01 12:22:12 +08:00
Fangjun Kuang
e7493ede90 Don't use a lambda for dataloader's worker_init_fn. (#284) 2022-03-31 20:32:00 +08:00
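For context: a lambda passed as `worker_init_fn` cannot be pickled when DataLoader worker processes are spawned, so a module-level function is used instead. A minimal sketch in plain PyTorch; the function body and seed value here are illustrative assumptions, not the repository's actual change:

```python
import torch
from torch.utils.data import DataLoader

# A module-level (named) function is picklable, unlike a lambda, so it can be
# sent to spawned DataLoader worker processes.
def worker_init_fn(worker_id: int) -> None:
    base_seed = 42  # hypothetical; icefall derives this from the training seed
    torch.manual_seed(base_seed + worker_id)

# Usage sketch (dataset is any torch Dataset):
# loader = DataLoader(dataset, num_workers=2, worker_init_fn=worker_init_fn)
```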
Daniel Povey
8caa18e2fe Bug fix to warmup_scale 2022-03-31 17:30:51 +08:00
Fangjun Kuang
9a11808ed3 Set the seed for dataloader. (#282) 2022-03-31 16:48:46 +08:00
    Also, suppress torch warnings about division by truncation.
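The "division by truncation" warnings refer to PyTorch's deprecation warning for integer floor division (`//` / `torch.floor_divide`), which in those versions rounded toward zero rather than toward negative infinity. A generic illustration of the fix, not the repository's actual diff:

```python
import torch

lengths = torch.tensor([163, 97, 241])  # e.g. frame counts
factor = 4                              # e.g. subsampling factor

# Old style: `lengths // factor` (or torch.floor_divide) triggers a
# deprecation warning in some PyTorch versions because it truncates.
# Passing an explicit rounding_mode avoids the warning and states the intent.
out = torch.div(lengths, factor, rounding_mode="floor")
```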
Daniel Povey
49bc761ba1 Merge branch 'rework2i_restoredrop_scaled_warmup' into rework2i_restoredrop_scaled_warmup_2proj 2022-03-31 14:45:55 +08:00
    # Conflicts:
    #	egs/librispeech/ASR/pruned_transducer_stateless2/model.py
Daniel Povey
e663713258 Change how warmup is applied. 2022-03-31 14:43:49 +08:00
Daniel Povey
fcb0dba2cf Reduce initial_speed from 0.5 to 0.25 2022-03-31 13:47:28 +08:00
Daniel Povey
025d690995 Reduce initial_speed further from 0.5 to 0.25 2022-03-31 13:39:56 +08:00
Daniel Povey
ec54fa85cc Use initial_speed=0.5 2022-03-31 13:04:09 +08:00
Daniel Povey
e59db01b7c Reduce initial_speed 2022-03-31 13:03:26 +08:00
Daniel Povey
c67ae0f3a1 Make 2 projections.. 2022-03-31 13:02:40 +08:00
Daniel Povey
f75d40c725 Replace nn.Linear with ScaledLinear in simple joiner 2022-03-31 12:18:31 +08:00
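ScaledLinear lives in the recipe's scaling.py; the sketch below only illustrates the general idea (a linear layer whose weight and bias are multiplied by learnable scales kept in log space, so parameter magnitudes can adapt during training). It is not icefall's implementation, and the class and parameter names are assumptions:

```python
import torch
import torch.nn as nn

class ScaledLinearSketch(nn.Linear):
    """Rough illustration: an nn.Linear whose weight/bias carry learnable
    scalar log-scales. Not icefall's actual ScaledLinear."""

    def __init__(self, in_features, out_features, bias=True, initial_scale=1.0):
        super().__init__(in_features, out_features, bias=bias)
        self.weight_scale = nn.Parameter(torch.tensor(initial_scale).log())
        self.bias_scale = (
            nn.Parameter(torch.tensor(initial_scale).log()) if bias else None
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.weight * self.weight_scale.exp()
        bias = self.bias * self.bias_scale.exp() if self.bias is not None else None
        return nn.functional.linear(x, weight, bias)
```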
Daniel Povey
9a0c2e7fee Merge branch 'rework2i' into rework2i_restoredrop 2022-03-31 12:17:02 +08:00
Daniel Povey
f47fe8337a Remove some un-used code 2022-03-31 12:16:08 +08:00
Daniel Povey
0599f38281 Add final dropout to conformer 2022-03-31 11:53:54 +08:00
Daniel Povey
a2aca9f643 Bug-fix 2022-03-30 21:42:15 +08:00
Daniel Povey
f87811e65c Fix RE identity 2022-03-30 21:41:46 +08:00
Daniel Povey
709c387ce6 Initial refactoring to remove unnecessary vocab_size 2022-03-30 21:40:22 +08:00
Daniel Povey
74121ac478 Merge branch 'rework2h_randloader_pow0.333_conv_8' into rework2h_randloader_pow0.333_conv_8_lessdrop_speed 2022-03-30 12:24:15 +08:00
    # Conflicts:
    #	egs/librispeech/ASR/pruned_transducer_stateless2/conformer.py
Daniel Povey
37ab0bcfa5 Reduce speed of some components 2022-03-30 11:46:23 +08:00
Daniel Povey
7c46c3b0d4 Remove dropout in output layer 2022-03-30 11:20:04 +08:00
Daniel Povey
21a099b110 Fix padding bug 2022-03-30 11:18:04 +08:00
Daniel Povey
ca6337b78a Add another convolutional layer 2022-03-30 11:12:35 +08:00
Daniel Povey
1b8d7defd0 Reduce 1st conv channels from 64 to 32 2022-03-30 00:44:18 +08:00
Daniel Povey
4e453a4bf9 Rework conformer, remove some code. 2022-03-29 23:41:13 +08:00
Daniel Povey
11124b03ea Refactoring and simplifying conformer and frontend 2022-03-29 20:32:14 +08:00
Daniel Povey
57f943b25c Merge branch 'rework2h_randloader' into rework2h_pow0.333 2022-03-29 19:05:39 +08:00
Daniel Povey
2cde99509f Change max-keep-prob to 0.95 2022-03-27 23:21:42 +08:00
Daniel Povey
262388134d Increase model_warm_step to 4k 2022-03-27 11:18:16 +08:00
Daniel Povey
8a8134b9e5 Change power of lr-schedule from -0.5 to -0.333 2022-03-27 00:31:08 +08:00
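The "power" is the exponent of the inverse-power decay applied to the learning rate after warmup (Noam-style schedules use -0.5; this commit moves to -0.333, i.e. a slower decay). A generic sketch of such a schedule with the exponent exposed as a parameter; the names, default values, and exact formula are illustrative, not the recipe's actual scheduler:

```python
def noam_like_lr(step: int, base_lr: float = 5e-4,
                 warmup_steps: int = 25000, power: float = -0.333) -> float:
    """Ramp the LR linearly for `warmup_steps`, then decay it as
    (step / warmup_steps) ** power (-0.5 = classic Noam, -0.333 here)."""
    step = max(step, 1)
    warmup = min(1.0, step / warmup_steps)
    decay = (max(step, warmup_steps) / warmup_steps) ** power
    return base_lr * warmup * decay

# Example: noam_like_lr(1000), noam_like_lr(25000), noam_like_lr(100000)
```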
Daniel Povey
953aecf5e3 Reduce layer-drop prob after warmup to 1 in 100 2022-03-27 00:25:32 +08:00
Daniel Povey
b43468bb67 Reduce layer-drop prob 2022-03-26 19:36:33 +08:00
Daniel Povey
8a38d9a855 Fix/patch how fix_random_seed() is imported. 2022-03-26 15:43:47 +08:00
Daniel Povey
26a1730392 Add random-number-setting function in dataloader 2022-03-26 14:53:23 +08:00
Daniel Povey
0e694739f2 Fix test mode with random layer dropout 2022-03-25 23:28:52 +08:00
Daniel Povey
d2ed3dfc90 Fix bug 2022-03-25 20:35:11 +08:00
Daniel Povey
4b650e9f01 Make warmup work by scaling layer contributions; leave residual layer-drop 2022-03-25 20:34:33 +08:00
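This commit describes making warmup work by scaling each layer's contribution into the residual stream while keeping random layer dropout (bypassing a layer entirely) as a separate mechanism. A schematic sketch of that idea; the class name, warmup ramp, and default probability are assumptions, not icefall's code:

```python
import torch
import torch.nn as nn

class ResidualWithWarmup(nn.Module):
    """Schematic: out = x + alpha * layer(x), where alpha ramps toward 1.0 as
    warmup progresses; with some probability the layer is bypassed entirely
    (residual layer-drop). Illustration only."""

    def __init__(self, layer: nn.Module, layerdrop_prob: float = 0.075):
        super().__init__()
        self.layer = layer
        self.layerdrop_prob = layerdrop_prob

    def forward(self, x: torch.Tensor, warmup: float = 1.0) -> torch.Tensor:
        if self.training and torch.rand(1).item() < self.layerdrop_prob:
            return x                          # bypass the layer entirely
        alpha = min(1.0, max(0.1, warmup))    # scale the layer's contribution
        return x + alpha * self.layer(x)
```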
Daniel Povey
1f548548d2 Simplify the warmup code; max_abs 10->6 2022-03-24 15:06:11 +08:00
Daniel Povey
aab72bc2a5 Add changes from master to decode.py, train.py 2022-03-24 13:10:54 +08:00
Daniel Povey
5d9dae3064 Merge changes from master 2022-03-24 12:59:36 +08:00
Fangjun Kuang
395a3f952b Batch decoding for models trained with optimized_transducer (#267) 2022-03-23 19:11:34 +08:00
    * Add greedy search in batch mode.
    * Add modified beam search in batch mode.
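For orientation, transducer greedy search walks the encoder frames, feeds the current hypothesis context to the decoder, and emits the argmax symbol whenever it is not blank; this PR extends greedy and modified beam search to process whole batches at once. Below is a single-utterance sketch (at most one symbol per frame, for brevity) with generic callables standing in for the recipe's decoder and joiner, whose real interfaces differ:

```python
import torch

def greedy_search_sketch(encoder_out: torch.Tensor,
                         decoder_fn, joiner_fn,
                         blank_id: int = 0,
                         context_size: int = 2):
    """encoder_out: (T, C) for one utterance.
    decoder_fn(prev_tokens) -> decoder embedding for the current context.
    joiner_fn(enc_frame, dec_out) -> (vocab,) logits.
    The batched version keeps one hypothesis and decoder state per utterance
    and advances them together frame by frame."""
    hyp = [blank_id] * context_size                 # blank-padded start context
    dec_out = decoder_fn(hyp[-context_size:])
    for t in range(encoder_out.size(0)):
        logits = joiner_fn(encoder_out[t], dec_out)
        y = int(logits.argmax())
        if y != blank_id:                           # emit and update context
            hyp.append(y)
            dec_out = decoder_fn(hyp[-context_size:])
    return hyp[context_size:]
```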
Fangjun Kuang
3ae7265737 More fixes to the checkpoint code. (#266) 2022-03-23 14:37:54 +08:00