Commit Graph

  • eec597fdd5 Merge changes from master Daniel Povey 2022-04-02 18:43:42 +08:00
  • e0ba4ef3ec Make layer dropout rate 0.075, was 0.1. Daniel Povey 2022-04-02 17:47:12 +08:00
  • 0b6a2213c3 Modify icefall/__init__.py. (#287) Zengwei Yao 2022-04-02 15:01:45 +08:00
  • 189ca555b1 Use Emformer as RNN-T encoder. (#278) Fangjun Kuang 2022-04-02 13:37:39 +08:00
  • bc9a82b518 Modify icefall/__init__.py and .flake8. yaozengwei 2022-04-02 10:34:06 +08:00
  • 00baebd46e Modify icefall/__init__.py to import common functions defined in icefall/utils.py. yaozengwei 2022-04-01 22:33:15 +08:00
  • 128c9db545 Add RNN-T Emformer for Aishell. Fangjun Kuang 2022-04-01 21:58:40 +08:00
  • 3479af0e6e Minor fixes. Fangjun Kuang 2022-04-01 19:46:52 +08:00
  • 45f872c27d Remove final dropout Daniel Povey 2022-04-01 19:33:20 +08:00
  • 6f64a0ed8d Support streaming decoding. Fangjun Kuang 2022-04-01 19:07:34 +08:00
  • 92ec2e356e Fix test-mode Daniel Povey 2022-04-01 12:22:12 +08:00
  • 397216eb78 Merge branch 'master' of https://github.com/k2-fsa/icefall into spgi Desh Raj 2022-03-31 14:55:33 -04:00
  • e7493ede90 Don't use a lambda for dataloader's worker_init_fn. (#284) Fangjun Kuang 2022-03-31 20:32:00 +08:00 (see the worker_init_fn sketch after this graph)
  • 4f87802200 Fixes after comment. Fangjun Kuang 2022-03-31 20:29:40 +08:00
  • 9e5ca821bc Don't use a lambda for dataloader's worker_init_fn. Fangjun Kuang 2022-03-31 20:22:18 +08:00
  • feb526c2a4 Predicting blanks via gradients from the trivial joiner. Fangjun Kuang 2022-03-31 20:12:41 +08:00
  • 2ff81b2838 support half precision training pkufool 2022-03-31 19:58:57 +08:00
  • 239a8fa1f2 Copy files for editing. Fangjun Kuang 2022-03-31 16:54:58 +08:00
  • 8caa18e2fe Bug fix to warmup_scale Daniel Povey 2022-03-31 17:30:51 +08:00
  • e7d369ab29 Copy files. Fangjun Kuang 2022-03-31 16:51:27 +08:00
  • 9a11808ed3 Set the seed for dataloader. (#282) Fangjun Kuang 2022-03-31 16:48:46 +08:00
  • 49bc761ba1 Merge branch 'rework2i_restoredrop_scaled_warmup' into rework2i_restoredrop_scaled_warmup_2proj Daniel Povey 2022-03-31 14:45:55 +08:00
  • e663713258 Change how warmup is applied. Daniel Povey 2022-03-31 14:43:49 +08:00
  • fcb0dba2cf Reduce initial_speed from 0.5 to 0.25 Daniel Povey 2022-03-31 13:47:28 +08:00
  • 025d690995 Reduce initial_speed further from 0.5 to 0.25 Daniel Povey 2022-03-31 13:39:56 +08:00
  • ec54fa85cc Use initial_speed=0.5 Daniel Povey 2022-03-31 13:04:09 +08:00
  • e59db01b7c Reduce initial_speed Daniel Povey 2022-03-31 13:03:26 +08:00
  • c67ae0f3a1 Make 2 projections.. Daniel Povey 2022-03-31 13:02:40 +08:00
  • f75d40c725 Replace nn.Linear with ScaledLinear in simple joiner Daniel Povey 2022-03-31 12:18:31 +08:00
  • 9a0c2e7fee Merge branch 'rework2i' into rework2i_restoredrop Daniel Povey 2022-03-31 12:17:02 +08:00
  • f47fe8337a Remove some un-used code Daniel Povey 2022-03-31 12:16:08 +08:00
  • 65a2275312 Set the seed for dataloader. Fangjun Kuang 2022-03-31 12:09:03 +08:00
  • 0599f38281 Add final dropout to conformer Daniel Povey 2022-03-31 11:53:54 +08:00
  • fc40bfea82 fix typo of torch.eig (#281) LIyong.Guo 2022-03-31 10:43:46 +08:00
  • 2045125fd9 Fix CI. (#280) Fangjun Kuang 2022-03-31 10:43:02 +08:00
  • 07a9a3410d Minor fixes. Fangjun Kuang 2022-03-31 10:27:58 +08:00
  • d368a69e3f fix typo of torch.eig glynpu 2022-03-30 23:09:56 +08:00
  • a2aca9f643 Bug-fix Daniel Povey 2022-03-30 21:42:15 +08:00
  • f87811e65c Fix RE identity Daniel Povey 2022-03-30 21:41:46 +08:00
  • 709c387ce6 Initial refactoring to remove unnecessary vocab_size Daniel Povey 2022-03-30 21:40:22 +08:00
  • f284ae6d81 Fix CI. Fangjun Kuang 2022-03-30 18:58:39 +08:00
  • 981b064007 Update doc to clarify the installation order of dependencies. (#279) Fangjun Kuang 2022-03-30 18:50:54 +08:00
  • cb9271a3e2 Update doc to clarify the installation order of dependencies. Fangjun Kuang 2022-03-30 18:49:35 +08:00
  • 5728a4456e Use Emformer model as RNN-T encoder. Fangjun Kuang 2022-03-30 17:09:15 +08:00
  • e867a62d32 Copy files. Fangjun Kuang 2022-03-30 16:09:51 +08:00
  • b4c7a27f3c Add emformer model. Fangjun Kuang 2022-03-30 16:09:32 +08:00
  • f686635b54 Update diagnostics (#260) Mingshuang Luo 2022-03-30 14:52:55 +08:00
  • 4b5b49c9b0 Add log weight pushing pkufool 2022-03-30 12:58:26 +08:00
  • 74121ac478 Merge branch 'rework2h_randloader_pow0.333_conv_8' into rework2h_randloader_pow0.333_conv_8_lessdrop_speed Daniel Povey 2022-03-30 12:24:15 +08:00
  • 37ab0bcfa5 Reduce speed of some components Daniel Povey 2022-03-30 11:46:23 +08:00
  • 7c46c3b0d4 Remove dropout in output layer Daniel Povey 2022-03-30 11:20:04 +08:00
  • 21a099b110 Fix padding bug Daniel Povey 2022-03-30 11:18:04 +08:00
  • ca6337b78a Add another convolutional layer Daniel Povey 2022-03-30 11:11:32 +08:00
  • 8bb2f01b11 Add LG decoding pkufool 2022-03-30 10:52:47 +08:00
  • 1b8d7defd0 Reduce 1st conv channels from 64 to 32 Daniel Povey 2022-03-30 00:44:18 +08:00
  • 4e453a4bf9 Rework conformer, remove some code. Daniel Povey 2022-03-29 23:41:13 +08:00
  • 11124b03ea Refactoring and simplifying conformer and frontend Daniel Povey 2022-03-29 20:32:14 +08:00
  • 57f943b25c Merge branch 'rework2h_randloader' into rework2h_pow0.333 Daniel Povey 2022-03-29 19:05:39 +08:00
  • 7c5249fb88 Minor fixes. Fangjun Kuang 2022-03-29 16:10:19 +08:00
  • 0aa7bf08b8 Merge branch 'master' into wenetspeech-pruned-transducer-stateless-pinyin Mingshuang Luo 2022-03-28 21:49:21 +08:00
  • 5fa6c67873 do a change for .flake8 luomingshuang 2022-03-28 21:48:35 +08:00
  • e53411c99e pruned_transducer_stateless_for_wenetspeech luomingshuang 2022-03-28 21:18:33 +08:00
  • 52f1f6775d Update beam search to support max/log_add in selecting duplicate hyps. Fangjun Kuang 2022-03-28 12:33:58 +08:00
  • ce079e1417 Add results for standard beam search. Fangjun Kuang 2022-03-27 23:46:50 +08:00
  • 2cde99509f Change max-keep-prob to 0.95 Daniel Povey 2022-03-27 23:21:42 +08:00
  • cc6c608fb6 Minor fixes. Fangjun Kuang 2022-03-27 21:50:56 +08:00
  • 58794ba493 Add results. Fangjun Kuang 2022-03-27 21:35:21 +08:00
  • b0d34fbb8c Update results for stateless transducer without pruned RNN-T loss. Fangjun Kuang 2022-03-27 16:16:08 +08:00
  • 262388134d Increase model_warm_step to 4k Daniel Povey 2022-03-27 11:18:16 +08:00
  • 8a8134b9e5 Change power of lr-schedule from -0.5 to -0.333 Daniel Povey 2022-03-17 13:18:58 +08:00
  • 953aecf5e3 Reduce layer-drop prob after warmup to 1 in 100 Daniel Povey 2022-03-27 00:25:32 +08:00
  • b43468bb67 Reduce layer-drop prob Daniel Povey 2022-03-26 19:36:33 +08:00
  • 8a38d9a855 Fix/patch how fix_random_seed() is imported. Daniel Povey 2022-03-26 15:43:47 +08:00
  • 26a1730392 Add random-number-setting function in dataloader Daniel Povey 2022-03-26 14:46:27 +08:00
  • 0e694739f2 Fix test mode with random layer dropout Daniel Povey 2022-03-25 23:28:52 +08:00
  • 609f76675f do a fix luomingshuang 2022-03-25 20:56:13 +08:00
  • d2ed3dfc90 Fix bug Daniel Povey 2022-03-25 20:35:11 +08:00
  • 4b650e9f01 Make warmup work by scaling layer contributions; leave residual layer-drop Daniel Povey 2022-03-25 20:34:33 +08:00
  • 5553a086d0 do a change for RESULTS.md luomingshuang 2022-03-25 19:32:23 +08:00
  • 33fbde8d97 do a change for RESULTS.md luomingshuang 2022-03-25 19:31:07 +08:00
  • cc68bc256b add fast beam search for decoding luomingshuang 2022-03-25 19:26:58 +08:00
  • aecb6dce71 Randomly combining output from different transformer encoder layers. Fangjun Kuang 2022-03-25 17:39:57 +08:00
  • 12de88043a Randomly combining output from encoder layers. Fangjun Kuang 2022-03-25 17:13:22 +08:00
  • 1f548548d2 Simplify the warmup code; max_abs 10->6 Daniel Povey 2022-03-24 15:06:06 +08:00
  • aab72bc2a5 Add changes from master to decode.py, train.py Daniel Povey 2022-03-24 13:10:54 +08:00
  • 5d9dae3064 Merge changes from master Daniel Povey 2022-03-24 12:59:36 +08:00
  • 2da510cb8b Merge branch 'master' of https://github.com/k2-fsa/icefall into spgi Desh Raj 2022-03-23 15:44:23 -04:00
  • 395a3f952b Batch decoding for models trained with optimized_transducer (#267) Fangjun Kuang 2022-03-23 19:11:34 +08:00
  • d21ac737ad Minor fixes. Fangjun Kuang 2022-03-23 19:11:01 +08:00
  • 055406a67e Minor fixes. Fangjun Kuang 2022-03-23 17:35:39 +08:00
  • 127155b37d Minor fixes Fangjun Kuang 2022-03-23 17:23:22 +08:00
  • ced16beab0 Minor fixes. Fangjun Kuang 2022-03-23 16:23:29 +08:00
  • 8cc5cd81b3 Add modified beam search in batch mode. Fangjun Kuang 2022-03-23 15:44:33 +08:00
  • 7fa5860073 Add greedy search in batch mode. Fangjun Kuang 2022-03-23 11:43:26 +08:00
  • aa71eaaac7 Refactoring. Fangjun Kuang 2022-03-23 11:10:01 +08:00
  • 3ae7265737 More fixes to the checkpoint code. (#266) Fangjun Kuang 2022-03-23 14:37:54 +08:00
  • 20a965e8a9 More fixes to the checkpoint code. Fangjun Kuang 2022-03-23 14:17:53 +08:00
  • 1ce4349c17 Merge adb54aea91abe211b19ec75eeb422b15a3867405 into 6a091da0b0543befb0492848d3583700c274d111 Fangjun Kuang 2022-03-23 12:52:43 +08:00
  • 6a091da0b0 Minor fixes for saving checkpoints. (#265) Fangjun Kuang 2022-03-23 12:22:05 +08:00
  • 4fe0c0dca2 Fix loading checkpoints saved by previous code. Fangjun Kuang 2022-03-23 12:21:11 +08:00
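
Several commits above (e7493ede90, 9e5ca821bc, 65a2275312, and PR #282 "Set the seed for dataloader") revolve around one generic PyTorch pattern: pass a named, module-level function as the DataLoader's worker_init_fn and seed each worker inside it. The sketch below is not icefall's actual code; the dataset class and the exact seeding scheme are illustrative assumptions. It only shows the general shape of the fix: a lambda cannot be pickled when dataloader workers are spawned, whereas a top-level function can, and per-worker seeding keeps randomized data loading reproducible.

```python
import random

import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset


class RangeDataset(Dataset):
    """A tiny placeholder dataset used only to make the sketch runnable."""

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return idx


def worker_init_fn(worker_id: int) -> None:
    # A top-level function (unlike a lambda) can be pickled, which matters
    # when DataLoader workers are started with the "spawn" method.
    worker_info = torch.utils.data.get_worker_info()
    base_seed = worker_info.seed if worker_info is not None else 0
    seed = base_seed % (2**32)
    # Seed the per-worker RNGs so each worker gets a different but
    # reproducible random stream.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)


if __name__ == "__main__":
    loader = DataLoader(
        RangeDataset(),
        batch_size=2,
        num_workers=2,
        worker_init_fn=worker_init_fn,  # named function, not a lambda
    )
    for batch in loader:
        print(batch)
```

In the repository itself the per-worker seeding appears to go through the fix_random_seed() helper mentioned in commit 8a38d9a855; the direct random/numpy/torch calls above are a generic stand-in for that.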