1776 Commits

Author SHA1 Message Date
Daniel Povey
125ea04a42 Rework positional encoding 2022-11-09 20:48:27 +08:00
Daniel Povey
e4a3b2da7d Mostly-cosmetic fixes found via mypy 2022-11-09 17:40:09 +08:00
Daniel Povey
308059edba Cosmetic fixes 2022-11-09 17:14:18 +08:00
Daniel Povey
f8210e1d80 Reduce feedforward dim of the 4th and 5th encoder stacks.
# Conflicts:
#	egs/librispeech/ASR/pruned_transducer_stateless7/train.py
2022-11-09 14:52:44 +08:00
Daniel Povey
d1d4be8ecc Remove debug statement 2022-11-09 14:18:23 +08:00
Daniel Povey
3ff3f440ee Make sub-modules drop out independently. 2022-11-09 14:15:56 +08:00
Daniel Povey
423f9e3026 Increase query-head-dim from 24 to 32. 2022-11-09 13:28:29 +08:00
Daniel Povey
364a4c3838 Reduce pos_dim from 384 to 128. 2022-11-09 13:27:27 +08:00
Daniel Povey
cc260711b8 Make pos_dim the same as it was in scaled_adam_exp229, although this was probably too high. 2022-11-09 13:26:18 +08:00
Daniel Povey
cba194aa26 Bug fix RE masking 2022-11-09 13:12:34 +08:00
Daniel Povey
20e6d2a157 Rework zipformer code for clarity and extensibility 2022-11-09 12:56:07 +08:00
Daniel Povey
797a0e6ce7 Change order of convolution and nonlin-attention modules 2022-11-08 20:00:25 +08:00
Daniel Povey
36bff9b369 Fix to comment 2022-11-07 12:33:12 +08:00
Daniel Povey
47f42ef5db Replace the 1st of the ConvolutionModules with NonlinAttentionModule 2022-11-05 14:19:43 +08:00
Daniel Povey
eb6e2b5a1d Have 2 squeeze-excite modules per layer, using different attention heads. 2022-11-04 17:40:51 +08:00
Daniel Povey
efbe20694f Use the attention weights as input for the ModifiedSEModule 2022-11-04 16:01:07 +08:00
Daniel Povey
0d94783e76 Instead of a pooling operation, use the first bottleneck_dim dimensions of the preceding self_attn.forward2 as the input to the squeeze-excite module. 2022-11-04 15:16:59 +08:00
Daniel Povey
c27ee8cfcf Merge branch 'scaled_adam_exp277' into scaled_adam_exp281 2022-11-04 15:06:23 +08:00
Daniel Povey
67d470766f Revert bottleneck_dim from 8 to 16 2022-11-04 15:02:56 +08:00
Daniel Povey
cefcd061bd Merge branch 'scaled_adam_exp271' into scaled_adam_exp274 2022-11-04 14:50:00 +08:00
Daniel Povey
31d9bbfb3c Merge branch 'scaled_adam_exp268b' into scaled_adam_exp279 2022-11-04 14:42:00 +08:00
Daniel Povey
70300e34d3 Merge branch 'scaled_adam_exp273' into scaled_adam_exp277
# Conflicts:
#	egs/librispeech/ASR/pruned_transducer_stateless7/zipformer.py
2022-11-04 12:41:02 +08:00
Daniel Povey
f625810de1 Use the balancer; remove the unused sigmoid module. 2022-11-03 19:21:37 +08:00
Daniel Povey
a9c384e69e Add Whiten module after squeeze_proj. 2022-11-03 19:04:34 +08:00
Daniel Povey
11cb30bf49 Reduce bottleneck dim of SEModule from 16 to 8. 2022-11-03 17:30:36 +08:00
Daniel Povey
824c3afd7d Merge branch 'scaled_adam_exp265b' into scaled_adam_exp268b 2022-11-03 14:35:00 +08:00
Daniel Povey
eb915f170c Merge branch 'scaled_adam_exp265b' into scaled_adam_exp266b 2022-11-03 14:32:40 +08:00
Daniel Povey
97a1dd40cf Change initialization value of weight in SimpleCombine from 0.0 to 0.1; ignore infinities in MetricsTracker. 2022-11-03 13:46:14 +08:00
Daniel Povey
44bdda1218 Move ModifiedSEModule to end of ZipformerEncoderLayer 2022-11-03 13:13:41 +08:00
Daniel Povey
a2dbce2a9a Add Whiten module, with whitening_limit=10.0, at output of ModifiedSEModule 2022-11-03 13:02:54 +08:00
Daniel Povey
a27670d097 Restore feedforward3 module 2022-11-03 12:41:19 +08:00
Daniel Povey
0379ab57a2 Make weight in SimpleCombine a vector 2022-11-03 12:20:29 +08:00
Daniel Povey
e08f5c1bce Replace Pooling module with ModifiedSEModule 2022-11-01 14:38:06 +08:00
Daniel Povey
4da4a3a5df Merge branch 'scaled_adam_exp236' into scaled_adam_exp242 2022-10-31 19:37:53 +08:00
Daniel Povey
b7876baed6 Remove dynamic weights in SimpleCombine 2022-10-31 19:22:01 +08:00
Daniel Povey
b091ae5482 Add bias in weight module 2022-10-31 17:11:21 +08:00
Daniel Povey
5e51534fbc Introduce minimum probs in the SimpleCombiner 2022-10-31 17:02:21 +08:00
Daniel Povey
12f17f550e Introduce a dropout rate for the dynamic submodules of the conformer. 2022-10-31 16:18:52 +08:00
Daniel Povey
3de8a5aef2 Bug fix 2022-10-31 15:50:46 +08:00
Daniel Povey
5fda800b6d Implement pooling module, add it after initial feedforward. 2022-10-31 15:49:18 +08:00
Daniel Povey
730e6c8914 Change schedule after initial loss was not promising 2022-10-31 13:47:26 +08:00
Daniel Povey
b8db0f53f1 Change the schedule of bypass_scale min: make it larger and decrease it more slowly. 2022-10-31 13:11:59 +08:00
Daniel Povey
efbb1d25c7 Restore the layer-skipping changes from scaled_adam_219 and scaled_adam_exp220 that were accidentally lost 2022-10-30 14:59:49 +08:00
Daniel Povey
e4a22bbe96 Reduce initial clamp_min for bypass_scale from 1.0 to 0.5. 2022-10-30 14:43:02 +08:00
Daniel Povey
e9c69d8477 Add warmup schedule for zipformer encoder layer, from 1.0 -> 0.2. 2022-10-30 14:41:18 +08:00
Daniel Povey
8b0722e626 Rework how warmup count is produced; should not affect results. 2022-10-30 14:17:41 +08:00
Daniel Povey
6b6143f28c Merge branch 'scaled_adam_exp218' into scaled_adam_exp221 2022-10-30 13:17:29 +08:00
Daniel Povey
a3561c8dcd Have warmup schedule for layer-skipping 2022-10-29 21:01:00 +08:00
Daniel Povey
072776b2a1 Apply layer-skip dropout prob 2022-10-29 20:11:39 +08:00
Daniel Povey
9a7979d7b8 Avoid falling off the loop for weird inputs 2022-10-29 20:03:41 +08:00