10 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Zengwei Yao | 9ceffa4db1 | Merge 3dc33515c0bcba749a775cf08b8aba546763fb66 into 3199058194a48d45aeee740f2aa9bdbef0bec29d | 2023-09-12 08:25:12 -07:00 |
| yaozengwei | 3dc33515c0 | split utterance over 512 frames into overlapping chunks | 2023-08-04 10:26:52 +08:00 |
| Fangjun Kuang | 1dbbd7759e | Add tests for subsample.py and fix typos (#1180) | 2023-07-25 14:46:18 +08:00 |
| yaozengwei | 215541c7c5 | Do block-wise attention when seq_len is larger than 512, with block_size <= 512 | 2023-07-23 16:12:57 +08:00 |
| yaozengwei | ee485c02fc | modify attn_offsets | 2023-07-21 15:38:22 +08:00 |
| yaozengwei | 6aaa971b34 | make block-size be a list | 2023-07-21 11:34:19 +08:00 |
| yaozengwei | 80a14f93d3 | Use block-wise attention | 2023-07-20 19:38:03 +08:00 |
| Fangjun Kuang | 947f0614c9 | Fix running exported model on GPU. (#1131) | 2023-06-15 12:25:15 +08:00 |
| danfu | 0cb71ad3bc | add updated zipformer onnx export (#1108) (Co-authored-by: Fangjun Kuang <csukuangfj@gmail.com>) | 2023-06-12 14:02:23 +08:00 |
| Zengwei Yao | f18b539fbc | Add the upgraded Zipformer model (#1058) | 2023-05-19 16:47:59 +08:00 |
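Several of the commits above (80a14f93d3, 6aaa971b34, 215541c7c5, 3dc33515c0) deal with long utterances: sequences over 512 frames are split into overlapping chunks, and self-attention is restricted to blocks of at most 512 frames. The snippet below is only a minimal sketch of that idea, not the actual zipformer code; the helper names, the overlap length, and the mask construction are assumptions made for illustration.

```python
import torch


def split_into_overlapping_chunks(
    x: torch.Tensor, chunk_size: int = 512, overlap: int = 64
) -> list:
    """Split a (seq_len, feature_dim) utterance into chunks of at most
    `chunk_size` frames, where consecutive chunks share `overlap` frames.

    Hypothetical helper for illustration only.
    """
    seq_len = x.size(0)
    if seq_len <= chunk_size:
        return [x]
    step = chunk_size - overlap
    chunks = []
    for start in range(0, seq_len - overlap, step):
        chunks.append(x[start : start + chunk_size])
    return chunks


def block_diagonal_attention_mask(seq_len: int, block_size: int = 512) -> torch.Tensor:
    """Boolean (seq_len, seq_len) mask that is True where attention is
    *disallowed*: each position may only attend within its own block of
    `block_size` frames, so attention cost no longer grows with the full
    sequence length squared.
    """
    idx = torch.arange(seq_len)
    block_id = idx // block_size
    # Positions in different blocks are masked out.
    return block_id.unsqueeze(0) != block_id.unsqueeze(1)


if __name__ == "__main__":
    utt = torch.randn(1300, 80)  # e.g. 1300 frames of 80-dim fbank features
    chunks = split_into_overlapping_chunks(utt)
    print([c.shape[0] for c in chunks])  # chunk lengths, each <= 512
    mask = block_diagonal_attention_mask(1300, block_size=512)
    print(mask.shape, mask.dtype)  # torch.Size([1300, 1300]) torch.bool
```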
Full commit message of f18b539fbc (Add the upgraded Zipformer model, #1058):

* add the zipformer codes, copied from branch from_dan_scaled_adam_exp1119
* support model export with torch.jit.script
* update RESULTS.md
* support exporting streaming model with torch.jit.script
* add results of streaming models, with some minor changes
* update README.md
* add CI test
* update k2 version in requirements-ci.txt
* update pyproject.toml
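One item in the #1058 commit message is exporting the model with torch.jit.script. The sketch below shows the general torch.jit.script export/load workflow with a small placeholder module standing in for the Zipformer encoder; the real export script in icefall has its own command-line options and checkpoint handling, so treat this only as an illustration.

```python
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """Placeholder module standing in for the Zipformer encoder."""

    def __init__(self, input_dim: int = 80, hidden_dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(input_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, input_dim)
        return torch.relu(self.proj(x))


model = TinyEncoder()
model.eval()

# torch.jit.script compiles the Python code itself (unlike tracing),
# so shape-dependent control flow inside forward() is preserved.
scripted = torch.jit.script(model)
scripted.save("encoder_jit_script.pt")

# For deployment, the scripted model can be loaded without the
# original Python class definition being available.
loaded = torch.jit.load("encoder_jit_script.pt")
out = loaded(torch.randn(1, 100, 80))
print(out.shape)  # torch.Size([1, 100, 256])
```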