Mistmoon
94e828e9ab
Merge branch 'k2-fsa:master' into cr-ctc-aishell
2025-07-04 16:03:44 +08:00
hhzzff
1ab9421b23
add timestamps for ctc-decode in aishell
2025-07-02 10:40:14 +08:00
Fangjun Kuang
fba5e67d5e
Fix CI tests. ( #1974 )
...
- Introduce unified AMP helpers (create_grad_scaler, torch_autocast) to handle
  deprecations in PyTorch ≥2.3.0
- Replace direct uses of torch.cuda.amp.GradScaler and torch.cuda.amp.autocast
  with the new utilities across all training and inference scripts
- Update all torch.load calls to include weights_only=False for compatibility
  with newer PyTorch versions
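A minimal sketch of what such version-aware AMP helpers could look like; the names create_grad_scaler and torch_autocast come from the commit message, but the implementation details below are assumptions and may differ from the actual icefall code.

```python
import torch
from packaging import version

# PyTorch >= 2.3.0 deprecates torch.cuda.amp.* in favor of torch.amp.*
_TORCH_GE_2_3 = version.parse(torch.__version__) >= version.parse("2.3.0")


def create_grad_scaler(device: str = "cuda", **kwargs):
    """Create a GradScaler via the non-deprecated API when available."""
    if _TORCH_GE_2_3:
        return torch.amp.GradScaler(device, **kwargs)
    return torch.cuda.amp.GradScaler(**kwargs)


def torch_autocast(device_type: str = "cuda", **kwargs):
    """Return an autocast context manager matching the installed PyTorch."""
    if _TORCH_GE_2_3:
        return torch.amp.autocast(device_type=device_type, **kwargs)
    return torch.cuda.amp.autocast(**kwargs)
```

The weights_only=False argument mentioned above is needed because newer PyTorch releases default torch.load to weights_only=True, which rejects full checkpoint objects.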
2025-07-01 13:47:55 +08:00
Fangjun Kuang
300a821f58
Fix aishell training ( #1916 )
2025-04-10 10:30:37 +08:00
Fangjun Kuang
d4d4f281ec
Revert "Replace deprecated pytorch methods ( #1814 )" ( #1841 )
...
This reverts commit 3e4da5f78160d3dba3bdf97968bd7ceb8c11631f.
2024-12-18 16:49:57 +08:00
Li Peng
3e4da5f781
Replace deprecated pytorch methods ( #1814 )
...
* Replace deprecated pytorch methods
- torch.cuda.amp.GradScaler(...) => torch.amp.GradScaler("cuda", ...)
- torch.cuda.amp.autocast(...) => torch.amp.autocast("cuda", ...)
* Replace `with autocast(...)` with `with autocast("cuda", ...)`
Co-authored-by: Li Peng <lipeng@unisound.ai>
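For illustration, a minimal training-step sketch using the replacement API named in this commit; the model, optimizer, and tensor shapes are placeholders, not taken from icefall.

```python
import torch

# Placeholder model and optimizer; the real recipes use far larger models.
model = torch.nn.Linear(80, 500).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

scaler = torch.amp.GradScaler("cuda")  # was: torch.cuda.amp.GradScaler()

features = torch.randn(8, 80, device="cuda")
with torch.amp.autocast("cuda", dtype=torch.float16):  # was: torch.cuda.amp.autocast()
    loss = model(features).pow(2).mean()

# Scaled backward pass and optimizer step, unchanged by the migration.
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
```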
2024-12-16 10:24:16 +08:00
Zengwei Yao
334beed2af
fix usages of returned losses after adding attention-decoder in zipformer ( #1689 )
2024-07-12 16:50:58 +08:00
zr_jin
eb132da00d
additional instruction for the `grad_scale is too small` error ( #1550 )
2024-03-14 11:33:49 +08:00
zr_jin
23913f6afd
Minor refinements for some stale but recently merged PRs ( #1354 )
...
* incorporate https://github.com/k2-fsa/icefall/pull/1269
* incorporate https://github.com/k2-fsa/icefall/pull/1301
* black formatted
* incorporate https://github.com/k2-fsa/icefall/pull/1162
* black formatted
2023-10-31 10:28:20 +08:00
zr_jin
d76c3fe472
Migrate zipformer model to other Chinese datasets ( #1216 )
...
added zipformer recipe for AISHELL-1
2023-10-24 16:24:46 +08:00