Fangjun Kuang
fba5e67d5e
Fix CI tests. ( #1974 )
- Introduce unified AMP helpers (create_grad_scaler, torch_autocast) to handle
deprecations in PyTorch ≥2.3.0
- Replace direct uses of torch.cuda.amp.GradScaler and torch.cuda.amp.autocast
with the new utilities across all training and inference scripts
- Update all torch.load calls to include weights_only=False for compatibility with
newer PyTorch versions
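
The two helper names above come from the commit message; the bodies below are only a sketch of how such version-dispatching wrappers might look (the real icefall implementation may differ), together with the torch.load change mentioned above:

    import torch
    from packaging import version

    _TORCH_GE_2_3 = version.parse(torch.__version__) >= version.parse("2.3.0")

    def create_grad_scaler(enabled: bool = True):
        # torch.cuda.amp.GradScaler is deprecated in PyTorch >= 2.3.0 in
        # favour of the device-agnostic torch.amp.GradScaler.
        if _TORCH_GE_2_3:
            return torch.amp.GradScaler("cuda", enabled=enabled)
        return torch.cuda.amp.GradScaler(enabled=enabled)

    def torch_autocast(dtype=torch.float16, enabled: bool = True):
        # Same dispatch for autocast: torch.cuda.amp.autocast -> torch.amp.autocast.
        if _TORCH_GE_2_3:
            return torch.amp.autocast("cuda", dtype=dtype, enabled=enabled)
        return torch.cuda.amp.autocast(dtype=dtype, enabled=enabled)

    # Newer PyTorch releases flipped the default of weights_only in torch.load,
    # so checkpoints are loaded with the flag passed explicitly
    # (the path below is a placeholder):
    # ckpt = torch.load("exp/checkpoint.pt", map_location="cpu", weights_only=False)
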
2025-07-01 13:47:55 +08:00
Fangjun Kuang
d4d4f281ec
Revert "Replace deprecated pytorch methods ( #1814 )" ( #1841 )
This reverts commit 3e4da5f78160d3dba3bdf97968bd7ceb8c11631f.
2024-12-18 16:49:57 +08:00
Li Peng
3e4da5f781
Replace deprecated pytorch methods ( #1814 )
* Replace deprecated pytorch methods
- torch.cuda.amp.GradScaler(...) => torch.amp.GradScaler("cuda", ...)
- torch.cuda.amp.autocast(...) => torch.amp.autocast("cuda", ...)
* Replace `with autocast(...)` with `with autocast("cuda", ...)`
Co-authored-by: Li Peng <lipeng@unisound.ai>
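
For context, the old and new API calls are used the same way inside a mixed-precision training step; the toy model below is only an illustration, not code from this repository (requires PyTorch >= 2.3 and a CUDA device):

    import torch
    import torch.nn as nn

    device = torch.device("cuda")
    model = nn.Linear(80, 500).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    scaler = torch.amp.GradScaler("cuda")  # was: torch.cuda.amp.GradScaler()

    for _ in range(2):
        x = torch.randn(8, 80, device=device)
        optimizer.zero_grad()
        # was: with torch.cuda.amp.autocast(enabled=True):
        with torch.amp.autocast("cuda", dtype=torch.float16):
            loss = model(x).pow(2).mean()
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
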
2024-12-16 10:24:16 +08:00
zr_jin
5445ea6df6
Use shuffled LibriSpeech cuts instead ( #1450 )
* use shuffled LibriSpeech cuts instead
* leave the old code in comments for reference
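
A minimal sketch of what using pre-shuffled cuts looks like with lhotse; the manifest filename is an assumption, not necessarily the one used by the recipe:

    from lhotse import load_manifest_lazy

    # Load a pre-shuffled training manifest lazily instead of shuffling the
    # full CutSet in memory (hypothetical path):
    train_cuts = load_manifest_lazy(
        "data/fbank/librispeech_cuts_train-all-shuf.jsonl.gz"
    )
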
2024-01-08 15:09:21 +08:00
Fangjun Kuang
48c2c22dbe
Fix export to ncnn for lstm3 ( #900 )
2023-02-13 11:44:25 +08:00
Desh Raj
d31db01037
manual correction of black formatting
2022-11-17 14:18:05 -05:00
Desh Raj
107df3b115
apply black on all files
2022-11-17 09:42:17 -05:00
Fangjun Kuang
60317120ca
Revert "Apply new Black style changes"
2022-11-17 20:19:32 +08:00
Desh Raj
d110b04ad3
apply new black formatting to all files
2022-11-16 13:06:43 -05:00
Fangjun Kuang
e334e570d8
Filter utterances with number_tokens > number_feature_frames. ( #604 )
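
A hedged sketch of the filter this commit title describes, written against the lhotse CutSet API; the SentencePiece processor `sp` and the comparison against raw (pre-subsampling) frame counts are assumptions:

    def remove_utterances_with_too_many_tokens(cuts, sp):
        # Drop cuts whose BPE token count exceeds the number of feature
        # frames; such utterances cannot be aligned by the transducer loss.
        def keep(c):
            num_tokens = len(sp.encode(c.supervisions[0].text, out_type=int))
            return num_tokens <= c.num_frames
        return cuts.filter(keep)
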
2022-11-12 07:57:58 +08:00
Zengwei Yao
3600ce1b5f
Apply delay penalty on transducer ( #654 )
* add delay penalty
* fix CI
* fix CI
2022-11-04 16:10:09 +08:00
Zengwei Yao
03668771d7
Get timestamps during decoding ( #598 )
* print out timestamps during decoding
* add word-level alignments
* support to compute mean symbol delay with word-level alignments
* print variance of symbol delay
* update doc
* support to compute delay for pruned_transducer_stateless4
* fix bug
* add doc
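
A sketch of the timestamp computation the commit describes: each decoded token carries the encoder frame index at which it was emitted, and converting frames to seconds only needs the frame shift and subsampling factor (the 10 ms / 4x values below are typical assumptions, not necessarily those used here):

    from typing import List

    def frames_to_timestamps(
        frame_indexes: List[int],
        frame_shift: float = 0.01,    # 10 ms fbank frames (assumption)
        subsampling_factor: int = 4,  # encoder subsampling (assumption)
    ) -> List[float]:
        # Emission time of each decoded symbol, in seconds.
        return [i * subsampling_factor * frame_shift for i in frame_indexes]

    # E.g. tokens emitted at encoder frames [3, 10, 27] -> [0.12, 0.4, 1.08] s;
    # subtracting reference word start times from these values gives the
    # symbol delay that is averaged in the commit above.
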
2022-11-01 10:24:00 +08:00
Zengwei Yao
f3ad32777a
Gradient filter for training lstm model ( #564 )
* init files
* add gradient filter module
* refactor getting median value
* add cutoff for grad filter
* delete comments
* apply gradient filter in LSTM module, to filter both input and params
* fix typing and refactor
* filter with soft mask
* rename lstm_transducer_stateless2 to lstm_transducer_stateless3
* fix typos, and update RESULTS.md
* minor fix
* fix return typing
* fix typo
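
A hedged sketch of the gradient-filtering idea: in the actual recipe it runs inside the backward pass of the LSTM module via a custom autograd function, but the core computation is a soft mask built from per-sequence gradient norms relative to their median; the threshold value and the (batch, ...) gradient layout below are assumptions:

    import torch

    def filter_gradient(grad: torch.Tensor, threshold: float = 10.0,
                        eps: float = 1e-20) -> torch.Tensor:
        # grad: gradient with shape (batch, ...); sequences whose gradient
        # norm is far above the batch median are softly scaled down so a few
        # outlier utterances cannot dominate the LSTM update.
        norms = grad.flatten(1).norm(dim=1)    # per-sequence gradient norm
        median = norms.median()
        cutoff = threshold * median + eps
        mask = cutoff / (norms + cutoff)       # ~1 normally, -> 0 for outliers
        return grad * mask.view(-1, *([1] * (grad.dim() - 1)))
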
2022-09-29 11:15:43 +08:00