8 Commits

Daniel Povey · c1063def95 · 2022-02-27 17:34:58 +08:00
First version of rand-combine iterated-training-like idea.

Daniel Povey · 63d8d935d4 · 2022-02-27 13:56:15 +08:00
Refactor/simplify ConformerEncoder

Daniel Povey · 2af1b3af98 · 2022-02-14 19:39:19 +08:00
Remove ReLU in attention

Daniel Povey · a859dcb205 · 2022-02-07 12:14:48 +08:00
Remove learnable offset, use relu instead.

Daniel Povey · 48a764eccf · 2022-02-06 21:19:37 +08:00
Add min in q,k,v of attention
Fangjun Kuang · 14c93add50 · 2021-12-27 16:01:10 +08:00
Remove batchnorm, weight decay, and SOS from transducer conformer encoder (#155)
  * Remove batchnorm, weight decay, and SOS.
  * Make --context-size configurable.
  * Update results.
Fangjun Kuang · cb04c8a750 · 2021-12-18 11:00:42 +08:00
Limit the number of symbols per frame in RNN-T decoding. (#151)
Fangjun Kuang · 1d44da845b · 2021-12-18 07:42:51 +08:00
RNN-T Conformer training for LibriSpeech (#143)
  * Begin to add RNN-T training for librispeech.
  * Copy files from conformer_ctc. Will edit it.
  * Use conformer/transformer model as encoder.
  * Begin to add training script.
  * Add training code.
  * Remove long utterances to avoid OOM when a large max_duration is used.
  * Begin to add decoding script.
  * Add decoding script.
  * Minor fixes.
  * Add beam search.
  * Use LSTM layers for the encoder. Needs more tuning.
  * Use stateless decoder.
  * Minor fixes to make it ready for merge.
  * Fix README.
  * Update RESULT.md to include RNN-T Conformer.
  * Minor fixes.
  * Fix tests.
  * Minor fixes.
  * Minor fixes.
  * Fix tests.