# Introduction
Please refer to <https://icefall.readthedocs.io/en/latest/recipes/librispeech.html>
for how to run models in this recipe.
# Transducers
There are several folders in this directory whose names contain `transducer`.
The following table lists the differences among them.
|                                       | Encoder   | Decoder            | Comment                                          |
|---------------------------------------|-----------|--------------------|--------------------------------------------------|
| `transducer`                          | Conformer | LSTM               |                                                  |
| `transducer_stateless`                | Conformer | Embedding + Conv1d |                                                  |
| `transducer_lstm`                     | LSTM      | LSTM               |                                                  |
| `transducer_stateless_multi_datasets` | Conformer | Embedding + Conv1d | Uses data from GigaSpeech as extra training data |
The decoder in `transducer_stateless` is modified from the paper
[Rnn-Transducer with Stateless Prediction Network](https://ieeexplore.ieee.org/document/9054419/).
We place an additional Conv1d layer right after the input embedding layer.
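The idea of an embedding followed by a Conv1d can be sketched roughly as below. This is a simplified illustration, not the recipe's actual implementation: the class and parameter names (`StatelessDecoder`, `context_size`, etc.) are hypothetical, and the real decoder in this folder differs in details.

```python
import torch
import torch.nn as nn


class StatelessDecoder(nn.Module):
    """Hypothetical sketch of a stateless prediction network:
    an embedding over previous labels followed by a causal Conv1d,
    so the decoder sees only a fixed left context instead of
    carrying an LSTM state."""

    def __init__(self, vocab_size: int, embed_dim: int, context_size: int = 2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Conv1d over the label axis; kernel_size limits the context.
        # Left-pad by (context_size - 1) to keep the conv causal.
        self.conv = nn.Conv1d(
            embed_dim, embed_dim,
            kernel_size=context_size,
            padding=context_size - 1,
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        # y: (batch, num_labels) int64 label ids
        e = self.embedding(y).permute(0, 2, 1)  # (batch, embed_dim, num_labels)
        c = self.conv(e)
        # Drop the right-side padding so output length matches input length.
        c = c[:, :, : y.size(1)]
        return c.permute(0, 2, 1)  # (batch, num_labels, embed_dim)


decoder = StatelessDecoder(vocab_size=500, embed_dim=256)
out = decoder(torch.randint(0, 500, (4, 10)))
print(tuple(out.shape))  # (4, 10, 256)
```

Because the convolution's receptive field is only `context_size` labels, the decoder is "stateless": its output depends on a short window of previous tokens rather than the full label history.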