Update README.md

Author: Dongji Gao (committed by GitHub)
Date: 2023-09-25 14:38:03 -04:00
Parent: ef9b68b510
Commit: c89c5a7299


@@ -35,7 +35,7 @@ We modify $G(\mathbf{y})$ by adding self-loop arcs into each state and bypass ar
 We incorporate the penalty strategy and apply different configurations for the self-loop arc and bypass arc. The penalties are set as
-$\lambda_{1_{i}} = \beta_{1} * \tau_{1}^{i},\quad \lambda_{2_{i}} = \beta_{2} * \tau_{2}^{i}$
+$$\lambda_{1_{i}} = \beta_{1} * \tau_{1}^{i},\quad \lambda_{2_{i}} = \beta_{2} * \tau_{2}^{i}$$
 for the $i$-th training epoch. $\beta_{1}$ and $\beta_{2}$ are the initial penalties, which encourage the model to rely more on the given transcript at the start of training.
 They decay exponentially by factors of $\tau_{1}, \tau_{2} \in (0, 1)$, gradually encouraging the model to align speech with $\star$ when it is uncertain.
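
For illustration, here is a minimal sketch of this decay schedule in Python; the function name and the parameter values below are hypothetical and not taken from the recipe:

```python
# Sketch of the exponential penalty decay: lambda_i = beta * tau**i.
# beta is the initial penalty; tau in (0, 1) is the decay factor.
def penalty(epoch: int, beta: float, tau: float) -> float:
    """Return the penalty applied at the given training epoch."""
    return beta * tau**epoch

# Example with illustrative values: the self-loop and bypass arcs
# each get their own (beta, tau) configuration.
for epoch in range(3):
    lambda_1 = penalty(epoch, beta=0.5, tau=0.9)   # self-loop arc
    lambda_2 = penalty(epoch, beta=0.6, tau=0.75)  # bypass arc
    print(f"epoch {epoch}: self-loop={lambda_1:.4f}, bypass={lambda_2:.4f}")
```

Because both factors lie in $(0, 1)$, the printed penalties shrink each epoch, matching the intent of starting strict and loosening over training.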