Fix to comment

Daniel Povey 2022-11-07 12:33:12 +08:00
parent 47f42ef5db
commit 36bff9b369


@@ -1559,7 +1559,7 @@ class FeedforwardModule(nn.Module):
 class NonlinAttentionModule(nn.Module):
     """This is like the ConvolutionModule, but refactored so that we use multiplication by attention weights (borrowed
-    from the attention module) in palce of actual convolution.
+    from the attention module) in place of actual convolution.
     Args:
         channels (int): The number of channels of conv layers.
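The corrected docstring describes the module's key idea: rather than mixing across time with a depthwise convolution, the features are mixed by multiplying with attention weights borrowed from the attention module. A minimal, hypothetical sketch of that idea is below; the class name, projections, and tensor shapes are assumptions for illustration, not the actual icefall/Zipformer implementation:

```python
import torch
import torch.nn as nn


class NonlinAttentionSketch(nn.Module):
    """Hypothetical sketch: modulate features with attention weights
    in place of a convolution over the time axis."""

    def __init__(self, channels: int):
        super().__init__()
        self.in_proj = nn.Linear(channels, channels)
        self.out_proj = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor, attn_weights: torch.Tensor) -> torch.Tensor:
        # x: (seq_len, batch, channels)
        # attn_weights: (batch, seq_len, seq_len), rows summing to 1
        s = torch.tanh(self.in_proj(x))
        # Mix across the time axis using the attention weights,
        # where a convolution would normally do the mixing.
        mixed = torch.einsum('bts,sbc->tbc', attn_weights, s)
        # Elementwise (multiplicative) interaction with the input.
        return self.out_proj(x * mixed)
```

Under these assumptions, the `einsum` plays the role the convolution kernel would otherwise play, but with data-dependent weights shared with the attention module instead of learned fixed taps.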