ContextNet¶
ContextNet Model¶
class openspeech.models.contextnet.model.ContextNetLSTMModel(configs: omegaconf.dictconfig.DictConfig, tokenizer: openspeech.tokenizers.tokenizer.Tokenizer)[source]¶

ContextNet encoder + LSTM decoder.
- Parameters
configs (DictConfig) – configuration set.
tokenizer (Tokenizer) – tokenizer is in charge of preparing the inputs for a model.
- Inputs:
inputs (torch.FloatTensor): An input sequence passed to the encoder. Typically a padded FloatTensor of size (batch, seq_length, dimension).
input_lengths (torch.LongTensor): The length of each input sequence, of size (batch).
- Returns
Result of model predictions, containing y_hats, logits, encoder_outputs, encoder_logits, and encoder_output_lengths.
- Return type
outputs (dict)
class openspeech.models.contextnet.model.ContextNetModel(configs: omegaconf.dictconfig.DictConfig, tokenizer: openspeech.tokenizers.tokenizer.Tokenizer)[source]¶

ContextNet encoder-only model.
- Parameters
configs (DictConfig) – configuration set.
tokenizer (Tokenizer) – tokenizer is in charge of preparing the inputs for a model.
- Inputs:
inputs (torch.FloatTensor): An input sequence passed to the encoder. Typically a padded FloatTensor of size (batch, seq_length, dimension).
input_lengths (torch.LongTensor): The length of each input sequence, of size (batch).
- Returns
Result of model predictions, containing y_hats, logits, and output_lengths.
- Return type
outputs (dict)
forward(inputs: torch.Tensor, input_lengths: torch.Tensor) → Dict[str, torch.Tensor][source]¶

Forward propagate the inputs for inference.
- Inputs:
inputs (torch.FloatTensor): An input sequence passed to the encoder. Typically a padded FloatTensor of size (batch, seq_length, dimension).
input_lengths (torch.LongTensor): The length of each input sequence, of size (batch).
- Returns
Result of model predictions, containing y_hats, logits, and output_lengths.
- Return type
outputs (dict)
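The padded-input convention above can be illustrated without the library: every utterance in a batch is padded to the batch's maximum frame count, and input_lengths records each sequence's true length so the encoder can mask the padding. A minimal pure-Python sketch (the helper name pad_batch is made up for illustration; in practice this is done with tensors):

```python
from typing import List, Tuple


def pad_batch(features: List[List[List[float]]]) -> Tuple[List[List[List[float]]], List[int]]:
    """Pad variable-length feature sequences to a common length.

    `features` is a list of utterances, each a (seq_length, dimension)
    list of frames. Returns the padded batch of shape
    (batch, max_seq_length, dimension) plus the original lengths,
    mirroring the `inputs` / `input_lengths` pair the model expects.
    """
    dim = len(features[0][0])
    lengths = [len(seq) for seq in features]
    max_len = max(lengths)
    # Append zero frames until every utterance reaches max_len.
    padded = [seq + [[0.0] * dim] * (max_len - len(seq)) for seq in features]
    return padded, lengths


# Two utterances with 3 and 2 frames of 4-dimensional features.
batch = [
    [[0.1] * 4, [0.2] * 4, [0.3] * 4],
    [[0.4] * 4, [0.5] * 4],
]
inputs, input_lengths = pad_batch(batch)
# Both sequences are now 3 frames long; input_lengths == [3, 2]
# remembers the true lengths.
```

With real data the same pair would be built as torch tensors (e.g. via torch.nn.utils.rnn.pad_sequence) before being passed to forward.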
test_step(batch: tuple, batch_idx: int) → collections.OrderedDict[source]¶

Forward propagate an (inputs, targets) pair for testing.
- Inputs:
batch (tuple): A batch containing inputs, targets, input_lengths, and target_lengths.
batch_idx (int): The index of the batch.
- Returns
Loss computed on the test batch.
- Return type
loss (torch.Tensor)
training_step(batch: tuple, batch_idx: int) → collections.OrderedDict[source]¶

Forward propagate an (inputs, targets) pair for training.
- Inputs:
batch (tuple): A training batch containing inputs, targets, input_lengths, and target_lengths.
batch_idx (int): The index of the batch.
- Returns
Loss computed on the training batch.
- Return type
loss (torch.Tensor)
validation_step(batch: tuple, batch_idx: int) → collections.OrderedDict[source]¶

Forward propagate an (inputs, targets) pair for validation.
- Inputs:
batch (tuple): A validation batch containing inputs, targets, input_lengths, and target_lengths.
batch_idx (int): The index of the batch.
- Returns
Loss computed on the validation batch.
- Return type
loss (torch.Tensor)
class openspeech.models.contextnet.model.ContextNetTransducerModel(configs: omegaconf.dictconfig.DictConfig, tokenizer: openspeech.tokenizers.tokenizer.Tokenizer)[source]¶

ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context
Paper: https://arxiv.org/abs/2005.03191
- Parameters
configs (DictConfig) – configuration set.
tokenizer (Tokenizer) – tokenizer is in charge of preparing the inputs for a model.
- Inputs:
inputs (torch.FloatTensor): An input sequence passed to the encoder. Typically a padded FloatTensor of size (batch, seq_length, dimension).
input_lengths (torch.LongTensor): The length of each input sequence, of size (batch).
- Returns
Result of model predictions.
- Return type
outputs (dict)
ContextNet Configuration¶
class openspeech.models.contextnet.configurations.ContextNetConfigs(model_name: str = 'contextnet', model_size: str = 'medium', input_dim: int = 80, num_encoder_layers: int = 5, kernel_size: int = 5, num_channels: int = 256, encoder_dim: int = 640, optimizer: str = 'adam')[source]¶

This is the configuration class to store the configuration of a ContextNet model. It is used to instantiate a ContextNet model.
Configuration objects inherit from :class:`~openspeech.dataclass.configs.OpenspeechDataclass`.
- Parameters
model_name (str) – Model name (default: contextnet)
model_size (str, optional) – Size of the model ['small', 'medium', 'large'] (default: medium)
input_dim (int, optional) – Dimension of input vector (default: 80)
num_encoder_layers (int, optional) – The number of convolution layers (default: 5)
kernel_size (int, optional) – Value of convolution kernel size (default: 5)
num_channels (int, optional) – The number of channels in the convolution filter (default: 256)
encoder_dim (int, optional) – Dimension of encoder output vector (default: 640)
optimizer (str) – Optimizer for training (default: adam)
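Since these configuration classes are dataclasses inheriting from OpenspeechDataclass, they can be turned into a typed DictConfig with OmegaConf. A minimal sketch, assuming openspeech and omegaconf are installed (the field overrides shown are arbitrary examples):

```python
# Sketch: build a ContextNet configuration, overriding two fields.
# Assumes the openspeech package (and its omegaconf dependency) is installed.
from omegaconf import OmegaConf

from openspeech.models.contextnet.configurations import ContextNetConfigs

# Fields not passed explicitly keep the defaults documented above
# (model_name='contextnet', encoder_dim=640, ...).
configs = OmegaConf.structured(ContextNetConfigs(model_size="large", num_channels=512))

print(configs.model_name)
print(configs.num_channels)
```

OmegaConf.structured keeps the dataclass types attached, so a typo'd key or a wrongly-typed override raises an error instead of silently passing through. The same pattern applies to ContextNetLSTMConfigs and ContextNetTransducerConfigs below.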
class openspeech.models.contextnet.configurations.ContextNetLSTMConfigs(model_name: str = 'contextnet_lstm', model_size: str = 'medium', input_dim: int = 80, num_encoder_layers: int = 5, num_decoder_layers: int = 2, kernel_size: int = 5, num_channels: int = 256, encoder_dim: int = 640, num_attention_heads: int = 8, attention_dropout_p: float = 0.1, decoder_dropout_p: float = 0.1, max_length: int = 128, teacher_forcing_ratio: float = 1.0, rnn_type: str = 'lstm', decoder_attn_mechanism: str = 'loc', optimizer: str = 'adam')[source]¶

This is the configuration class to store the configuration of a ContextNetLSTM model. It is used to instantiate a ContextNetLSTM model.
Configuration objects inherit from :class:`~openspeech.dataclass.configs.OpenspeechDataclass`.
- Parameters
model_name (str) – Model name (default: contextnet_lstm)
model_size (str, optional) – Size of the model ['small', 'medium', 'large'] (default: medium)
input_dim (int, optional) – Dimension of input vector (default: 80)
num_encoder_layers (int, optional) – The number of convolution layers (default: 5)
num_decoder_layers (int) – The number of decoder layers (default: 2)
kernel_size (int, optional) – Value of convolution kernel size (default: 5)
num_channels (int, optional) – The number of channels in the convolution filter (default: 256)
encoder_dim (int, optional) – Dimension of encoder output vector (default: 640)
num_attention_heads (int) – The number of attention heads (default: 8)
attention_dropout_p (float) – The dropout probability of the attention module (default: 0.1)
decoder_dropout_p (float) – The dropout probability of the decoder (default: 0.1)
max_length (int) – Maximum decoding length (default: 128)
teacher_forcing_ratio (float) – The ratio of teacher forcing (default: 1.0)
rnn_type (str) – Type of rnn cell (rnn, lstm, gru) (default: lstm)
decoder_attn_mechanism (str) – The attention mechanism for the decoder (default: loc)
optimizer (str) – Optimizer for training (default: adam)
class openspeech.models.contextnet.configurations.ContextNetTransducerConfigs(model_name: str = 'contextnet_transducer', model_size: str = 'medium', input_dim: int = 80, num_encoder_layers: int = 5, num_decoder_layers: int = 1, kernel_size: int = 5, num_channels: int = 256, decoder_hidden_state_dim: int = 2048, encoder_dim: int = 640, decoder_output_dim: int = 640, decoder_dropout_p: float = 0.1, rnn_type: str = 'lstm', optimizer: str = 'adam')[source]¶

This is the configuration class to store the configuration of a ContextNetTransducer model. It is used to instantiate a ContextNetTransducer model.
Configuration objects inherit from :class:`~openspeech.dataclass.configs.OpenspeechDataclass`.
- Parameters
model_name (str) – Model name (default: contextnet_transducer)
model_size (str, optional) – Size of the model ['small', 'medium', 'large'] (default: medium)
input_dim (int, optional) – Dimension of input vector (default: 80)
num_encoder_layers (int, optional) – The number of convolution layers (default: 5)
num_decoder_layers (int, optional) – The number of rnn layers (default: 1)
kernel_size (int, optional) – Value of convolution kernel size (default: 5)
num_channels (int, optional) – The number of channels in the convolution filter (default: 256)
decoder_hidden_state_dim (int, optional) – The number of features in the decoder hidden state (default: 2048)
encoder_dim (int, optional) – Dimension of encoder output vector (default: 640)
decoder_output_dim (int, optional) – Dimension of decoder output vector (default: 640)
decoder_dropout_p (float, optional) – Dropout probability of decoder (default: 0.1)
rnn_type (str, optional) – Type of rnn cell (rnn, lstm, gru) (default: lstm)
optimizer (str) – Optimizer for training (default: adam)