Tokenizers

Tokenizer

class openspeech.tokenizers.tokenizer.Tokenizer(*args, **kwargs)[source]

A tokenizer is in charge of preparing the inputs for a model.

Note

Do not use this class directly, use one of the sub classes.
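
Since the base class only defines the shared interface, import the concrete subclass that matches your corpus and label unit. The module paths below are the ones documented on this page; nothing else is assumed.

    # Pick the subclass matching your dataset and unit type; the base
    # Tokenizer is not meant to be instantiated directly.
    from openspeech.tokenizers.aishell.character import AIShellCharacterTokenizer
    from openspeech.tokenizers.ksponspeech.character import KsponSpeechCharacterTokenizer
    from openspeech.tokenizers.ksponspeech.subword import KsponSpeechSubwordTokenizer
    from openspeech.tokenizers.ksponspeech.grapheme import KsponSpeechGraphemeTokenizer
    from openspeech.tokenizers.librispeech.character import LibriSpeechCharacterTokenizer
    from openspeech.tokenizers.librispeech.subword import LibriSpeechSubwordTokenizer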

AISHELL-1 Character Tokenizer

class openspeech.tokenizers.aishell.character.AIShellCharacterTokenizer(configs: omegaconf.dictconfig.DictConfig)[source]

Character-unit tokenizer for the AISHELL-1 dataset.

Parameters

configs (DictConfig) – configuration set.

decode(labels)[source]

Converts a sequence of label indices to a string.

Parameters

labels (numpy.ndarray) – sequence of label indices

Returns: sentence
  • sentence (str or list): the decoded sentence, or a list of sentences when labels contains multiple label sequences

load_vocab(vocab_path, encoding='utf-8')[source]

Loads the vocabulary file and returns the unit2id and id2unit mappings.

Parameters
  • vocab_path (str) – csv file with character labels

  • encoding (str) – encoding method

Returns: unit2id, id2unit
  • unit2id (dict): unit2id[unit] = id

  • id2unit (dict): id2unit[id] = unit

class openspeech.tokenizers.aishell.character.AIShellCharacterTokenizerConfigs(sos_token: str = '<sos>', eos_token: str = '<eos>', pad_token: str = '<pad>', blank_token: str = '<blank>', encoding: str = 'utf-8', unit: str = 'aishell_character', vocab_path: str = '../../../data_aishell/aishell_labels.csv')[source]
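
A minimal sketch of constructing the character tokenizer from its structured config. The field names and defaults come from AIShellCharacterTokenizerConfigs above; whether the constructor reads them from the top level of the config or from a nested tokenizer group is an assumption (the nested form is shown), and the CSV referenced by vocab_path must already exist from the AISHELL preprocessing step.

    from omegaconf import OmegaConf

    from openspeech.tokenizers.aishell.character import (
        AIShellCharacterTokenizer,
        AIShellCharacterTokenizerConfigs,
    )

    # Build the structured config and override the relative default path with
    # a local one (placeholder path produced by AISHELL preprocessing).
    tokenizer_cfg = OmegaConf.structured(AIShellCharacterTokenizerConfigs())
    tokenizer_cfg.vocab_path = "data_aishell/aishell_labels.csv"

    # Assumption: the constructor expects the fields under a `tokenizer` group,
    # mirroring a project-wide Hydra config. If your version reads them from
    # the top level, pass `tokenizer_cfg` directly instead.
    configs = OmegaConf.create({"tokenizer": tokenizer_cfg})
    tokenizer = AIShellCharacterTokenizer(configs)

    # load_vocab returns the unit <-> id mappings documented above.
    unit2id, id2unit = tokenizer.load_vocab(
        vocab_path=configs.tokenizer.vocab_path,
        encoding=configs.tokenizer.encoding,
    )
    print(len(unit2id), "units in the vocabulary")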

KsponSpeech Character Tokenizer

class openspeech.tokenizers.ksponspeech.character.KsponSpeechCharacterTokenizer(configs: omegaconf.dictconfig.DictConfig)[source]

Character-unit tokenizer for the KsponSpeech dataset.

Parameters

configs (DictConfig) – configuration set.

decode(labels)[source]

Converts a sequence of label indices to a string (numbers => Hangeul).

Parameters

labels (numpy.ndarray) – sequence of label indices

Returns: sentence
  • sentence (str or list): the decoded sentence, or a list of sentences when labels contains multiple label sequences

load_vocab(vocab_path, encoding='utf-8')[source]

Loads the vocabulary file and returns the unit2id and id2unit mappings.

Parameters
  • vocab_path (str) – csv file with character labels

  • encoding (str) – encoding method

Returns: unit2id, id2unit
  • unit2id (dict): unit2id[unit] = id

  • id2unit (dict): id2unit[id] = unit

class openspeech.tokenizers.ksponspeech.character.KsponSpeechCharacterTokenizerConfigs(sos_token: str = '<sos>', eos_token: str = '<eos>', pad_token: str = '<pad>', blank_token: str = '<blank>', encoding: str = 'utf-8', unit: str = 'kspon_character', vocab_path: str = '../../../aihub_labels.csv')[source]
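
A sketch of decode as documented above: a 1-D array of label indices yields one Hangeul string, and the str-or-list return annotation suggests a 2-D batch yields a list of strings. The index values are placeholders; the real id-to-character mapping depends on the aihub_labels.csv produced during preprocessing, and `tokenizer` is assumed to be a constructed KsponSpeechCharacterTokenizer (see the AISHELL example above for the config boilerplate).

    import numpy as np

    # Placeholder label ids; the actual ids come from the vocabulary CSV.
    labels = np.array([42, 7, 131, 5, 2])

    sentence = tokenizer.decode(labels)      # -> a single Hangeul string
    print(sentence)

    batch = np.stack([labels, labels])       # 2-D input
    sentences = tokenizer.decode(batch)      # -> presumably a list of strings
    print(sentences)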

KsponSpeech Subword Tokenizer

class openspeech.tokenizers.ksponspeech.subword.KsponSpeechSubwordTokenizer(configs: omegaconf.dictconfig.DictConfig)[source]

Subword-unit tokenizer for the KsponSpeech dataset.

Parameters

configs (DictConfig) – configuration set.

decode(labels)[source]

Converts a sequence of label indices to a string (numbers => Hangeul).

Parameters

labels (numpy.ndarray) – sequence of label indices

Returns: sentence
  • sentence (str or list): the decoded sentence, or a list of sentences when labels contains multiple label sequences

class openspeech.tokenizers.ksponspeech.subword.KsponSpeechSubwordTokenizerConfigs(sos_token: str = '<s>', eos_token: str = '</s>', pad_token: str = '<pad>', blank_token: str = '<blank>', encoding: str = 'utf-8', unit: str = 'kspon_subword', sp_model_path: str = 'sp.model', vocab_size: int = 3200)[source]
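
The sp_model_path and vocab_size fields suggest the subword tokenizer is backed by a SentencePiece model; how the library itself trains or loads that model is not shown on this page. The sketch below only illustrates producing an sp.model file with the sentencepiece package directly, under that assumption.

    import sentencepiece as spm

    # Train a subword model over a plain-text transcript file (one sentence per
    # line). The input file name and model prefix are illustrative choices;
    # only vocab_size mirrors the config default above.
    spm.SentencePieceTrainer.train(
        input="kspon_transcripts.txt",
        model_prefix="sp",               # writes sp.model / sp.vocab
        vocab_size=3200,
        character_coverage=1.0,
    )

    # Decoding ids with the trained model, independent of the tokenizer class.
    sp = spm.SentencePieceProcessor(model_file="sp.model")
    print(sp.decode([10, 57, 3]))        # placeholder ids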

KsponSpeech Grapheme Tokenizer

class openspeech.tokenizers.ksponspeech.grapheme.KsponSpeechGraphemeTokenizer(configs: omegaconf.dictconfig.DictConfig)[source]

Grapheme-unit tokenizer for the KsponSpeech dataset.

Parameters

configs (DictConfig) – configuration set.

decode(labels)[source]

Converts a sequence of label indices to a string (numbers => Hangeul).

Parameters

labels (numpy.ndarray) – sequence of label indices

Returns: sentence
  • sentence (str or list): the decoded sentence, or a list of sentences when labels contains multiple label sequences

load_vocab(vocab_path, encoding='utf-8')[source]

Loads the vocabulary file and returns the unit2id and id2unit mappings.

Parameters
  • vocab_path (str) – csv file with character labels

  • encoding (str) – encoding method

Returns: unit2id, id2unit
  • unit2id (dict): unit2id[unit] = id

  • id2unit (dict): id2unit[id] = unit

class openspeech.tokenizers.ksponspeech.grapheme.KsponSpeechGraphemeTokenizerConfigs(sos_token: str = '<sos>', eos_token: str = '<eos>', pad_token: str = '<pad>', blank_token: str = '<blank>', encoding: str = 'utf-8', unit: str = 'kspon_grapheme', vocab_path: str = '../../../aihub_labels.csv')[source]
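
To make the unit difference concrete: grapheme (jamo) units split each Hangul syllable into its consonant and vowel components, whereas character units keep whole syllables. The snippet below uses plain Unicode NFD decomposition to illustrate what such units look like; it is not the library's own preprocessing routine.

    import unicodedata

    text = "안녕하세요"

    # NFD decomposition turns each precomposed syllable into conjoining jamo,
    # which is roughly the granularity a grapheme-unit tokenizer operates on.
    jamo = unicodedata.normalize("NFD", text)

    print(list(text))    # character units: ['안', '녕', '하', '세', '요']
    print(list(jamo))    # grapheme (jamo) units: ['ᄋ', 'ᅡ', 'ᆫ', 'ᄂ', ...]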

LibriSpeech Character Tokenizer

class openspeech.tokenizers.librispeech.character.LibriSpeechCharacterTokenizer(configs: omegaconf.dictconfig.DictConfig)[source]

Character-unit tokenizer for the LibriSpeech dataset.

Parameters

configs (DictConfig) – configuration set.

decode(labels)[source]

Converts a sequence of label indices to a string (numbers => characters).

Parameters

labels (numpy.ndarray) – sequence of label indices

Returns: sentence
  • sentence (str or list): the decoded sentence, or a list of sentences when labels contains multiple label sequences

load_vocab(vocab_path, encoding='utf-8')[source]

Loads the vocabulary file and returns the unit2id and id2unit mappings.

Parameters
  • vocab_path (str) – csv file with character labels

  • encoding (str) – encoding method

Returns: unit2id, id2unit
  • unit2id (dict): unit2id[unit] = id

  • id2unit (dict): id2unit[id] = unit

class openspeech.tokenizers.librispeech.character.LibriSpeechCharacterTokenizerConfigs(sos_token: str = '<sos>', eos_token: str = '<eos>', pad_token: str = '<pad>', blank_token: str = '<blank>', encoding: str = 'utf-8', unit: str = 'libri_character', vocab_path: str = '../../../LibriSpeech/libri_labels.csv')[source]
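
The defaults above (the relative vocab_path in particular) usually need to be overridden before the config is handed to the tokenizer. A small sketch with OmegaConf; the replacement path is a placeholder for wherever the LibriSpeech preprocessing step wrote libri_labels.csv.

    from omegaconf import OmegaConf

    from openspeech.tokenizers.librispeech.character import (
        LibriSpeechCharacterTokenizerConfigs,
    )

    cfg = OmegaConf.structured(LibriSpeechCharacterTokenizerConfigs())

    # Override the relative default with an absolute path on this machine.
    cfg = OmegaConf.merge(cfg, OmegaConf.from_dotlist([
        "vocab_path=/data/LibriSpeech/libri_labels.csv",
    ]))

    print(cfg.unit, cfg.vocab_path)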

LibriSpeech Subword Tokenizer

class openspeech.tokenizers.librispeech.subword.LibriSpeechSubwordTokenizer(configs: omegaconf.dictconfig.DictConfig)[source]

Subword-unit tokenizer for the LibriSpeech dataset.

Parameters

configs (DictConfig) – configuration set.

class openspeech.tokenizers.librispeech.subword.LibriSpeechSubwordTokenizerConfigs(sos_token: str = '<s>', eos_token: str = '</s>', pad_token: str = '<pad>', blank_token: str = '<blank>', encoding: str = 'utf-8', unit: str = 'libri_subword', vocab_size: int = 5000, vocab_path: str = '../../../LibriSpeech/')[source]
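
A quick way to compare the two subword configs (different sos/eos tokens, vocab sizes, and path fields) is to dump the structured defaults to YAML; nothing below is specific to the library beyond the imported config classes.

    from omegaconf import OmegaConf

    from openspeech.tokenizers.ksponspeech.subword import KsponSpeechSubwordTokenizerConfigs
    from openspeech.tokenizers.librispeech.subword import LibriSpeechSubwordTokenizerConfigs

    for cfg_cls in (KsponSpeechSubwordTokenizerConfigs, LibriSpeechSubwordTokenizerConfigs):
        print(cfg_cls.__name__)
        print(OmegaConf.to_yaml(OmegaConf.structured(cfg_cls())))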