How to use gtts.tokenizer.symbols.TONE_MARKS in gTTS

To help you get started, we’ve selected a few gTTS examples, based on popular ways it is used in public projects.
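For context, TONE_MARKS is a module-level string constant in gtts.tokenizer.symbols, not a callable: it lists the tone-modifying punctuation characters (the ASCII and full-width question and exclamation marks in current releases). A minimal sketch to inspect it, assuming gTTS is installed:

from gtts.tokenizer import symbols

# TONE_MARKS is a plain string of tone-modifying punctuation;
# the exact contents may vary between gTTS versions.
print(symbols.TONE_MARKS)   # e.g. '?!？！'
print(symbols.ALL_PUNC)     # the larger set used by other_punctuation() below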


GitHub: pndurette/gTTS · gtts/tokenizer/tokenizer_cases.py
def other_punctuation():
    """Match other punctuation.

    Match other punctuation to split on; punctuation that naturally
    inserts a break in speech.

    """
    punc = ''.join((
        set(symbols.ALL_PUNC) -
        set(symbols.TONE_MARKS) -
        set(symbols.PERIOD_COMMA)))
    return RegexBuilder(
        pattern_args=punc,
        pattern_func=lambda x: u"{}".format(x)).regex
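A hedged usage sketch of the regex this case returns (the sample string is hypothetical; the exact character set, and therefore the split points, depends on the gTTS version):

from gtts.tokenizer import tokenizer_cases

regex = tokenizer_cases.other_punctuation()

# Splits on "breathing" punctuation such as ';', '(' or '…',
# but not on tone marks, periods or commas.
print(regex.split("first; second; third"))
# -> ['first', ' second', ' third']  (approximate)
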
GitHub: pndurette/gTTS · gtts/tokenizer/pre_processors.py
def tone_marks(text):
    """Add a space after tone-modifying punctuation.

    Because the `tone_marks` tokenizer case will split after a tone-modifying
    punctuation mark, make sure there's whitespace after.

    """
    return PreProcessorRegex(
        search_args=symbols.TONE_MARKS,
        search_func=lambda x: u"(?<={})".format(x),
        repl=' ').run(text)
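A short usage sketch of this pre-processor (the input string is hypothetical; assuming gTTS is installed):

from gtts.tokenizer import pre_processors

# The input has no whitespace after its tone marks; the pre-processor
# inserts a space after each one so the tokenizer can split cleanly later.
print(pre_processors.tone_marks("Lorem?Ipsum!Dolor"))
# -> 'Lorem? Ipsum! Dolor'
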
GitHub: luoliyan/chinese-support-redux · chinese/lib/gtts/tokenizer/tokenizer_cases.py
def tone_marks():
    """Keep tone-modifying punctuation by matching following character.

    Assumes the `tone_marks` pre-processor was run for cases where there might
    not be any space after a tone-modifying punctuation mark.
    """
    return RegexBuilder(
        pattern_args=symbols.TONE_MARKS,
        pattern_func=lambda x: u"(?<={}).".format(x)).regex
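A sketch of how the resulting regex behaves on its own (the sample text is hypothetical; inside gTTS this regex is normally combined with the other cases by the Tokenizer class, as in the final example below):

from gtts.tokenizer import tokenizer_cases

regex = tokenizer_cases.tone_marks()

# The pattern matches the single character *after* each tone mark
# (typically the space added by the pre-processor above), so splitting
# on it keeps the mark attached to the preceding fragment.
print(regex.split("Lorem? Ipsum! Dolor"))
# -> ['Lorem?', 'Ipsum!', 'Dolor']
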
GitHub: luoliyan/chinese-support-redux · chinese/lib/gtts/tokenizer/tokenizer_cases.py
def other_punctuation():
    """Match other punctuation.

    Match other punctuation to split on; punctuation that naturally
    inserts a break in speech.

    """
    punc = ''.join(
        set(symbols.ALL_PUNC) -
        set(symbols.TONE_MARKS) -
        set(symbols.PERIOD_COMMA) -
        set(symbols.COLON))
    return RegexBuilder(
        pattern_args=punc,
        pattern_func=lambda x: u"{}".format(x)).regex
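
Putting the pieces together, a minimal end-to-end sketch (assuming Tokenizer, pre_processors and tokenizer_cases are importable from gtts.tokenizer as in current releases; the sample Chinese string is hypothetical and the exact token list may vary by version):

from gtts.tokenizer import Tokenizer, pre_processors, tokenizer_cases

text = "它们好吗？它们很好！接下来呢"

# 1. Ensure there is whitespace after every tone mark so the
#    tone_marks tokenizer case has a character to split on.
text = pre_processors.tone_marks(text)

# 2. Split into speech-friendly fragments; the tone marks stay
#    attached to the fragment they modify.
tokenize = Tokenizer([
    tokenizer_cases.tone_marks,
    tokenizer_cases.period_comma,
    tokenizer_cases.other_punctuation,
]).run
print(tokenize(text))
# -> ['它们好吗？', '它们很好！', '接下来呢']  (approximate)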