How to use the tokenizers.models.WordPiece.from_files function in tokenizers

To help you get started, we've selected a few tokenizers examples based on popular ways the library is used in public projects.
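Before the project example below, here is a minimal standalone sketch of the call itself. It assumes an older tokenizers release in which WordPiece.from_files is still available (more recent versions expose WordPiece.from_file and the plain WordPiece(...) constructor instead); the vocab path and the example sentence are placeholders.

import tokenizers as tk

vocab_file = "bert-base-uncased-vocab.txt"  # placeholder: path to a BERT-style vocab file

# Build a WordPiece model from the vocab file and wrap it in a Tokenizer.
tokenizer = tk.Tokenizer(tk.models.WordPiece.from_files(vocab_file, unk_token="[UNK]"))

# Encode a sentence; attribute names on the returned Encoding may differ slightly across old releases.
encoding = tokenizer.encode("Hello, WordPiece!")
print(encoding.tokens)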


huggingface/transformers: src/transformers/tokenization_bert.py (view on GitHub)
        pad_to_max_length=False,
        stride=0,
        truncation_strategy="longest_first",
        add_special_tokens=True,
        **kwargs
    ):
        super(BertTokenizerFast, self).__init__(
            unk_token=unk_token,
            sep_token=sep_token,
            pad_token=pad_token,
            cls_token=cls_token,
            mask_token=mask_token,
            **kwargs
        )

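        # Wrap a WordPiece model loaded from the vocab file in a fast Rust-backed Tokenizer.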
        self._tokenizer = tk.Tokenizer(tk.models.WordPiece.from_files(vocab_file, unk_token=unk_token))
        self._update_special_tokens()
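        # Configure BERT-style pre-tokenization (lower-casing, Chinese-character handling, never-split list).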
        self._tokenizer.with_pre_tokenizer(
            tk.pre_tokenizers.BertPreTokenizer.new(
                do_basic_tokenize=do_basic_tokenize,
                do_lower_case=do_lower_case,
                tokenize_chinese_chars=tokenize_chinese_chars,
                never_split=never_split if never_split is not None else [],
            )
        )
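        # Decode WordPiece tokens back into text by merging the ## continuation pieces.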
        self._tokenizer.with_decoder(tk.decoders.WordPiece.new())

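        # Optionally add the [CLS]/[SEP] special tokens around encoded sequences.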
        if add_special_tokens:
            self._tokenizer.with_post_processor(
                tk.processors.BertProcessing.new(
                    (sep_token, self._tokenizer.token_to_id(sep_token)),
                    (cls_token, self._tokenizer.token_to_id(cls_token)),
                )
            )
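Note that from_files was removed in later tokenizers releases. On a recent version, a rough equivalent of the setup above might look like the sketch below; the names follow the current attribute-style API (WordPiece.from_file, direct assignment of pre_tokenizer, decoder, and post_processor) rather than the older .new()/with_* style, so verify them against the version you have installed.

from tokenizers import Tokenizer, decoders, models, pre_tokenizers, processors

# Load the WordPiece vocab (placeholder path) and configure a BERT-style pipeline.
tokenizer = Tokenizer(models.WordPiece.from_file("vocab.txt", unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.BertPreTokenizer()
tokenizer.decoder = decoders.WordPiece()
tokenizer.post_processor = processors.BertProcessing(
    ("[SEP]", tokenizer.token_to_id("[SEP]")),
    ("[CLS]", tokenizer.token_to_id("[CLS]")),
)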