bert-base-japanese / tokenizer_config.json
{"do_lower_case": false, "subword_tokenizer_type": "wordpiece", "word_tokenizer_type": "mecab", "model_max_length": 512}