bert-base-buddhist-sanskrit / tokenizer_config.json
matej.martinc · First model version · 3ab0c42
{"special_tokens_map_file": "models_reference_all_tokens/special_tokens_map.json", "name_or_path": "models_reference_all_tokens", "tokenizer_class": "PreTrainedTokenizerFast"}