wav2vec2-large-xlsr-53-faroese-100h / tokenizer_config.json
carlosdanielhernandezmena
Uploading the first 7 files needed to test the model: config.json, preprocessor_config.json, pytorch_model.bin, special_tokens_map.json, tokenizer_config.json, training_args.bin and vocab.json
86a9a20
217 Bytes
{
  "unk_token": "[UNK]",
  "bos_token": "<s>",
  "eos_token": "</s>",
  "pad_token": "[PAD]",
  "do_lower_case": false,
  "word_delimiter_token": "|",
  "replace_word_delimiter_char": " ",
  "tokenizer_class": "Wav2Vec2CTCTokenizer"
}
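The fields above configure a `Wav2Vec2CTCTokenizer`: after CTC collapse, character tokens are joined, the word delimiter `|` becomes a space (`replace_word_delimiter_char`), and pad/unk tokens are dropped. The stdlib-only sketch below illustrates that final joining step with this config; it is not the actual transformers implementation, and the `tokens_to_text` helper is a hypothetical name for illustration.

```python
import json

# The tokenizer_config.json contents shown above.
config = json.loads(
    '{"unk_token": "[UNK]", "bos_token": "<s>", "eos_token": "</s>", '
    '"pad_token": "[PAD]", "do_lower_case": false, '
    '"word_delimiter_token": "|", "replace_word_delimiter_char": " ", '
    '"tokenizer_class": "Wav2Vec2CTCTokenizer"}'
)

def tokens_to_text(tokens, cfg):
    """Illustrative only: join character tokens, mapping the word
    delimiter ("|") to replace_word_delimiter_char (" ") and
    skipping pad/unk tokens, as a CTC tokenizer does after
    collapsing repeats and blanks."""
    out = []
    for t in tokens:
        if t in (cfg["pad_token"], cfg["unk_token"]):
            continue
        out.append(cfg["replace_word_delimiter_char"]
                   if t == cfg["word_delimiter_token"] else t)
    return "".join(out)

print(tokens_to_text(["g", "ó", "ð", "|", "d", "a", "g"], config))  # góð dag
```

In the real pipeline, `transformers.Wav2Vec2CTCTokenizer` reads this file (together with vocab.json) when the model is loaded, so these fields rarely need to be touched by hand.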