Issue: AutoTokenizer fails to load meta-llama/Llama-3.3-70B-Instruct ("data did not match any variant of untagged enum ModelWrapper")
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-3.3-70B-Instruct"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_id, token="token")
python main.py
tokenizer_config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████| 55.4k/55.4k [00:00<00:00, 219MB/s]
tokenizer.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████| 17.2M/17.2M [00:00<00:00, 255MB/s]
special_tokens_map.json: 100%|█████████████████████████████████████████████████████████████████████████████████████| 68.0/68.0 [00:00<00:00, 864kB/s]
Traceback (most recent call last):
File "/home/paperspace/LLM_MODELS/test.py", line 6, in <module>
tokenizer = AutoTokenizer.from_pretrained(model_id, token="hf_***")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.local/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 897, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2271, in from_pretrained
return cls._from_pretrained(
^^^^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2505, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/paperspace/.local/lib/python3.11/site-packages/transformers/tokenization_utils_fast.py", line 115, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception: data did not match any variant of untagged enum ModelWrapper at line 1251003 column 3
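In case versions matter here, this is a minimal sketch of how I would check my environment; I'm only assuming (not certain) that the tokenizer.json parsing is handled by the tokenizers package that transformers uses under the hood:

import transformers
import tokenizers

# Print the installed versions of transformers and tokenizers.
# Assumption: an older tokenizers release might not be able to parse a newer
# tokenizer.json format, which could be related to the ModelWrapper error.
print("transformers:", transformers.__version__)
print("tokenizers:", tokenizers.__version__)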
How can I fix this?