Tokenizer files missing

#2
by munish0838 - opened

The tokenizer files are missing from the model repo.

Same issue here:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/Ubuntu/miniconda3/envs/ipt/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 2094, in from_pretrained
    raise EnvironmentError(
OSError: Can't load tokenizer for 'instruction-pretrain/medicine-Llama3-8B'. If you were trying to load it from 'https://huggingface.co./models', make sure you don't have a local directory with the same name. Otherwise, make sure 'instruction-pretrain/medicine-Llama3-8B' is the correct path to a directory containing all relevant files for a LlamaTokenizerFast tokenizer.

Thanks, we've uploaded the tokenizer files.
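
For anyone who hit this earlier, a minimal sketch to confirm the fix, assuming the transformers package is installed and using the repo ID from the traceback above:

```python
# Minimal check: the tokenizer should now load from the Hub without errors.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("instruction-pretrain/medicine-Llama3-8B")
print(tokenizer("Hello, world!"))  # prints input_ids and attention_mask if loading succeeded
```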

Thanks, it works now. I made an installation video for this model: https://youtu.be/r8p8LxMAHcE

Thanks so much for this! 💗
