How did you convert it?

#1 opened by win10

How did you convert it? No matter how I converted it, an error occurred:

Failed to load model

llama.cpp error: 'error loading model vocabulary: cannot find tokenizer merges in model file'

Interesting :S Can you try checking out the exact release I used? Did you download from Meta's repo?

Yes, I used the new version of llama.cpp. I also tried the exact version you are using.

It is a fine-tune of the unofficial unsloth/Llama-3.2-3B-Instruct model, and neither model seems to work:
win10/Llama-3.2-3B-Instruct-24-9-29

Please wait... Embarrassingly, I haven't tested replacing the tokenizer with the official one yet.

After changing the vocabulary, it worked successfully. I sincerely apologize for the trouble and thank you for taking the time to clear up my doubts.

No problem at all! I just replied before going back to sleep, so I hadn't tried anything yet. I'm glad you resolved it :D
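For anyone else hitting the "cannot find tokenizer merges" error, the fix described above amounts to replacing the fine-tune's tokenizer files with the official ones before converting. Below is a minimal sketch, assuming access to the gated meta-llama/Llama-3.2-3B-Instruct repo; the local directory name is hypothetical and not the exact setup used in this thread:

```python
# Sketch: overwrite a fine-tune's tokenizer files with Meta's official ones,
# then re-run the GGUF conversion. Paths/repo IDs are assumptions.
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download

OFFICIAL_REPO = "meta-llama/Llama-3.2-3B-Instruct"   # gated: requires accepted license + HF token
MODEL_DIR = Path("./Llama-3.2-3B-Instruct-24-9-29")  # hypothetical local copy of the fine-tune

# Tokenizer files the conversion reads the vocabulary (and merges) from.
for name in ("tokenizer.json", "tokenizer_config.json", "special_tokens_map.json"):
    official = hf_hub_download(repo_id=OFFICIAL_REPO, filename=name)
    shutil.copyfile(official, MODEL_DIR / name)
    print(f"replaced {name}")

# Then re-run the llama.cpp conversion, e.g.:
#   python convert_hf_to_gguf.py ./Llama-3.2-3B-Instruct-24-9-29 --outtype f16
```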

I'm getting the exact same error. I went ahead and copied the base model's tokenizer files into my merged model, and then it worked. Why is the merge process changing those files?
I'm using LLaMA-Factory, and the safetensors directory it produces works in oobabooga, but I have to do this with the base model's files to convert to GGUF. Curious... thanks! A rough sketch of the copy step is below.
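In case the workaround helps others, the copy step can be scripted in a few lines. A minimal sketch, with purely hypothetical directory names:

```python
# Sketch: copy the base model's tokenizer files into the LLaMA-Factory merge
# output before converting to GGUF. Directory names are hypothetical.
import shutil
from pathlib import Path

BASE_DIR = Path("./Llama-3.2-3B-Instruct")         # base model checkout
MERGED_DIR = Path("./llamafactory-merged-model")   # LLaMA-Factory merge/export output

# Copy over the tokenizer files that the merged output is missing or has altered.
for name in ("tokenizer.json", "tokenizer_config.json", "special_tokens_map.json"):
    src = BASE_DIR / name
    if src.exists():
        shutil.copyfile(src, MERGED_DIR / name)
        print(f"copied {name}")

# Afterwards the usual conversion should find the merges:
#   python convert_hf_to_gguf.py ./llamafactory-merged-model --outtype f16
```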

Use Meta's official tokenizer as the override.
