Model_load error using llama.cpp

#10
by jxue005 - opened

So I first did a git clone to download the model repository to ./models/phi-2-GGUF, but when I tried to run the following command, it gave me an error:

./main \
-m ./models/phi-2-GGUF/phi-2.Q8_0.gguf \
-f test.txt \
-s 1 --grp-attn-n 4 --grp-attn-w 512 --temp 0 --repeat_penalty 1 --no-penalize-nl
Log start
main: build = 1894 (5c99960)
main: built with Apple clang version 15.0.0 (clang-1500.1.0.2.5) for arm64-apple-darwin23.2.0
main: seed  = 1
gguf_init_from_file: invalid magic characters 'vers'
llama_model_load: error loading model: llama_model_loader: failed to load model from ./models/phi-2-GGUF/phi-2.Q8_0.gguf

llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model './models/phi-2-GGUF/phi-2.Q8_0.gguf'
main: error: unable to load model

I also tried to open the model using CTransformers and got the same issue.


I found the reason: I didn't actually download the model weights to my local directory. I was only running !git clone, which fetched the Git LFS pointer file rather than the full .gguf weights; that is why gguf_init_from_file reports the magic characters 'vers' (an LFS pointer file starts with "version https://git-lfs.github.com/spec/v1"). The issue was solved when I downloaded the model manually.

Check your model file size to make sure the weights were actually downloaded to your directory.
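For anyone hitting the same error, here is a minimal sketch of how to verify what was cloned; it assumes the repo was cloned into ./models/phi-2-GGUF as in the command above. A real GGUF file begins with the magic bytes 'GGUF', while a Git LFS pointer is a tiny text file that begins with 'version'.

# Check the file size: an LFS pointer is only a few hundred bytes, the real Q8_0 file is several GB
ls -lh ./models/phi-2-GGUF/phi-2.Q8_0.gguf

# Inspect the first bytes: 'GGUF' means real weights, 'vers' means an LFS pointer
head -c 4 ./models/phi-2-GGUF/phi-2.Q8_0.gguf; echo

# If it is a pointer, fetch the real file with Git LFS from inside the cloned repo
cd ./models/phi-2-GGUF
git lfs install
git lfs pull --include "phi-2.Q8_0.gguf"

Alternatively, downloading just the single .gguf file from the repository's Files page avoids pulling the whole repo.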


jxue005 changed discussion status to closed
