Failed to load model?
#7 opened by yamikumods
Hi, thank you for sharing the imat family of this model.
I tried the iq3_m and iq3_s variants with the latest Noeda's fork of llama.cpp.
I ran "./server -m " for both variants and got the same error reporting a tensor count mismatch:
llama_model_load: error loading model: done_getting_tensors: wrong number of tensors; expected 642, got 514
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '../../models/ggml-c4ai-command-r-plus-104b-iq3_m.gguf'
{"tid":"0x20126fac0","timestamp":1712544737,"level":"ERR","function":"load_model","line":681,"msg":"unable to load model","model":"../../models/ggml-c4ai-command-r-plus-104b-iq3_m.gguf"}
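Not part of the original report, but for anyone hitting the same error: the "expected 642, got 514" message compares the tensor count the loader expects against the count recorded in the GGUF file's own header, so you can check what the file actually claims without loading it. A minimal sketch, assuming only the published GGUF header layout (4-byte `GGUF` magic, uint32 version, uint64 tensor count, uint64 metadata KV count, little-endian); the `gguf_tensor_count` helper name is mine, not from llama.cpp:

```python
import struct

def gguf_tensor_count(path):
    """Read the version and tensor count from a GGUF file header.

    GGUF v2/v3 header layout (little-endian):
      4-byte magic b'GGUF', uint32 version,
      uint64 tensor_count, uint64 metadata KV count.
    """
    with open(path, "rb") as f:
        header = f.read(24)  # fixed-size prefix before the metadata KVs
    if len(header) < 24:
        raise ValueError("file too short to be a GGUF file")
    magic, version, n_tensors, n_kv = struct.unpack("<4sIQQ", header)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file (bad magic)")
    return version, n_tensors
```

If the count in the file matches what a current llama.cpp expects but the fork reports fewer, the fork's loader is likely behind the quant's conversion code rather than the file being corrupt.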
My environment is a Mac Studio M2 Max with 64GB.
Does anybody have the same problem?
Best regards.
Sorry, I closed this as a duplicate.
yamikumods changed discussion status to closed