llama.cpp error...could you kindly regenerate?
#2 · by aaha · opened
Hi @MaziyarPanahi, thanks for these quants. While looking for a good model that adheres to RAG context, I came across this one, but I'm running into the following error:

"llama.cpp error: 'error loading model hyperparameters: key not found in model: phi3.attention.sliding_window'"

I think llama.cpp released an update to support this, but since this quant was generated earlier, it doesn't work. Could you kindly look into this and share an updated version?

Also, on a different topic: do you have any suggestions for a sub-3B, RAG-friendly, low-hallucination chat/instruct model I could try? I am currently trying Gemma 2 2B, Qwen2 1.5B, and this model. Thanks!
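For what it's worth, re-converting from the original HF weights with a current llama.cpp checkout should write the newer `phi3.attention.sliding_window` metadata key into the GGUF. A rough sketch of the regeneration steps (the model path, output names, and Q4_K_M quant type are placeholders, not from this thread):

```shell
# Sketch only: regenerate the quant with an up-to-date llama.cpp checkout.
# Paths and the quant type below are placeholder assumptions.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt

# Convert the original HF model to a fresh f16 GGUF,
# picking up the metadata keys newer llama.cpp expects.
python convert_hf_to_gguf.py /path/to/hf-model --outfile model-f16.gguf --outtype f16

# Build the tools, then quantize (Q4_K_M as an example target).
cmake -B build && cmake --build build --config Release
./build/bin/llama-quantize model-f16.gguf model-Q4_K_M.gguf Q4_K_M
```

Loading the resulting GGUF with the same up-to-date build should then find the expected hyperparameter keys.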